00:00:00.001 Started by upstream project "autotest-spdk-v24.01-LTS-vs-dpdk-v23.11" build number 600 00:00:00.001 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3266 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.001 Started by timer 00:00:00.160 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.161 The recommended git tool is: git 00:00:00.162 using credential 00000000-0000-0000-0000-000000000002 00:00:00.163 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.205 Fetching changes from the remote Git repository 00:00:00.207 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.243 Using shallow fetch with depth 1 00:00:00.243 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.243 > git --version # timeout=10 00:00:00.271 > git --version # 'git version 2.39.2' 00:00:00.271 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.282 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.282 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:07.591 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:07.602 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:07.613 Checking out Revision 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d (FETCH_HEAD) 00:00:07.613 > git config core.sparsecheckout # timeout=10 00:00:07.624 > git read-tree -mu HEAD # timeout=10 00:00:07.641 > git checkout -f 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d # timeout=5 00:00:07.659 Commit message: "inventory: add WCP3 to free inventory" 00:00:07.659 > git rev-list --no-walk 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d # timeout=10 00:00:07.736 [Pipeline] Start of Pipeline 00:00:07.751 [Pipeline] library 00:00:07.753 Loading library shm_lib@master 00:00:07.753 Library shm_lib@master is cached. Copying from home. 00:00:07.768 [Pipeline] node 00:00:07.776 Running on GP11 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:07.779 [Pipeline] { 00:00:07.791 [Pipeline] catchError 00:00:07.792 [Pipeline] { 00:00:07.807 [Pipeline] wrap 00:00:07.817 [Pipeline] { 00:00:07.826 [Pipeline] stage 00:00:07.828 [Pipeline] { (Prologue) 00:00:08.013 [Pipeline] sh 00:00:08.302 + logger -p user.info -t JENKINS-CI 00:00:08.325 [Pipeline] echo 00:00:08.327 Node: GP11 00:00:08.336 [Pipeline] sh 00:00:08.648 [Pipeline] setCustomBuildProperty 00:00:08.662 [Pipeline] echo 00:00:08.664 Cleanup processes 00:00:08.671 [Pipeline] sh 00:00:08.960 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:08.960 2171582 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:08.978 [Pipeline] sh 00:00:09.273 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:09.273 ++ grep -v 'sudo pgrep' 00:00:09.273 ++ awk '{print $1}' 00:00:09.273 + sudo kill -9 00:00:09.273 + true 00:00:09.292 [Pipeline] cleanWs 00:00:09.303 [WS-CLEANUP] Deleting project workspace... 00:00:09.304 [WS-CLEANUP] Deferred wipeout is used... 
00:00:09.311 [WS-CLEANUP] done 00:00:09.317 [Pipeline] setCustomBuildProperty 00:00:09.339 [Pipeline] sh 00:00:09.626 + sudo git config --global --replace-all safe.directory '*' 00:00:09.757 [Pipeline] httpRequest 00:00:09.794 [Pipeline] echo 00:00:09.796 Sorcerer 10.211.164.101 is alive 00:00:09.805 [Pipeline] httpRequest 00:00:09.811 HttpMethod: GET 00:00:09.812 URL: http://10.211.164.101/packages/jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz 00:00:09.813 Sending request to url: http://10.211.164.101/packages/jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz 00:00:09.830 Response Code: HTTP/1.1 200 OK 00:00:09.831 Success: Status code 200 is in the accepted range: 200,404 00:00:09.831 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz 00:00:14.804 [Pipeline] sh 00:00:15.089 + tar --no-same-owner -xf jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz 00:00:15.103 [Pipeline] httpRequest 00:00:15.132 [Pipeline] echo 00:00:15.133 Sorcerer 10.211.164.101 is alive 00:00:15.139 [Pipeline] httpRequest 00:00:15.143 HttpMethod: GET 00:00:15.143 URL: http://10.211.164.101/packages/spdk_4b94202c659be49093c32ec1d2d75efdacf00691.tar.gz 00:00:15.144 Sending request to url: http://10.211.164.101/packages/spdk_4b94202c659be49093c32ec1d2d75efdacf00691.tar.gz 00:00:15.162 Response Code: HTTP/1.1 200 OK 00:00:15.162 Success: Status code 200 is in the accepted range: 200,404 00:00:15.163 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_4b94202c659be49093c32ec1d2d75efdacf00691.tar.gz 00:01:13.404 [Pipeline] sh 00:01:13.696 + tar --no-same-owner -xf spdk_4b94202c659be49093c32ec1d2d75efdacf00691.tar.gz 00:01:16.254 [Pipeline] sh 00:01:16.543 + git -C spdk log --oneline -n5 00:01:16.544 4b94202c6 lib/event: Bug fix for framework_set_scheduler 00:01:16.544 507e9ba07 nvme: add lock_depth for ctrlr_lock 00:01:16.544 62fda7b5f nvme: check pthread_mutex_destroy() return value 00:01:16.544 e03c164a1 nvme: add nvme_ctrlr_lock 00:01:16.544 d61f89a86 nvme/cuse: Add ctrlr_lock for cuse register and unregister 00:01:16.564 [Pipeline] withCredentials 00:01:16.576 > git --version # timeout=10 00:01:16.589 > git --version # 'git version 2.39.2' 00:01:16.607 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:01:16.610 [Pipeline] { 00:01:16.620 [Pipeline] retry 00:01:16.622 [Pipeline] { 00:01:16.640 [Pipeline] sh 00:01:16.924 + git ls-remote http://dpdk.org/git/dpdk-stable v23.11 00:01:18.321 [Pipeline] } 00:01:18.344 [Pipeline] // retry 00:01:18.350 [Pipeline] } 00:01:18.371 [Pipeline] // withCredentials 00:01:18.382 [Pipeline] httpRequest 00:01:18.407 [Pipeline] echo 00:01:18.409 Sorcerer 10.211.164.101 is alive 00:01:18.418 [Pipeline] httpRequest 00:01:18.423 HttpMethod: GET 00:01:18.424 URL: http://10.211.164.101/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:01:18.424 Sending request to url: http://10.211.164.101/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:01:18.427 Response Code: HTTP/1.1 200 OK 00:01:18.427 Success: Status code 200 is in the accepted range: 200,404 00:01:18.428 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:01:22.824 [Pipeline] sh 00:01:23.101 + tar --no-same-owner -xf dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:01:25.010 [Pipeline] sh 00:01:25.290 + git -C dpdk log --oneline -n5 00:01:25.290 eeb0605f11 version: 23.11.0 00:01:25.290 238778122a doc: update 
release notes for 23.11 00:01:25.290 46aa6b3cfc doc: fix description of RSS features 00:01:25.290 dd88f51a57 devtools: forbid DPDK API in cnxk base driver 00:01:25.290 7e421ae345 devtools: support skipping forbid rule check 00:01:25.301 [Pipeline] } 00:01:25.316 [Pipeline] // stage 00:01:25.322 [Pipeline] stage 00:01:25.323 [Pipeline] { (Prepare) 00:01:25.337 [Pipeline] writeFile 00:01:25.347 [Pipeline] sh 00:01:25.624 + logger -p user.info -t JENKINS-CI 00:01:25.637 [Pipeline] sh 00:01:25.920 + logger -p user.info -t JENKINS-CI 00:01:25.932 [Pipeline] sh 00:01:26.210 + cat autorun-spdk.conf 00:01:26.210 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:26.210 SPDK_TEST_NVMF=1 00:01:26.210 SPDK_TEST_NVME_CLI=1 00:01:26.210 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:26.210 SPDK_TEST_NVMF_NICS=e810 00:01:26.210 SPDK_TEST_VFIOUSER=1 00:01:26.210 SPDK_RUN_UBSAN=1 00:01:26.210 NET_TYPE=phy 00:01:26.210 SPDK_TEST_NATIVE_DPDK=v23.11 00:01:26.210 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:26.218 RUN_NIGHTLY=1 00:01:26.222 [Pipeline] readFile 00:01:26.247 [Pipeline] withEnv 00:01:26.248 [Pipeline] { 00:01:26.262 [Pipeline] sh 00:01:26.546 + set -ex 00:01:26.546 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:01:26.546 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:26.546 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:26.546 ++ SPDK_TEST_NVMF=1 00:01:26.546 ++ SPDK_TEST_NVME_CLI=1 00:01:26.546 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:26.546 ++ SPDK_TEST_NVMF_NICS=e810 00:01:26.546 ++ SPDK_TEST_VFIOUSER=1 00:01:26.546 ++ SPDK_RUN_UBSAN=1 00:01:26.546 ++ NET_TYPE=phy 00:01:26.546 ++ SPDK_TEST_NATIVE_DPDK=v23.11 00:01:26.546 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:26.546 ++ RUN_NIGHTLY=1 00:01:26.546 + case $SPDK_TEST_NVMF_NICS in 00:01:26.546 + DRIVERS=ice 00:01:26.546 + [[ tcp == \r\d\m\a ]] 00:01:26.546 + [[ -n ice ]] 00:01:26.546 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:01:26.546 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:01:26.546 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:01:26.546 rmmod: ERROR: Module irdma is not currently loaded 00:01:26.546 rmmod: ERROR: Module i40iw is not currently loaded 00:01:26.546 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:01:26.546 + true 00:01:26.546 + for D in $DRIVERS 00:01:26.546 + sudo modprobe ice 00:01:26.546 + exit 0 00:01:26.557 [Pipeline] } 00:01:26.578 [Pipeline] // withEnv 00:01:26.583 [Pipeline] } 00:01:26.603 [Pipeline] // stage 00:01:26.614 [Pipeline] catchError 00:01:26.616 [Pipeline] { 00:01:26.629 [Pipeline] timeout 00:01:26.630 Timeout set to expire in 50 min 00:01:26.631 [Pipeline] { 00:01:26.646 [Pipeline] stage 00:01:26.648 [Pipeline] { (Tests) 00:01:26.664 [Pipeline] sh 00:01:26.947 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:26.947 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:26.947 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:26.947 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:01:26.947 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:26.947 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:26.947 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:01:26.947 + [[ ! 
-d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:01:26.947 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:01:26.947 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:01:26.947 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]]
00:01:26.947 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:26.947 + source /etc/os-release
00:01:26.947 ++ NAME='Fedora Linux'
00:01:26.947 ++ VERSION='38 (Cloud Edition)'
00:01:26.947 ++ ID=fedora
00:01:26.947 ++ VERSION_ID=38
00:01:26.947 ++ VERSION_CODENAME=
00:01:26.947 ++ PLATFORM_ID=platform:f38
00:01:26.947 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)'
00:01:26.947 ++ ANSI_COLOR='0;38;2;60;110;180'
00:01:26.947 ++ LOGO=fedora-logo-icon
00:01:26.947 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38
00:01:26.947 ++ HOME_URL=https://fedoraproject.org/
00:01:26.948 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/
00:01:26.948 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:01:26.948 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:01:26.948 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:01:26.948 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38
00:01:26.948 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:01:26.948 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38
00:01:26.948 ++ SUPPORT_END=2024-05-14
00:01:26.948 ++ VARIANT='Cloud Edition'
00:01:26.948 ++ VARIANT_ID=cloud
00:01:26.948 + uname -a
00:01:26.948 Linux spdk-gp-11 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux
00:01:26.948 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:01:27.885 Hugepages
00:01:27.885 node hugesize free / total
00:01:27.885 node0 1048576kB 0 / 0
00:01:27.885 node0 2048kB 0 / 0
00:01:27.885 node1 1048576kB 0 / 0
00:01:27.885 node1 2048kB 0 / 0
00:01:27.885
00:01:27.885 Type BDF Vendor Device NUMA Driver Device Block devices
00:01:27.885 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - -
00:01:27.885 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - -
00:01:27.885 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - -
00:01:27.885 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - -
00:01:27.885 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - -
00:01:27.885 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - -
00:01:27.885 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - -
00:01:27.885 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - -
00:01:27.885 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - -
00:01:27.885 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - -
00:01:27.886 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - -
00:01:27.886 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - -
00:01:27.886 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - -
00:01:27.886 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - -
00:01:27.886 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - -
00:01:27.886 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - -
00:01:27.886 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1
00:01:27.886 + rm -f /tmp/spdk-ld-path
00:01:27.886 + source autorun-spdk.conf
00:01:27.886 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:27.886 ++ SPDK_TEST_NVMF=1
00:01:27.886 ++ SPDK_TEST_NVME_CLI=1
00:01:27.886 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:27.886 ++ SPDK_TEST_NVMF_NICS=e810
00:01:27.886 ++ SPDK_TEST_VFIOUSER=1
00:01:27.886 ++ SPDK_RUN_UBSAN=1
00:01:27.886 ++ NET_TYPE=phy
00:01:27.886 ++ SPDK_TEST_NATIVE_DPDK=v23.11
00:01:27.886 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:01:27.886 ++ RUN_NIGHTLY=1
00:01:27.886 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:01:27.886 + [[ -n '' ]]
00:01:27.886 + sudo git config --global --add safe.directory
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:27.886 + for M in /var/spdk/build-*-manifest.txt 00:01:27.886 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:27.886 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:27.886 + for M in /var/spdk/build-*-manifest.txt 00:01:27.886 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:27.886 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:27.886 ++ uname 00:01:27.886 + [[ Linux == \L\i\n\u\x ]] 00:01:27.886 + sudo dmesg -T 00:01:27.886 + sudo dmesg --clear 00:01:27.886 + dmesg_pid=2172907 00:01:27.886 + [[ Fedora Linux == FreeBSD ]] 00:01:27.886 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:27.886 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:27.886 + sudo dmesg -Tw 00:01:27.886 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:27.886 + [[ -x /usr/src/fio-static/fio ]] 00:01:27.886 + export FIO_BIN=/usr/src/fio-static/fio 00:01:27.886 + FIO_BIN=/usr/src/fio-static/fio 00:01:27.886 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:27.886 + [[ ! -v VFIO_QEMU_BIN ]] 00:01:27.886 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:27.886 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:27.886 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:27.886 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:27.886 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:27.886 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:27.886 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:27.886 Test configuration: 00:01:27.886 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:27.886 SPDK_TEST_NVMF=1 00:01:27.886 SPDK_TEST_NVME_CLI=1 00:01:27.886 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:27.886 SPDK_TEST_NVMF_NICS=e810 00:01:27.886 SPDK_TEST_VFIOUSER=1 00:01:27.886 SPDK_RUN_UBSAN=1 00:01:27.886 NET_TYPE=phy 00:01:27.886 SPDK_TEST_NATIVE_DPDK=v23.11 00:01:27.886 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:27.886 RUN_NIGHTLY=1 03:33:46 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:01:27.886 03:33:46 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:27.886 03:33:46 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:27.886 03:33:46 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:27.886 03:33:46 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:27.886 03:33:46 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:27.886 03:33:46 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:27.886 03:33:46 -- paths/export.sh@5 -- $ export PATH 00:01:27.886 03:33:46 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:27.886 03:33:46 -- common/autobuild_common.sh@434 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:01:27.886 03:33:46 -- common/autobuild_common.sh@435 -- $ date +%s 00:01:27.886 03:33:46 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1720920826.XXXXXX 00:01:27.886 03:33:46 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1720920826.GRM7ct 00:01:27.886 03:33:46 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]] 00:01:27.886 03:33:46 -- common/autobuild_common.sh@441 -- $ '[' -n v23.11 ']' 00:01:27.886 03:33:46 -- common/autobuild_common.sh@442 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:27.886 03:33:46 -- common/autobuild_common.sh@442 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk' 00:01:27.886 03:33:46 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:01:27.886 03:33:46 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:01:27.886 03:33:46 -- common/autobuild_common.sh@451 -- $ get_config_params 00:01:27.886 03:33:46 -- common/autotest_common.sh@387 -- $ xtrace_disable 00:01:27.886 03:33:46 -- common/autotest_common.sh@10 -- $ set +x 00:01:28.147 03:33:46 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build' 00:01:28.147 03:33:46 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:28.147 03:33:46 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:28.147 03:33:46 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:28.147 03:33:46 -- spdk/autobuild.sh@16 -- $ date -u 00:01:28.147 Sun Jul 14 01:33:46 AM UTC 2024 00:01:28.147 03:33:46 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:28.147 LTS-59-g4b94202c6 00:01:28.147 03:33:46 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:28.147 03:33:46 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:28.147 03:33:46 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:28.147 03:33:46 -- common/autotest_common.sh@1077 -- $ '[' 3 -le 1 ']' 00:01:28.147 03:33:46 -- common/autotest_common.sh@1083 -- $ xtrace_disable 
00:01:28.147 03:33:46 -- common/autotest_common.sh@10 -- $ set +x 00:01:28.147 ************************************ 00:01:28.147 START TEST ubsan 00:01:28.147 ************************************ 00:01:28.147 03:33:46 -- common/autotest_common.sh@1104 -- $ echo 'using ubsan' 00:01:28.147 using ubsan 00:01:28.147 00:01:28.147 real 0m0.000s 00:01:28.147 user 0m0.000s 00:01:28.147 sys 0m0.000s 00:01:28.147 03:33:46 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:28.147 03:33:46 -- common/autotest_common.sh@10 -- $ set +x 00:01:28.147 ************************************ 00:01:28.147 END TEST ubsan 00:01:28.147 ************************************ 00:01:28.147 03:33:46 -- spdk/autobuild.sh@27 -- $ '[' -n v23.11 ']' 00:01:28.147 03:33:46 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:01:28.147 03:33:46 -- common/autobuild_common.sh@427 -- $ run_test build_native_dpdk _build_native_dpdk 00:01:28.147 03:33:46 -- common/autotest_common.sh@1077 -- $ '[' 2 -le 1 ']' 00:01:28.147 03:33:46 -- common/autotest_common.sh@1083 -- $ xtrace_disable 00:01:28.147 03:33:46 -- common/autotest_common.sh@10 -- $ set +x 00:01:28.147 ************************************ 00:01:28.147 START TEST build_native_dpdk 00:01:28.147 ************************************ 00:01:28.147 03:33:46 -- common/autotest_common.sh@1104 -- $ _build_native_dpdk 00:01:28.147 03:33:46 -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:01:28.147 03:33:46 -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:01:28.147 03:33:46 -- common/autobuild_common.sh@50 -- $ local compiler_version 00:01:28.147 03:33:46 -- common/autobuild_common.sh@51 -- $ local compiler 00:01:28.147 03:33:46 -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:01:28.147 03:33:46 -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:01:28.147 03:33:46 -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:01:28.147 03:33:46 -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:01:28.147 03:33:46 -- common/autobuild_common.sh@61 -- $ CC=gcc 00:01:28.147 03:33:46 -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:01:28.147 03:33:46 -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:01:28.147 03:33:46 -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:01:28.147 03:33:46 -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:01:28.147 03:33:46 -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:01:28.147 03:33:46 -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:28.147 03:33:46 -- common/autobuild_common.sh@71 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:28.147 03:33:46 -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:01:28.147 03:33:46 -- common/autobuild_common.sh@73 -- $ [[ ! 
-d /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk ]] 00:01:28.147 03:33:46 -- common/autobuild_common.sh@82 -- $ orgdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:28.147 03:33:46 -- common/autobuild_common.sh@83 -- $ git -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk log --oneline -n 5 00:01:28.147 eeb0605f11 version: 23.11.0 00:01:28.147 238778122a doc: update release notes for 23.11 00:01:28.147 46aa6b3cfc doc: fix description of RSS features 00:01:28.147 dd88f51a57 devtools: forbid DPDK API in cnxk base driver 00:01:28.147 7e421ae345 devtools: support skipping forbid rule check 00:01:28.147 03:33:46 -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:01:28.147 03:33:46 -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:01:28.147 03:33:46 -- common/autobuild_common.sh@87 -- $ dpdk_ver=23.11.0 00:01:28.147 03:33:46 -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:01:28.147 03:33:46 -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:01:28.147 03:33:46 -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:01:28.147 03:33:46 -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:01:28.147 03:33:46 -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:01:28.147 03:33:46 -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:01:28.147 03:33:46 -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base") 00:01:28.147 03:33:46 -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n 00:01:28.147 03:33:46 -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:01:28.147 03:33:46 -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:01:28.147 03:33:46 -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:01:28.147 03:33:46 -- common/autobuild_common.sh@167 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:01:28.147 03:33:46 -- common/autobuild_common.sh@168 -- $ uname -s 00:01:28.147 03:33:46 -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:01:28.147 03:33:46 -- common/autobuild_common.sh@169 -- $ lt 23.11.0 21.11.0 00:01:28.147 03:33:46 -- scripts/common.sh@372 -- $ cmp_versions 23.11.0 '<' 21.11.0 00:01:28.147 03:33:46 -- scripts/common.sh@332 -- $ local ver1 ver1_l 00:01:28.147 03:33:46 -- scripts/common.sh@333 -- $ local ver2 ver2_l 00:01:28.147 03:33:46 -- scripts/common.sh@335 -- $ IFS=.-: 00:01:28.147 03:33:46 -- scripts/common.sh@335 -- $ read -ra ver1 00:01:28.147 03:33:46 -- scripts/common.sh@336 -- $ IFS=.-: 00:01:28.147 03:33:46 -- scripts/common.sh@336 -- $ read -ra ver2 00:01:28.147 03:33:46 -- scripts/common.sh@337 -- $ local 'op=<' 00:01:28.147 03:33:46 -- scripts/common.sh@339 -- $ ver1_l=3 00:01:28.147 03:33:46 -- scripts/common.sh@340 -- $ ver2_l=3 00:01:28.147 03:33:46 -- scripts/common.sh@342 -- $ local lt=0 gt=0 eq=0 v 00:01:28.147 03:33:46 -- scripts/common.sh@343 -- $ case "$op" in 00:01:28.147 03:33:46 -- scripts/common.sh@344 -- $ : 1 00:01:28.147 03:33:46 -- scripts/common.sh@363 -- $ (( v = 0 )) 00:01:28.147 03:33:46 -- scripts/common.sh@363 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:01:28.147 03:33:46 -- scripts/common.sh@364 -- $ decimal 23 00:01:28.147 03:33:46 -- scripts/common.sh@352 -- $ local d=23 00:01:28.147 03:33:46 -- scripts/common.sh@353 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:01:28.147 03:33:46 -- scripts/common.sh@354 -- $ echo 23 00:01:28.147 03:33:46 -- scripts/common.sh@364 -- $ ver1[v]=23 00:01:28.147 03:33:46 -- scripts/common.sh@365 -- $ decimal 21 00:01:28.147 03:33:46 -- scripts/common.sh@352 -- $ local d=21 00:01:28.147 03:33:46 -- scripts/common.sh@353 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:01:28.147 03:33:46 -- scripts/common.sh@354 -- $ echo 21 00:01:28.147 03:33:46 -- scripts/common.sh@365 -- $ ver2[v]=21 00:01:28.147 03:33:46 -- scripts/common.sh@366 -- $ (( ver1[v] > ver2[v] )) 00:01:28.147 03:33:46 -- scripts/common.sh@366 -- $ return 1 00:01:28.147 03:33:46 -- common/autobuild_common.sh@173 -- $ patch -p1 00:01:28.147 patching file config/rte_config.h 00:01:28.147 Hunk #1 succeeded at 60 (offset 1 line). 00:01:28.147 03:33:46 -- common/autobuild_common.sh@177 -- $ dpdk_kmods=false 00:01:28.147 03:33:46 -- common/autobuild_common.sh@178 -- $ uname -s 00:01:28.147 03:33:46 -- common/autobuild_common.sh@178 -- $ '[' Linux = FreeBSD ']' 00:01:28.147 03:33:46 -- common/autobuild_common.sh@182 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:01:28.147 03:33:46 -- common/autobuild_common.sh@182 -- $ meson build-tmp --prefix=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:01:32.346 The Meson build system 00:01:32.346 Version: 1.3.1 00:01:32.346 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:01:32.346 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp 00:01:32.346 Build type: native build 00:01:32.346 Program cat found: YES (/usr/bin/cat) 00:01:32.346 Project name: DPDK 00:01:32.346 Project version: 23.11.0 00:01:32.346 C compiler for the host machine: gcc (gcc 13.2.1 "gcc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:32.346 C linker for the host machine: gcc ld.bfd 2.39-16 00:01:32.346 Host machine cpu family: x86_64 00:01:32.346 Host machine cpu: x86_64 00:01:32.346 Message: ## Building in Developer Mode ## 00:01:32.346 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:32.346 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/check-symbols.sh) 00:01:32.346 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/options-ibverbs-static.sh) 00:01:32.346 Program python3 found: YES (/usr/bin/python3) 00:01:32.346 Program cat found: YES (/usr/bin/cat) 00:01:32.346 config/meson.build:113: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 
00:01:32.346 Compiler for C supports arguments -march=native: YES 00:01:32.346 Checking for size of "void *" : 8 00:01:32.346 Checking for size of "void *" : 8 (cached) 00:01:32.346 Library m found: YES 00:01:32.346 Library numa found: YES 00:01:32.346 Has header "numaif.h" : YES 00:01:32.346 Library fdt found: NO 00:01:32.346 Library execinfo found: NO 00:01:32.346 Has header "execinfo.h" : YES 00:01:32.346 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:32.346 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:32.346 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:32.346 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:32.346 Run-time dependency openssl found: YES 3.0.9 00:01:32.346 Run-time dependency libpcap found: YES 1.10.4 00:01:32.346 Has header "pcap.h" with dependency libpcap: YES 00:01:32.346 Compiler for C supports arguments -Wcast-qual: YES 00:01:32.346 Compiler for C supports arguments -Wdeprecated: YES 00:01:32.346 Compiler for C supports arguments -Wformat: YES 00:01:32.346 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:32.346 Compiler for C supports arguments -Wformat-security: NO 00:01:32.346 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:32.346 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:32.346 Compiler for C supports arguments -Wnested-externs: YES 00:01:32.346 Compiler for C supports arguments -Wold-style-definition: YES 00:01:32.346 Compiler for C supports arguments -Wpointer-arith: YES 00:01:32.346 Compiler for C supports arguments -Wsign-compare: YES 00:01:32.346 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:32.346 Compiler for C supports arguments -Wundef: YES 00:01:32.346 Compiler for C supports arguments -Wwrite-strings: YES 00:01:32.346 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:32.346 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:01:32.346 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:32.346 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:32.346 Program objdump found: YES (/usr/bin/objdump) 00:01:32.346 Compiler for C supports arguments -mavx512f: YES 00:01:32.346 Checking if "AVX512 checking" compiles: YES 00:01:32.346 Fetching value of define "__SSE4_2__" : 1 00:01:32.346 Fetching value of define "__AES__" : 1 00:01:32.346 Fetching value of define "__AVX__" : 1 00:01:32.346 Fetching value of define "__AVX2__" : (undefined) 00:01:32.346 Fetching value of define "__AVX512BW__" : (undefined) 00:01:32.346 Fetching value of define "__AVX512CD__" : (undefined) 00:01:32.346 Fetching value of define "__AVX512DQ__" : (undefined) 00:01:32.346 Fetching value of define "__AVX512F__" : (undefined) 00:01:32.346 Fetching value of define "__AVX512VL__" : (undefined) 00:01:32.346 Fetching value of define "__PCLMUL__" : 1 00:01:32.346 Fetching value of define "__RDRND__" : 1 00:01:32.346 Fetching value of define "__RDSEED__" : (undefined) 00:01:32.346 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:32.346 Fetching value of define "__znver1__" : (undefined) 00:01:32.346 Fetching value of define "__znver2__" : (undefined) 00:01:32.346 Fetching value of define "__znver3__" : (undefined) 00:01:32.346 Fetching value of define "__znver4__" : (undefined) 00:01:32.346 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:32.346 Message: lib/log: Defining dependency "log" 00:01:32.346 Message: lib/kvargs: Defining dependency 
"kvargs" 00:01:32.346 Message: lib/telemetry: Defining dependency "telemetry" 00:01:32.346 Checking for function "getentropy" : NO 00:01:32.346 Message: lib/eal: Defining dependency "eal" 00:01:32.346 Message: lib/ring: Defining dependency "ring" 00:01:32.346 Message: lib/rcu: Defining dependency "rcu" 00:01:32.346 Message: lib/mempool: Defining dependency "mempool" 00:01:32.346 Message: lib/mbuf: Defining dependency "mbuf" 00:01:32.346 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:32.346 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:32.346 Compiler for C supports arguments -mpclmul: YES 00:01:32.346 Compiler for C supports arguments -maes: YES 00:01:32.346 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:32.346 Compiler for C supports arguments -mavx512bw: YES 00:01:32.346 Compiler for C supports arguments -mavx512dq: YES 00:01:32.346 Compiler for C supports arguments -mavx512vl: YES 00:01:32.346 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:32.346 Compiler for C supports arguments -mavx2: YES 00:01:32.346 Compiler for C supports arguments -mavx: YES 00:01:32.346 Message: lib/net: Defining dependency "net" 00:01:32.346 Message: lib/meter: Defining dependency "meter" 00:01:32.346 Message: lib/ethdev: Defining dependency "ethdev" 00:01:32.346 Message: lib/pci: Defining dependency "pci" 00:01:32.346 Message: lib/cmdline: Defining dependency "cmdline" 00:01:32.346 Message: lib/metrics: Defining dependency "metrics" 00:01:32.346 Message: lib/hash: Defining dependency "hash" 00:01:32.346 Message: lib/timer: Defining dependency "timer" 00:01:32.346 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:32.346 Fetching value of define "__AVX512VL__" : (undefined) (cached) 00:01:32.346 Fetching value of define "__AVX512CD__" : (undefined) (cached) 00:01:32.346 Fetching value of define "__AVX512BW__" : (undefined) (cached) 00:01:32.346 Compiler for C supports arguments -mavx512f -mavx512vl -mavx512cd -mavx512bw: YES 00:01:32.346 Message: lib/acl: Defining dependency "acl" 00:01:32.346 Message: lib/bbdev: Defining dependency "bbdev" 00:01:32.346 Message: lib/bitratestats: Defining dependency "bitratestats" 00:01:32.346 Run-time dependency libelf found: YES 0.190 00:01:32.346 Message: lib/bpf: Defining dependency "bpf" 00:01:32.346 Message: lib/cfgfile: Defining dependency "cfgfile" 00:01:32.346 Message: lib/compressdev: Defining dependency "compressdev" 00:01:32.346 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:32.346 Message: lib/distributor: Defining dependency "distributor" 00:01:32.346 Message: lib/dmadev: Defining dependency "dmadev" 00:01:32.346 Message: lib/efd: Defining dependency "efd" 00:01:32.346 Message: lib/eventdev: Defining dependency "eventdev" 00:01:32.346 Message: lib/dispatcher: Defining dependency "dispatcher" 00:01:32.346 Message: lib/gpudev: Defining dependency "gpudev" 00:01:32.346 Message: lib/gro: Defining dependency "gro" 00:01:32.346 Message: lib/gso: Defining dependency "gso" 00:01:32.346 Message: lib/ip_frag: Defining dependency "ip_frag" 00:01:32.346 Message: lib/jobstats: Defining dependency "jobstats" 00:01:32.346 Message: lib/latencystats: Defining dependency "latencystats" 00:01:32.346 Message: lib/lpm: Defining dependency "lpm" 00:01:32.346 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:32.346 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:01:32.346 Fetching value of define "__AVX512IFMA__" : (undefined) 00:01:32.346 Compiler for C 
supports arguments -mavx512f -mavx512dq -mavx512ifma: YES 00:01:32.346 Message: lib/member: Defining dependency "member" 00:01:32.346 Message: lib/pcapng: Defining dependency "pcapng" 00:01:32.346 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:32.346 Message: lib/power: Defining dependency "power" 00:01:32.346 Message: lib/rawdev: Defining dependency "rawdev" 00:01:32.346 Message: lib/regexdev: Defining dependency "regexdev" 00:01:32.346 Message: lib/mldev: Defining dependency "mldev" 00:01:32.346 Message: lib/rib: Defining dependency "rib" 00:01:32.346 Message: lib/reorder: Defining dependency "reorder" 00:01:32.346 Message: lib/sched: Defining dependency "sched" 00:01:32.346 Message: lib/security: Defining dependency "security" 00:01:32.346 Message: lib/stack: Defining dependency "stack" 00:01:32.346 Has header "linux/userfaultfd.h" : YES 00:01:32.346 Has header "linux/vduse.h" : YES 00:01:32.346 Message: lib/vhost: Defining dependency "vhost" 00:01:32.346 Message: lib/ipsec: Defining dependency "ipsec" 00:01:32.346 Message: lib/pdcp: Defining dependency "pdcp" 00:01:32.346 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:32.346 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:01:32.346 Compiler for C supports arguments -mavx512f -mavx512dq: YES 00:01:32.346 Compiler for C supports arguments -mavx512bw: YES (cached) 00:01:32.346 Message: lib/fib: Defining dependency "fib" 00:01:32.346 Message: lib/port: Defining dependency "port" 00:01:32.346 Message: lib/pdump: Defining dependency "pdump" 00:01:32.346 Message: lib/table: Defining dependency "table" 00:01:32.346 Message: lib/pipeline: Defining dependency "pipeline" 00:01:32.346 Message: lib/graph: Defining dependency "graph" 00:01:32.346 Message: lib/node: Defining dependency "node" 00:01:33.722 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:33.722 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:33.722 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:33.722 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:33.722 Compiler for C supports arguments -Wno-sign-compare: YES 00:01:33.722 Compiler for C supports arguments -Wno-unused-value: YES 00:01:33.722 Compiler for C supports arguments -Wno-format: YES 00:01:33.722 Compiler for C supports arguments -Wno-format-security: YES 00:01:33.722 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:01:33.722 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:01:33.722 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:01:33.722 Compiler for C supports arguments -Wno-unused-parameter: YES 00:01:33.722 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:33.722 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:33.722 Compiler for C supports arguments -mavx512bw: YES (cached) 00:01:33.722 Compiler for C supports arguments -march=skylake-avx512: YES 00:01:33.722 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:01:33.722 Has header "sys/epoll.h" : YES 00:01:33.722 Program doxygen found: YES (/usr/bin/doxygen) 00:01:33.722 Configuring doxy-api-html.conf using configuration 00:01:33.722 Configuring doxy-api-man.conf using configuration 00:01:33.722 Program mandb found: YES (/usr/bin/mandb) 00:01:33.722 Program sphinx-build found: NO 00:01:33.722 Configuring rte_build_config.h using configuration 00:01:33.722 Message: 00:01:33.722 ================= 00:01:33.722 Applications Enabled 00:01:33.722 
================= 00:01:33.722 00:01:33.722 apps: 00:01:33.722 dumpcap, graph, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, 00:01:33.722 test-crypto-perf, test-dma-perf, test-eventdev, test-fib, test-flow-perf, test-gpudev, test-mldev, test-pipeline, 00:01:33.722 test-pmd, test-regex, test-sad, test-security-perf, 00:01:33.722 00:01:33.722 Message: 00:01:33.722 ================= 00:01:33.722 Libraries Enabled 00:01:33.722 ================= 00:01:33.722 00:01:33.722 libs: 00:01:33.722 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:33.723 net, meter, ethdev, pci, cmdline, metrics, hash, timer, 00:01:33.723 acl, bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, 00:01:33.723 dmadev, efd, eventdev, dispatcher, gpudev, gro, gso, ip_frag, 00:01:33.723 jobstats, latencystats, lpm, member, pcapng, power, rawdev, regexdev, 00:01:33.723 mldev, rib, reorder, sched, security, stack, vhost, ipsec, 00:01:33.723 pdcp, fib, port, pdump, table, pipeline, graph, node, 00:01:33.723 00:01:33.723 00:01:33.723 Message: 00:01:33.723 =============== 00:01:33.723 Drivers Enabled 00:01:33.723 =============== 00:01:33.723 00:01:33.723 common: 00:01:33.723 00:01:33.723 bus: 00:01:33.723 pci, vdev, 00:01:33.723 mempool: 00:01:33.723 ring, 00:01:33.723 dma: 00:01:33.723 00:01:33.723 net: 00:01:33.723 i40e, 00:01:33.723 raw: 00:01:33.723 00:01:33.723 crypto: 00:01:33.723 00:01:33.723 compress: 00:01:33.723 00:01:33.723 regex: 00:01:33.723 00:01:33.723 ml: 00:01:33.723 00:01:33.723 vdpa: 00:01:33.723 00:01:33.723 event: 00:01:33.723 00:01:33.723 baseband: 00:01:33.723 00:01:33.723 gpu: 00:01:33.723 00:01:33.723 00:01:33.723 Message: 00:01:33.723 ================= 00:01:33.723 Content Skipped 00:01:33.723 ================= 00:01:33.723 00:01:33.723 apps: 00:01:33.723 00:01:33.723 libs: 00:01:33.723 00:01:33.723 drivers: 00:01:33.723 common/cpt: not in enabled drivers build config 00:01:33.723 common/dpaax: not in enabled drivers build config 00:01:33.723 common/iavf: not in enabled drivers build config 00:01:33.723 common/idpf: not in enabled drivers build config 00:01:33.723 common/mvep: not in enabled drivers build config 00:01:33.723 common/octeontx: not in enabled drivers build config 00:01:33.723 bus/auxiliary: not in enabled drivers build config 00:01:33.723 bus/cdx: not in enabled drivers build config 00:01:33.723 bus/dpaa: not in enabled drivers build config 00:01:33.723 bus/fslmc: not in enabled drivers build config 00:01:33.723 bus/ifpga: not in enabled drivers build config 00:01:33.723 bus/platform: not in enabled drivers build config 00:01:33.723 bus/vmbus: not in enabled drivers build config 00:01:33.723 common/cnxk: not in enabled drivers build config 00:01:33.723 common/mlx5: not in enabled drivers build config 00:01:33.723 common/nfp: not in enabled drivers build config 00:01:33.723 common/qat: not in enabled drivers build config 00:01:33.723 common/sfc_efx: not in enabled drivers build config 00:01:33.723 mempool/bucket: not in enabled drivers build config 00:01:33.723 mempool/cnxk: not in enabled drivers build config 00:01:33.723 mempool/dpaa: not in enabled drivers build config 00:01:33.723 mempool/dpaa2: not in enabled drivers build config 00:01:33.723 mempool/octeontx: not in enabled drivers build config 00:01:33.723 mempool/stack: not in enabled drivers build config 00:01:33.723 dma/cnxk: not in enabled drivers build config 00:01:33.723 dma/dpaa: not in enabled drivers build config 00:01:33.723 dma/dpaa2: not in enabled drivers build 
config 00:01:33.723 dma/hisilicon: not in enabled drivers build config 00:01:33.723 dma/idxd: not in enabled drivers build config 00:01:33.723 dma/ioat: not in enabled drivers build config 00:01:33.723 dma/skeleton: not in enabled drivers build config 00:01:33.723 net/af_packet: not in enabled drivers build config 00:01:33.723 net/af_xdp: not in enabled drivers build config 00:01:33.723 net/ark: not in enabled drivers build config 00:01:33.723 net/atlantic: not in enabled drivers build config 00:01:33.723 net/avp: not in enabled drivers build config 00:01:33.723 net/axgbe: not in enabled drivers build config 00:01:33.723 net/bnx2x: not in enabled drivers build config 00:01:33.723 net/bnxt: not in enabled drivers build config 00:01:33.723 net/bonding: not in enabled drivers build config 00:01:33.723 net/cnxk: not in enabled drivers build config 00:01:33.723 net/cpfl: not in enabled drivers build config 00:01:33.723 net/cxgbe: not in enabled drivers build config 00:01:33.723 net/dpaa: not in enabled drivers build config 00:01:33.723 net/dpaa2: not in enabled drivers build config 00:01:33.723 net/e1000: not in enabled drivers build config 00:01:33.723 net/ena: not in enabled drivers build config 00:01:33.723 net/enetc: not in enabled drivers build config 00:01:33.723 net/enetfec: not in enabled drivers build config 00:01:33.723 net/enic: not in enabled drivers build config 00:01:33.723 net/failsafe: not in enabled drivers build config 00:01:33.723 net/fm10k: not in enabled drivers build config 00:01:33.723 net/gve: not in enabled drivers build config 00:01:33.723 net/hinic: not in enabled drivers build config 00:01:33.723 net/hns3: not in enabled drivers build config 00:01:33.723 net/iavf: not in enabled drivers build config 00:01:33.723 net/ice: not in enabled drivers build config 00:01:33.723 net/idpf: not in enabled drivers build config 00:01:33.723 net/igc: not in enabled drivers build config 00:01:33.723 net/ionic: not in enabled drivers build config 00:01:33.723 net/ipn3ke: not in enabled drivers build config 00:01:33.723 net/ixgbe: not in enabled drivers build config 00:01:33.723 net/mana: not in enabled drivers build config 00:01:33.723 net/memif: not in enabled drivers build config 00:01:33.723 net/mlx4: not in enabled drivers build config 00:01:33.723 net/mlx5: not in enabled drivers build config 00:01:33.723 net/mvneta: not in enabled drivers build config 00:01:33.723 net/mvpp2: not in enabled drivers build config 00:01:33.723 net/netvsc: not in enabled drivers build config 00:01:33.723 net/nfb: not in enabled drivers build config 00:01:33.723 net/nfp: not in enabled drivers build config 00:01:33.723 net/ngbe: not in enabled drivers build config 00:01:33.723 net/null: not in enabled drivers build config 00:01:33.723 net/octeontx: not in enabled drivers build config 00:01:33.723 net/octeon_ep: not in enabled drivers build config 00:01:33.723 net/pcap: not in enabled drivers build config 00:01:33.723 net/pfe: not in enabled drivers build config 00:01:33.723 net/qede: not in enabled drivers build config 00:01:33.723 net/ring: not in enabled drivers build config 00:01:33.723 net/sfc: not in enabled drivers build config 00:01:33.723 net/softnic: not in enabled drivers build config 00:01:33.723 net/tap: not in enabled drivers build config 00:01:33.723 net/thunderx: not in enabled drivers build config 00:01:33.723 net/txgbe: not in enabled drivers build config 00:01:33.723 net/vdev_netvsc: not in enabled drivers build config 00:01:33.723 net/vhost: not in enabled drivers build config 
00:01:33.723 net/virtio: not in enabled drivers build config 00:01:33.723 net/vmxnet3: not in enabled drivers build config 00:01:33.723 raw/cnxk_bphy: not in enabled drivers build config 00:01:33.723 raw/cnxk_gpio: not in enabled drivers build config 00:01:33.723 raw/dpaa2_cmdif: not in enabled drivers build config 00:01:33.723 raw/ifpga: not in enabled drivers build config 00:01:33.723 raw/ntb: not in enabled drivers build config 00:01:33.723 raw/skeleton: not in enabled drivers build config 00:01:33.723 crypto/armv8: not in enabled drivers build config 00:01:33.723 crypto/bcmfs: not in enabled drivers build config 00:01:33.723 crypto/caam_jr: not in enabled drivers build config 00:01:33.723 crypto/ccp: not in enabled drivers build config 00:01:33.723 crypto/cnxk: not in enabled drivers build config 00:01:33.723 crypto/dpaa_sec: not in enabled drivers build config 00:01:33.723 crypto/dpaa2_sec: not in enabled drivers build config 00:01:33.723 crypto/ipsec_mb: not in enabled drivers build config 00:01:33.723 crypto/mlx5: not in enabled drivers build config 00:01:33.723 crypto/mvsam: not in enabled drivers build config 00:01:33.723 crypto/nitrox: not in enabled drivers build config 00:01:33.723 crypto/null: not in enabled drivers build config 00:01:33.723 crypto/octeontx: not in enabled drivers build config 00:01:33.723 crypto/openssl: not in enabled drivers build config 00:01:33.723 crypto/scheduler: not in enabled drivers build config 00:01:33.723 crypto/uadk: not in enabled drivers build config 00:01:33.723 crypto/virtio: not in enabled drivers build config 00:01:33.723 compress/isal: not in enabled drivers build config 00:01:33.723 compress/mlx5: not in enabled drivers build config 00:01:33.723 compress/octeontx: not in enabled drivers build config 00:01:33.723 compress/zlib: not in enabled drivers build config 00:01:33.723 regex/mlx5: not in enabled drivers build config 00:01:33.723 regex/cn9k: not in enabled drivers build config 00:01:33.723 ml/cnxk: not in enabled drivers build config 00:01:33.723 vdpa/ifc: not in enabled drivers build config 00:01:33.723 vdpa/mlx5: not in enabled drivers build config 00:01:33.723 vdpa/nfp: not in enabled drivers build config 00:01:33.723 vdpa/sfc: not in enabled drivers build config 00:01:33.723 event/cnxk: not in enabled drivers build config 00:01:33.723 event/dlb2: not in enabled drivers build config 00:01:33.723 event/dpaa: not in enabled drivers build config 00:01:33.723 event/dpaa2: not in enabled drivers build config 00:01:33.723 event/dsw: not in enabled drivers build config 00:01:33.723 event/opdl: not in enabled drivers build config 00:01:33.723 event/skeleton: not in enabled drivers build config 00:01:33.723 event/sw: not in enabled drivers build config 00:01:33.723 event/octeontx: not in enabled drivers build config 00:01:33.723 baseband/acc: not in enabled drivers build config 00:01:33.723 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:01:33.723 baseband/fpga_lte_fec: not in enabled drivers build config 00:01:33.723 baseband/la12xx: not in enabled drivers build config 00:01:33.723 baseband/null: not in enabled drivers build config 00:01:33.723 baseband/turbo_sw: not in enabled drivers build config 00:01:33.723 gpu/cuda: not in enabled drivers build config 00:01:33.723 00:01:33.723 00:01:33.723 Build targets in project: 220 00:01:33.723 00:01:33.723 DPDK 23.11.0 00:01:33.723 00:01:33.723 User defined options 00:01:33.723 libdir : lib 00:01:33.723 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:33.723 
c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow 00:01:33.723 c_link_args : 00:01:33.723 enable_docs : false 00:01:33.723 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:01:33.723 enable_kmods : false 00:01:33.723 machine : native 00:01:33.723 tests : false 00:01:33.723 00:01:33.723 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:33.723 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 00:01:33.723 03:33:52 -- common/autobuild_common.sh@186 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j48 00:01:33.723 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp' 00:01:33.723 [1/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:33.723 [2/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:33.724 [3/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:33.724 [4/710] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:33.724 [5/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:33.987 [6/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:33.987 [7/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:33.987 [8/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:33.987 [9/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:33.987 [10/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:33.987 [11/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:33.987 [12/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:33.987 [13/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:33.987 [14/710] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:33.987 [15/710] Linking static target lib/librte_kvargs.a 00:01:33.987 [16/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:33.987 [17/710] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:33.987 [18/710] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:34.248 [19/710] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:34.248 [20/710] Linking static target lib/librte_log.a 00:01:34.248 [21/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:34.248 [22/710] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:34.822 [23/710] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:34.822 [24/710] Linking target lib/librte_log.so.24.0 00:01:34.822 [25/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:34.822 [26/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:34.822 [27/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:34.822 [28/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:34.822 [29/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:34.822 [30/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:34.822 [31/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:34.822 [32/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 
00:01:34.822 [33/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:35.084 [34/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:35.084 [35/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:35.084 [36/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:35.084 [37/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:35.084 [38/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:35.084 [39/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:35.084 [40/710] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:35.084 [41/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:35.084 [42/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:35.084 [43/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:35.084 [44/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:35.084 [45/710] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:35.084 [46/710] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:01:35.084 [47/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:35.084 [48/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:35.084 [49/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:35.084 [50/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:35.084 [51/710] Linking target lib/librte_kvargs.so.24.0 00:01:35.084 [52/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:35.084 [53/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:35.084 [54/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:35.084 [55/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:35.084 [56/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:35.085 [57/710] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:35.085 [58/710] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:35.085 [59/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:35.347 [60/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:35.347 [61/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:35.347 [62/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:35.347 [63/710] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:01:35.347 [64/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:35.347 [65/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:35.608 [66/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:35.608 [67/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:35.608 [68/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:35.608 [69/710] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:35.608 [70/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:35.608 [71/710] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:35.608 [72/710] Linking static target lib/librte_pci.a 
00:01:35.872 [73/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:35.872 [74/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:35.872 [75/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:35.872 [76/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:35.872 [77/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:35.872 [78/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:35.872 [79/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:36.132 [80/710] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:36.132 [81/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:36.132 [82/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:36.132 [83/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:36.132 [84/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:36.132 [85/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:36.132 [86/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:36.132 [87/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:36.132 [88/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:36.132 [89/710] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:36.132 [90/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:36.132 [91/710] Linking static target lib/net/libnet_crc_avx512_lib.a 00:01:36.132 [92/710] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:36.132 [93/710] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:36.132 [94/710] Linking static target lib/librte_ring.a 00:01:36.132 [95/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:36.132 [96/710] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:36.132 [97/710] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:36.401 [98/710] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:36.401 [99/710] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:36.401 [100/710] Linking static target lib/librte_meter.a 00:01:36.401 [101/710] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:36.401 [102/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:36.401 [103/710] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:36.401 [104/710] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:36.401 [105/710] Linking static target lib/librte_telemetry.a 00:01:36.401 [106/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:36.401 [107/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:36.401 [108/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:36.401 [109/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:36.401 [110/710] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:36.661 [111/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:36.661 [112/710] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:36.661 [113/710] Compiling C object 
lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:36.661 [114/710] Linking static target lib/librte_eal.a 00:01:36.661 [115/710] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:36.661 [116/710] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:36.661 [117/710] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:36.661 [118/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:36.661 [119/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:36.661 [120/710] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:36.922 [121/710] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:36.922 [122/710] Linking static target lib/librte_net.a 00:01:36.922 [123/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:36.922 [124/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:36.922 [125/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:36.922 [126/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:36.922 [127/710] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:36.922 [128/710] Linking static target lib/librte_cmdline.a 00:01:36.922 [129/710] Linking static target lib/librte_mempool.a 00:01:36.923 [130/710] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:37.184 [131/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:37.184 [132/710] Linking target lib/librte_telemetry.so.24.0 00:01:37.184 [133/710] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:01:37.184 [134/710] Linking static target lib/librte_cfgfile.a 00:01:37.184 [135/710] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:37.184 [136/710] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:01:37.184 [137/710] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:37.184 [138/710] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:01:37.184 [139/710] Linking static target lib/librte_metrics.a 00:01:37.459 [140/710] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:01:37.459 [141/710] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:01:37.459 [142/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:37.459 [143/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:37.459 [144/710] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:01:37.459 [145/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:01:37.719 [146/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:01:37.719 [147/710] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:01:37.719 [148/710] Linking static target lib/librte_bitratestats.a 00:01:37.719 [149/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:01:37.719 [150/710] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:37.719 [151/710] Linking static target lib/librte_rcu.a 00:01:37.719 [152/710] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:01:37.719 [153/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:01:37.719 [154/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:37.978 
[155/710] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:01:37.978 [156/710] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:01:37.978 [157/710] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:37.978 [158/710] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:37.978 [159/710] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:01:37.978 [160/710] Linking static target lib/librte_timer.a 00:01:37.978 [161/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:01:37.978 [162/710] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:37.978 [163/710] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:37.978 [164/710] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:38.240 [165/710] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.240 [166/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:01:38.240 [167/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:01:38.240 [168/710] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:01:38.240 [169/710] Linking static target lib/librte_bbdev.a 00:01:38.240 [170/710] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:38.241 [171/710] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.502 [172/710] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:38.502 [173/710] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:01:38.502 [174/710] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.502 [175/710] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:38.502 [176/710] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:38.502 [177/710] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:01:38.502 [178/710] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:38.502 [179/710] Linking static target lib/librte_compressdev.a 00:01:38.502 [180/710] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:01:38.765 [181/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:01:38.765 [182/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:01:39.029 [183/710] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:01:39.029 [184/710] Linking static target lib/librte_distributor.a 00:01:39.029 [185/710] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:39.029 [186/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:01:39.029 [187/710] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.294 [188/710] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:39.295 [189/710] Linking static target lib/librte_dmadev.a 00:01:39.295 [190/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:01:39.295 [191/710] Linking static target lib/librte_bpf.a 00:01:39.295 [192/710] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.295 [193/710] Compiling C object 
lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:01:39.295 [194/710] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.295 [195/710] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:01:39.295 [196/710] Compiling C object lib/librte_dispatcher.a.p/dispatcher_rte_dispatcher.c.o 00:01:39.295 [197/710] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:01:39.295 [198/710] Linking static target lib/librte_dispatcher.a 00:01:39.554 [199/710] Compiling C object lib/librte_gro.a.p/gro_gro_tcp6.c.o 00:01:39.554 [200/710] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:01:39.554 [201/710] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:01:39.554 [202/710] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:01:39.554 [203/710] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:01:39.554 [204/710] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:39.554 [205/710] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:01:39.554 [206/710] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:01:39.554 [207/710] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:01:39.554 [208/710] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:39.554 [209/710] Linking static target lib/librte_gpudev.a 00:01:39.817 [210/710] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:01:39.817 [211/710] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:01:39.817 [212/710] Linking static target lib/librte_gro.a 00:01:39.817 [213/710] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:39.817 [214/710] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.817 [215/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:01:39.817 [216/710] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.817 [217/710] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:01:39.817 [218/710] Linking static target lib/librte_jobstats.a 00:01:40.078 [219/710] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:01:40.078 [220/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:01:40.078 [221/710] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.078 [222/710] Generating lib/dispatcher.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.078 [223/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_dma_adapter.c.o 00:01:40.343 [224/710] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:01:40.343 [225/710] Linking static target lib/librte_latencystats.a 00:01:40.343 [226/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:01:40.343 [227/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:01:40.343 [228/710] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.343 [229/710] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:01:40.343 [230/710] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:01:40.343 [231/710] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:01:40.343 [232/710] Linking static target lib/member/libsketch_avx512_tmp.a 00:01:40.604 [233/710] Compiling C object 
lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:01:40.604 [234/710] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:01:40.604 [235/710] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.604 [236/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:01:40.604 [237/710] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:01:40.604 [238/710] Linking static target lib/librte_ip_frag.a 00:01:40.865 [239/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:01:40.865 [240/710] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:40.865 [241/710] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:40.865 [242/710] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev_pmd.c.o 00:01:40.865 [243/710] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:40.865 [244/710] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.168 [245/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:01:41.168 [246/710] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:01:41.168 [247/710] Linking static target lib/librte_gso.a 00:01:41.168 [248/710] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.168 [249/710] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils.c.o 00:01:41.461 [250/710] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:41.461 [251/710] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:01:41.461 [252/710] Linking static target lib/librte_regexdev.a 00:01:41.461 [253/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:01:41.461 [254/710] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar_bfloat16.c.o 00:01:41.461 [255/710] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:01:41.461 [256/710] Linking static target lib/librte_rawdev.a 00:01:41.461 [257/710] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:01:41.461 [258/710] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.461 [259/710] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev.c.o 00:01:41.461 [260/710] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar.c.o 00:01:41.461 [261/710] Linking static target lib/librte_mldev.a 00:01:41.461 [262/710] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:01:41.461 [263/710] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:01:41.462 [264/710] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:01:41.462 [265/710] Linking static target lib/librte_efd.a 00:01:41.725 [266/710] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:01:41.725 [267/710] Linking static target lib/librte_pcapng.a 00:01:41.725 [268/710] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:01:41.725 [269/710] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:01:41.725 [270/710] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:01:41.725 [271/710] Linking static target lib/librte_stack.a 00:01:41.725 [272/710] Compiling C object lib/acl/libavx2_tmp.a.p/acl_run_avx2.c.o 00:01:41.725 [273/710] Linking static target lib/acl/libavx2_tmp.a 00:01:41.988 [274/710] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 
00:01:41.988 [275/710] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:01:41.988 [276/710] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:41.988 [277/710] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:41.988 [278/710] Linking static target lib/librte_lpm.a 00:01:41.988 [279/710] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.988 [280/710] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:41.988 [281/710] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:41.988 [282/710] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.988 [283/710] Linking static target lib/librte_hash.a 00:01:41.988 [284/710] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:01:41.988 [285/710] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:41.988 [286/710] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.988 [287/710] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.247 [288/710] Compiling C object lib/acl/libavx512_tmp.a.p/acl_run_avx512.c.o 00:01:42.247 [289/710] Linking static target lib/acl/libavx512_tmp.a 00:01:42.247 [290/710] Linking static target lib/librte_acl.a 00:01:42.247 [291/710] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:42.247 [292/710] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:42.247 [293/710] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:42.247 [294/710] Linking static target lib/librte_reorder.a 00:01:42.247 [295/710] Linking static target lib/librte_power.a 00:01:42.516 [296/710] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:42.516 [297/710] Linking static target lib/librte_security.a 00:01:42.516 [298/710] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:42.516 [299/710] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.516 [300/710] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.516 [301/710] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:01:42.780 [302/710] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:42.780 [303/710] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:42.780 [304/710] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:42.780 [305/710] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.780 [306/710] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:01:42.780 [307/710] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.780 [308/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:42.780 [309/710] Linking static target lib/librte_mbuf.a 00:01:42.780 [310/710] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:01:42.780 [311/710] Linking static target lib/librte_rib.a 00:01:42.780 [312/710] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.780 [313/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_crypto.c.o 00:01:43.042 [314/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_ctrl_pdu.c.o 00:01:43.042 [315/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_reorder.c.o 00:01:43.042 
[316/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_cnt.c.o 00:01:43.042 [317/710] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:43.042 [318/710] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:01:43.304 [319/710] Compiling C object lib/fib/libtrie_avx512_tmp.a.p/trie_avx512.c.o 00:01:43.304 [320/710] Linking static target lib/fib/libtrie_avx512_tmp.a 00:01:43.304 [321/710] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:01:43.304 [322/710] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:01:43.304 [323/710] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:43.304 [324/710] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:01:43.304 [325/710] Compiling C object lib/fib/libdir24_8_avx512_tmp.a.p/dir24_8_avx512.c.o 00:01:43.304 [326/710] Linking static target lib/fib/libdir24_8_avx512_tmp.a 00:01:43.304 [327/710] Generating lib/mldev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:43.566 [328/710] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:01:43.567 [329/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:01:43.567 [330/710] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:43.825 [331/710] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:01:43.825 [332/710] Compiling C object lib/librte_pdcp.a.p/pdcp_rte_pdcp.c.o 00:01:43.825 [333/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:44.085 [334/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:01:44.085 [335/710] Linking static target lib/librte_eventdev.a 00:01:44.085 [336/710] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:01:44.085 [337/710] Linking static target lib/librte_member.a 00:01:44.085 [338/710] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:01:44.350 [339/710] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:44.350 [340/710] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:01:44.350 [341/710] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:01:44.350 [342/710] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:44.350 [343/710] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:01:44.350 [344/710] Linking static target lib/librte_cryptodev.a 00:01:44.350 [345/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:01:44.350 [346/710] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:01:44.350 [347/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:01:44.350 [348/710] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:01:44.350 [349/710] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:01:44.350 [350/710] Linking static target lib/librte_sched.a 00:01:44.350 [351/710] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:01:44.612 [352/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:01:44.612 [353/710] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:01:44.612 [354/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:01:44.612 [355/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:01:44.612 [356/710] Compiling C 
object lib/librte_fib.a.p/fib_dir24_8.c.o 00:01:44.612 [357/710] Linking static target lib/librte_fib.a 00:01:44.612 [358/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:44.612 [359/710] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:01:44.612 [360/710] Linking static target lib/librte_ethdev.a 00:01:44.875 [361/710] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:01:44.875 [362/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:01:44.875 [363/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:01:44.875 [364/710] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:01:44.875 [365/710] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:01:44.875 [366/710] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:01:45.133 [367/710] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:01:45.133 [368/710] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:45.133 [369/710] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:01:45.133 [370/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:01:45.133 [371/710] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:01:45.133 [372/710] Compiling C object lib/librte_node.a.p/node_null.c.o 00:01:45.396 [373/710] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:01:45.396 [374/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:01:45.396 [375/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:01:45.396 [376/710] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:01:45.396 [377/710] Linking static target lib/librte_pdump.a 00:01:45.659 [378/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:45.659 [379/710] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:01:45.659 [380/710] Compiling C object lib/librte_graph.a.p/graph_rte_graph_worker.c.o 00:01:45.659 [381/710] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:01:45.659 [382/710] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:01:45.659 [383/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:01:45.660 [384/710] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:01:45.921 [385/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:01:45.921 [386/710] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:01:45.921 [387/710] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:45.921 [388/710] Compiling C object lib/librte_graph.a.p/graph_graph_pcap.c.o 00:01:45.921 [389/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:01:45.921 [390/710] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:01:45.921 [391/710] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:01:45.921 [392/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:01:46.187 [393/710] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:01:46.187 [394/710] Linking static target lib/librte_ipsec.a 00:01:46.187 [395/710] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.187 [396/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 
00:01:46.187 [397/710] Linking static target lib/librte_table.a 00:01:46.451 [398/710] Compiling C object lib/librte_node.a.p/node_log.c.o 00:01:46.451 [399/710] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:01:46.451 [400/710] Compiling C object lib/librte_node.a.p/node_kernel_tx.c.o 00:01:46.714 [401/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:01:46.714 [402/710] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.978 [403/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:46.978 [404/710] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:01:46.978 [405/710] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:01:46.978 [406/710] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:01:46.978 [407/710] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.978 [408/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:46.978 [409/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:01:46.978 [410/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:46.978 [411/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:47.241 [412/710] Linking target lib/librte_eal.so.24.0 00:01:47.241 [413/710] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:47.241 [414/710] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.241 [415/710] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:47.241 [416/710] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:01:47.241 [417/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ipsec.c.o 00:01:47.241 [418/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:01:47.241 [419/710] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.241 [420/710] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:01:47.500 [421/710] Linking target lib/librte_ring.so.24.0 00:01:47.500 [422/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:47.500 [423/710] Linking target lib/librte_meter.so.24.0 00:01:47.500 [424/710] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:47.500 [425/710] Linking target lib/librte_pci.so.24.0 00:01:47.500 [426/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:01:47.763 [427/710] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:01:47.763 [428/710] Linking target lib/librte_timer.so.24.0 00:01:47.763 [429/710] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:01:47.763 [430/710] Linking target lib/librte_rcu.so.24.0 00:01:47.763 [431/710] Linking target lib/librte_mempool.so.24.0 00:01:47.763 [432/710] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:01:47.763 [433/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:01:47.763 [434/710] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:01:47.763 [435/710] Linking target lib/librte_acl.so.24.0 00:01:47.763 [436/710] Linking target lib/librte_cfgfile.so.24.0 00:01:48.029 [437/710] Linking target lib/librte_dmadev.so.24.0 00:01:48.029 [438/710] Generating symbol file 
lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:01:48.029 [439/710] Linking target lib/librte_jobstats.so.24.0 00:01:48.029 [440/710] Compiling C object lib/librte_graph.a.p/graph_rte_graph_model_mcore_dispatch.c.o 00:01:48.029 [441/710] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:01:48.029 [442/710] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:01:48.029 [443/710] Linking static target lib/librte_port.a 00:01:48.029 [444/710] Linking static target lib/librte_graph.a 00:01:48.029 [445/710] Linking target lib/librte_rawdev.so.24.0 00:01:48.029 [446/710] Linking target lib/librte_stack.so.24.0 00:01:48.029 [447/710] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:48.029 [448/710] Linking target lib/librte_rib.so.24.0 00:01:48.029 [449/710] Linking target lib/librte_mbuf.so.24.0 00:01:48.029 [450/710] Compiling C object lib/librte_node.a.p/node_ip4_reassembly.c.o 00:01:48.029 [451/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:01:48.029 [452/710] Generating symbol file lib/librte_acl.so.24.0.p/librte_acl.so.24.0.symbols 00:01:48.029 [453/710] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:48.029 [454/710] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:48.029 [455/710] Linking static target drivers/librte_bus_vdev.a 00:01:48.029 [456/710] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:01:48.292 [457/710] Compiling C object lib/librte_node.a.p/node_kernel_rx.c.o 00:01:48.292 [458/710] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:01:48.292 [459/710] Compiling C object app/dpdk-graph.p/graph_cli.c.o 00:01:48.292 [460/710] Generating symbol file lib/librte_rib.so.24.0.p/librte_rib.so.24.0.symbols 00:01:48.292 [461/710] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:48.292 [462/710] Linking target lib/librte_compressdev.so.24.0 00:01:48.292 [463/710] Linking target lib/librte_bbdev.so.24.0 00:01:48.292 [464/710] Linking target lib/librte_net.so.24.0 00:01:48.550 [465/710] Linking target lib/librte_cryptodev.so.24.0 00:01:48.550 [466/710] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:48.550 [467/710] Linking target lib/librte_distributor.so.24.0 00:01:48.550 [468/710] Linking target lib/librte_gpudev.so.24.0 00:01:48.551 [469/710] Compiling C object app/dpdk-graph.p/graph_conn.c.o 00:01:48.551 [470/710] Compiling C object app/dpdk-graph.p/graph_ethdev_rx.c.o 00:01:48.551 [471/710] Linking target lib/librte_regexdev.so.24.0 00:01:48.551 [472/710] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.551 [473/710] Linking target lib/librte_reorder.so.24.0 00:01:48.551 [474/710] Linking target lib/librte_mldev.so.24.0 00:01:48.551 [475/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:01:48.551 [476/710] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:48.551 [477/710] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:48.551 [478/710] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:48.551 [479/710] Linking static target drivers/librte_bus_pci.a 00:01:48.551 [480/710] Linking target lib/librte_sched.so.24.0 00:01:48.551 [481/710] Compiling C object 
app/dpdk-graph.p/graph_ip4_route.c.o 00:01:48.551 [482/710] Linking target lib/librte_fib.so.24.0 00:01:48.551 [483/710] Linking target drivers/librte_bus_vdev.so.24.0 00:01:48.551 [484/710] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:01:48.551 [485/710] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:01:48.813 [486/710] Linking target lib/librte_cmdline.so.24.0 00:01:48.813 [487/710] Linking target lib/librte_hash.so.24.0 00:01:48.813 [488/710] Generating symbol file lib/librte_reorder.so.24.0.p/librte_reorder.so.24.0.symbols 00:01:48.813 [489/710] Linking target lib/librte_security.so.24.0 00:01:48.813 [490/710] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.813 [491/710] Generating symbol file drivers/librte_bus_vdev.so.24.0.p/librte_bus_vdev.so.24.0.symbols 00:01:48.813 [492/710] Generating symbol file lib/librte_sched.so.24.0.p/librte_sched.so.24.0.symbols 00:01:48.813 [493/710] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:48.813 [494/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:01:48.813 [495/710] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:48.813 [496/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:01:48.813 [497/710] Linking static target drivers/librte_mempool_ring.a 00:01:48.813 [498/710] Compiling C object app/dpdk-graph.p/graph_graph.c.o 00:01:49.078 [499/710] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:49.078 [500/710] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.078 [501/710] Compiling C object app/dpdk-graph.p/graph_ethdev.c.o 00:01:49.078 [502/710] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:01:49.078 [503/710] Generating symbol file lib/librte_security.so.24.0.p/librte_security.so.24.0.symbols 00:01:49.078 [504/710] Linking target drivers/librte_mempool_ring.so.24.0 00:01:49.078 [505/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_recycle_mbufs_vec_common.c.o 00:01:49.078 [506/710] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:01:49.078 [507/710] Linking target lib/librte_efd.so.24.0 00:01:49.078 [508/710] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:01:49.078 [509/710] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:01:49.078 [510/710] Linking target lib/librte_lpm.so.24.0 00:01:49.078 [511/710] Linking target lib/librte_member.so.24.0 00:01:49.078 [512/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:01:49.078 [513/710] Linking target lib/librte_ipsec.so.24.0 00:01:49.340 [514/710] Compiling C object lib/librte_node.a.p/node_ip4_local.c.o 00:01:49.340 [515/710] Generating symbol file lib/librte_lpm.so.24.0.p/librte_lpm.so.24.0.symbols 00:01:49.340 [516/710] Generating symbol file lib/librte_ipsec.so.24.0.p/librte_ipsec.so.24.0.symbols 00:01:49.340 [517/710] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.340 [518/710] Compiling C object app/dpdk-graph.p/graph_ip6_route.c.o 00:01:49.340 [519/710] Compiling C object app/dpdk-graph.p/graph_main.c.o 00:01:49.340 [520/710] Linking target drivers/librte_bus_pci.so.24.0 00:01:49.340 [521/710] Compiling C object 
app/dpdk-graph.p/graph_mempool.c.o 00:01:49.604 [522/710] Compiling C object app/dpdk-graph.p/graph_l3fwd.c.o 00:01:49.604 [523/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:01:49.604 [524/710] Generating symbol file drivers/librte_bus_pci.so.24.0.p/librte_bus_pci.so.24.0.symbols 00:01:49.867 [525/710] Compiling C object app/dpdk-graph.p/graph_utils.c.o 00:01:50.135 [526/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:01:50.135 [527/710] Compiling C object lib/librte_node.a.p/node_udp4_input.c.o 00:01:50.135 [528/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:01:50.135 [529/710] Compiling C object app/dpdk-graph.p/graph_neigh.c.o 00:01:50.135 [530/710] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:01:50.136 [531/710] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:01:50.397 [532/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:01:50.397 [533/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:01:50.397 [534/710] Compiling C object drivers/net/i40e/libi40e_avx2_lib.a.p/i40e_rxtx_vec_avx2.c.o 00:01:50.397 [535/710] Linking static target drivers/net/i40e/libi40e_avx2_lib.a 00:01:50.657 [536/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:01:50.657 [537/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:01:50.657 [538/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:01:50.657 [539/710] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:01:50.657 [540/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:01:50.657 [541/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:01:50.917 [542/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:01:50.917 [543/710] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:01:51.213 [544/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:01:51.213 [545/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:01:51.213 [546/710] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:01:51.213 [547/710] Compiling C object lib/librte_node.a.p/node_ip6_lookup.c.o 00:01:51.213 [548/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:01:51.477 [549/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:01:51.477 [550/710] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_main.c.o 00:01:51.477 [551/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:01:51.477 [552/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:01:51.477 [553/710] Linking static target drivers/net/i40e/base/libi40e_base.a 00:01:51.477 [554/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_test.c.o 00:01:51.477 [555/710] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:01:51.477 [556/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:01:51.739 [557/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:01:51.739 [558/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_parser.c.o 00:01:51.739 [559/710] Compiling C object 
app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:01:52.001 [560/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:01:52.266 [561/710] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.266 [562/710] Linking target lib/librte_ethdev.so.24.0 00:01:52.266 [563/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:01:52.528 [564/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:01:52.528 [565/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_main.c.o 00:01:52.528 [566/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:01:52.528 [567/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:01:52.528 [568/710] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:01:52.528 [569/710] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:01:52.528 [570/710] Linking target lib/librte_metrics.so.24.0 00:01:52.791 [571/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_common.c.o 00:01:52.791 [572/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:01:52.791 [573/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:01:52.791 [574/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_device_ops.c.o 00:01:52.791 [575/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_options.c.o 00:01:52.791 [576/710] Linking target lib/librte_bpf.so.24.0 00:01:52.791 [577/710] Linking target lib/librte_gso.so.24.0 00:01:52.791 [578/710] Linking target lib/librte_gro.so.24.0 00:01:52.791 [579/710] Linking target lib/librte_eventdev.so.24.0 00:01:52.791 [580/710] Linking target lib/librte_ip_frag.so.24.0 00:01:52.791 [581/710] Generating symbol file lib/librte_metrics.so.24.0.p/librte_metrics.so.24.0.symbols 00:01:52.791 [582/710] Linking target lib/librte_pcapng.so.24.0 00:01:53.053 [583/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_ops.c.o 00:01:53.053 [584/710] Linking target lib/librte_power.so.24.0 00:01:53.053 [585/710] Linking target lib/librte_bitratestats.so.24.0 00:01:53.053 [586/710] Linking target lib/librte_latencystats.so.24.0 00:01:53.053 [587/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_ordered.c.o 00:01:53.053 [588/710] Generating symbol file lib/librte_bpf.so.24.0.p/librte_bpf.so.24.0.symbols 00:01:53.053 [589/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_interleave.c.o 00:01:53.053 [590/710] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:01:53.053 [591/710] Generating symbol file lib/librte_eventdev.so.24.0.p/librte_eventdev.so.24.0.symbols 00:01:53.053 [592/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:01:53.053 [593/710] Generating symbol file lib/librte_ip_frag.so.24.0.p/librte_ip_frag.so.24.0.symbols 00:01:53.053 [594/710] Linking target lib/librte_dispatcher.so.24.0 00:01:53.053 [595/710] Generating symbol file lib/librte_pcapng.so.24.0.p/librte_pcapng.so.24.0.symbols 00:01:53.316 [596/710] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_benchmark.c.o 00:01:53.316 [597/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:01:53.316 [598/710] Linking target lib/librte_port.so.24.0 00:01:53.316 [599/710] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 
00:01:53.316 [600/710] Linking target lib/librte_pdump.so.24.0 00:01:53.316 [601/710] Linking target lib/librte_graph.so.24.0 00:01:53.316 [602/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_process.c.o 00:01:53.316 [603/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_stats.c.o 00:01:53.316 [604/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:01:53.316 [605/710] Linking static target lib/librte_pdcp.a 00:01:53.316 [606/710] Generating symbol file lib/librte_port.so.24.0.p/librte_port.so.24.0.symbols 00:01:53.583 [607/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:01:53.583 [608/710] Generating symbol file lib/librte_graph.so.24.0.p/librte_graph.so.24.0.symbols 00:01:53.583 [609/710] Linking target lib/librte_table.so.24.0 00:01:53.583 [610/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_common.c.o 00:01:53.583 [611/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:01:53.866 [612/710] Generating symbol file lib/librte_table.so.24.0.p/librte_table.so.24.0.symbols 00:01:53.866 [613/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:01:53.866 [614/710] Generating lib/pdcp.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.866 [615/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:01:53.866 [616/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:01:53.866 [617/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:01:53.866 [618/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:01:53.866 [619/710] Linking target lib/librte_pdcp.so.24.0 00:01:54.162 [620/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:01:54.162 [621/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:01:54.162 [622/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:01:54.162 [623/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:01:54.162 [624/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:01:54.438 [625/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:01:54.438 [626/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:01:54.438 [627/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_cman.c.o 00:01:54.438 [628/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:01:54.438 [629/710] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:01:54.703 [630/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:01:54.703 [631/710] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:01:54.966 [632/710] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:01:54.966 [633/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:01:54.966 [634/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:01:55.226 [635/710] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:01:55.226 [636/710] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:01:55.226 [637/710] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:01:55.226 [638/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 
00:01:55.226 [639/710] Compiling C object app/dpdk-testpmd.p/test-pmd_recycle_mbufs.c.o 00:01:55.226 [640/710] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:01:55.226 [641/710] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:01:55.226 [642/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:01:55.486 [643/710] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:01:55.486 [644/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:01:55.486 [645/710] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:01:55.486 [646/710] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:01:55.745 [647/710] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:01:55.745 [648/710] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:01:55.745 [649/710] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:01:56.004 [650/710] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:01:56.004 [651/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:01:56.004 [652/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:01:56.263 [653/710] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:01:56.263 [654/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:01:56.263 [655/710] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:01:56.263 [656/710] Linking static target drivers/libtmp_rte_net_i40e.a 00:01:56.263 [657/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:01:56.522 [658/710] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:01:56.522 [659/710] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:01:56.781 [660/710] Compiling C object drivers/librte_net_i40e.so.24.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:01:56.781 [661/710] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:01:56.781 [662/710] Linking static target drivers/librte_net_i40e.a 00:01:56.781 [663/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:01:56.781 [664/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:01:57.039 [665/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:01:57.039 [666/710] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:01:57.039 [667/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:01:57.297 [668/710] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.297 [669/710] Compiling C object lib/librte_node.a.p/node_ip6_rewrite.c.o 00:01:57.297 [670/710] Linking target drivers/librte_net_i40e.so.24.0 00:01:57.555 [671/710] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:01:57.813 [672/710] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:01:57.813 [673/710] Linking static target lib/librte_node.a 00:01:58.072 [674/710] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:01:58.072 [675/710] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.330 [676/710] Linking target lib/librte_node.so.24.0 00:01:59.265 [677/710] Compiling C object 
app/dpdk-test-mldev.p/test-mldev_test_inference_common.c.o 00:01:59.523 [678/710] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:01:59.781 [679/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:02:01.155 [680/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:02:02.091 [681/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:02:07.355 [682/710] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:46.078 [683/710] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:46.078 [684/710] Linking static target lib/librte_vhost.a 00:02:46.078 [685/710] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.078 [686/710] Linking target lib/librte_vhost.so.24.0 00:02:48.618 [687/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:02:48.618 [688/710] Linking static target lib/librte_pipeline.a 00:02:49.185 [689/710] Linking target app/dpdk-proc-info 00:02:49.443 [690/710] Linking target app/dpdk-test-flow-perf 00:02:49.443 [691/710] Linking target app/dpdk-pdump 00:02:49.443 [692/710] Linking target app/dpdk-dumpcap 00:02:49.443 [693/710] Linking target app/dpdk-test-cmdline 00:02:49.443 [694/710] Linking target app/dpdk-test-gpudev 00:02:49.443 [695/710] Linking target app/dpdk-test-acl 00:02:49.443 [696/710] Linking target app/dpdk-test-mldev 00:02:49.443 [697/710] Linking target app/dpdk-test-sad 00:02:49.443 [698/710] Linking target app/dpdk-test-regex 00:02:49.443 [699/710] Linking target app/dpdk-graph 00:02:49.443 [700/710] Linking target app/dpdk-test-fib 00:02:49.443 [701/710] Linking target app/dpdk-test-pipeline 00:02:49.443 [702/710] Linking target app/dpdk-test-dma-perf 00:02:49.443 [703/710] Linking target app/dpdk-test-crypto-perf 00:02:49.443 [704/710] Linking target app/dpdk-test-compress-perf 00:02:49.443 [705/710] Linking target app/dpdk-test-security-perf 00:02:49.443 [706/710] Linking target app/dpdk-test-bbdev 00:02:49.443 [707/710] Linking target app/dpdk-test-eventdev 00:02:49.443 [708/710] Linking target app/dpdk-testpmd 00:02:51.346 [709/710] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.346 [710/710] Linking target lib/librte_pipeline.so.24.0 00:02:51.346 03:35:10 -- common/autobuild_common.sh@187 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j48 install 00:02:51.346 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp' 00:02:51.346 [0/1] Installing files. 
00:02:51.607 Installing subdir /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples 00:02:51.607 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:51.607 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:51.608 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:51.608 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:51.608 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:51.608 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_route.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:51.608 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:51.608 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:51.608 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:51.608 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:51.608 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:51.608 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:51.608 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:51.608 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:51.608 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:51.608 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:51.608 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:51.608 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_fib.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 
00:02:51.608 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:51.608 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:51.608 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:51.608 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:51.608 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:51.608 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:51.608 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:51.608 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:51.608 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:51.608 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:51.608 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:51.608 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:51.608 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:51.608 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:51.608 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:51.608 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:02:51.608 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:02:51.608 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:02:51.608 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/Makefile 
to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:02:51.608 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/flow_blocks.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:02:51.608 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:51.608 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:51.608 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:51.608 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:51.608 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:51.608 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:51.608 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:51.608 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:51.608 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:51.608 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:51.608 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:51.608 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:51.608 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:51.608 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:51.608 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:51.608 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:51.608 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/pkt_group.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common 00:02:51.608 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/neon/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/neon 00:02:51.608 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/altivec/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/altivec 00:02:51.608 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/sse/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/sse 00:02:51.608 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/ptpclient.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:02:51.608 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:02:51.608 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:02:51.608 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:02:51.608 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:02:51.608 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:02:51.608 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:02:51.608 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:02:51.608 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:51.608 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:51.608 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:51.608 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:51.608 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:51.608 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:51.608 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:51.608 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:51.608 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:51.608 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:51.608 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:51.608 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:51.609 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:51.609 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:51.609 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:51.609 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:51.609 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:51.609 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:51.609 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:51.609 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:51.873 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:51.873 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/app_thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:51.873 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_ov.cfg to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:51.873 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:51.873 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:51.873 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:51.873 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cmdline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:51.873 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:51.873 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:51.873 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/stats.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:51.873 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:51.873 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:51.873 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_red.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:51.873 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_pie.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:51.873 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:51.873 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:51.873 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:51.873 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:51.873 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:51.873 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:51.873 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 
00:02:51.873 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:51.873 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:51.873 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:51.873 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:51.873 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:51.873 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:51.873 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/vdpa_blk_compact.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:51.873 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/virtio_net.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:51.873 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:51.873 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:51.874 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:51.874 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk_spec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:51.874 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:51.874 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk_compat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:51.874 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:51.874 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:51.874 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:51.874 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:51.874 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_aes.c 
to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:51.874 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_sha.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:51.874 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_tdes.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:51.874 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:51.874 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:51.874 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:51.874 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_rsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:51.874 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:51.874 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:51.874 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_gcm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:51.874 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_cmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:51.874 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_xts.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:51.874 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_hmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:51.874 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ccm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:51.874 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:51.874 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:02:51.874 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:02:51.874 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/Makefile to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:02:51.874 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:02:51.874 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/dmafwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:02:51.874 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process 00:02:51.874 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:51.874 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:51.874 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:51.874 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:51.874 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:51.874 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:51.874 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:51.874 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:51.874 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:51.874 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:02:51.874 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:02:51.874 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:02:51.874 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:51.874 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:51.874 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:51.874 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:51.874 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:51.874 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:51.874 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:02:51.874 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:02:51.874 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:02:51.874 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:02:51.874 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:02:51.874 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:02:51.874 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:02:51.874 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:02:51.874 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:02:51.874 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:02:51.874 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:02:51.874 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:02:51.874 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:02:51.874 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:51.874 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:51.874 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:51.874 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:51.874 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:02:51.874 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:02:51.874 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/basicfwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:02:51.874 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:02:51.874 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-macsec/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:02:51.874 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-macsec/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:02:51.874 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:02:51.874 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:02:51.875 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:02:51.875 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:02:51.875 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:51.875 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:51.875 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:51.875 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:51.875 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:51.875 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:51.875 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:51.875 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:51.875 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:51.875 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:51.875 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep1.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:51.875 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp4.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:51.875 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:51.875 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:51.875 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_process.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:51.875 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:51.875 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:51.875 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/rt.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:51.875 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:51.875 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:51.875 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:51.875 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:51.875 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:51.875 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:51.875 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:51.875 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:51.875 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:51.875 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp6.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:51.875 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:51.875 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep0.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:51.875 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:51.875 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:51.875 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:51.875 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:51.875 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:51.875 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:51.875 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/load_env.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:51.875 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:51.875 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:51.875 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/run_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:51.875 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:51.875 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:51.875 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:51.875 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:51.875 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:51.875 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:51.875 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:51.875 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:51.875 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:51.875 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:51.875 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:51.875 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.sh to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:51.875 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:51.875 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:51.875 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:51.875 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:51.875 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/linux_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:51.875 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:51.875 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:51.875 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:02:51.875 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:02:51.875 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd 00:02:51.875 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_node/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:02:51.875 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_node/node.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:02:51.875 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/args.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:51.875 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:51.875 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:51.875 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/main.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:51.875 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:51.875 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:51.875 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:02:51.876 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:51.876 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:51.876 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:51.876 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:51.876 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:51.876 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:51.876 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:51.876 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:51.876 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:51.876 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:51.876 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:51.876 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:51.876 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:51.876 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:51.876 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:51.876 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:51.876 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:51.876 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:51.876 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:51.876 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:51.876 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:51.876 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:51.876 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:51.876 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:51.876 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:51.876 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:51.876 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:51.876 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:51.876 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/rss.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:51.876 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:51.876 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/firewall.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:51.876 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/tap.cli to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:51.876 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:51.876 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:51.876 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:51.876 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t1.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:51.876 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t3.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:51.876 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/README to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:51.876 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/dummy.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:51.876 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t2.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:51.876 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:02:51.876 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:02:51.876 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:02:51.876 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:02:51.876 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:02:51.876 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:02:51.876 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:02:51.876 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:02:51.876 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:51.876 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:51.876 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:51.876 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:51.876 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:51.876 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:51.876 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:51.876 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:51.876 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:51.876 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:51.876 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/rss.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:51.876 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:51.876 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:51.876 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:51.876 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:51.876 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:51.876 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ethdev.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:51.876 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:51.876 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:51.876 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:51.876 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:51.876 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:51.876 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:51.876 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/rss.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:51.876 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:51.876 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:51.876 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:51.876 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_routing_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:51.877 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:51.877 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:51.877 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:51.877 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:51.877 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:51.877 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:51.877 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:51.877 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:51.877 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:51.877 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:51.877 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:51.877 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/packet.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:51.877 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:51.877 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:51.877 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:51.877 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/pcap.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:51.877 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:51.877 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:51.877 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec_sa.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:51.877 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:51.877 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:51.877 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:51.877 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:51.877 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:51.877 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:51.877 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:51.877 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:51.877 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:51.877 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:51.877 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool 00:02:51.877 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:51.877 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:51.877 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:51.877 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:51.877 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:51.877 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:51.877 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:51.877 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:02:51.877 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:02:51.877 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/ntb_fwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:02:51.877 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:02:51.877 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:02:51.877 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:02:51.877 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:02:51.877 Installing lib/librte_log.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.877 Installing lib/librte_log.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.877 Installing lib/librte_kvargs.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.877 Installing lib/librte_kvargs.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.877 Installing lib/librte_telemetry.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.877 Installing lib/librte_telemetry.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.877 Installing lib/librte_eal.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.877 Installing lib/librte_eal.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.878 Installing lib/librte_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.878 Installing lib/librte_ring.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.878 Installing lib/librte_rcu.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.878 Installing lib/librte_rcu.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.878 Installing lib/librte_mempool.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.878 Installing lib/librte_mempool.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.878 Installing lib/librte_mbuf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.878 Installing lib/librte_mbuf.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.878 Installing lib/librte_net.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.878 Installing lib/librte_net.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.878 Installing lib/librte_meter.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.878 Installing lib/librte_meter.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.878 Installing lib/librte_ethdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.878 Installing lib/librte_ethdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.879 Installing lib/librte_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.879 Installing lib/librte_pci.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.879 Installing lib/librte_cmdline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.879 Installing lib/librte_cmdline.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.879 Installing lib/librte_metrics.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.879 Installing lib/librte_metrics.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.879 Installing lib/librte_hash.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.879 Installing lib/librte_hash.so.24.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.879 Installing lib/librte_timer.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.879 Installing lib/librte_timer.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.879 Installing lib/librte_acl.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.879 Installing lib/librte_acl.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.879 Installing lib/librte_bbdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.879 Installing lib/librte_bbdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.879 Installing lib/librte_bitratestats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.879 Installing lib/librte_bitratestats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.879 Installing lib/librte_bpf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.879 Installing lib/librte_bpf.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.879 Installing lib/librte_cfgfile.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.879 Installing lib/librte_cfgfile.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.879 Installing lib/librte_compressdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.879 Installing lib/librte_compressdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.879 Installing lib/librte_cryptodev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.879 Installing lib/librte_cryptodev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.879 Installing lib/librte_distributor.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.879 Installing lib/librte_distributor.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.879 Installing lib/librte_dmadev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.879 Installing lib/librte_dmadev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.879 Installing lib/librte_efd.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.879 Installing lib/librte_efd.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.879 Installing lib/librte_eventdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.879 Installing lib/librte_eventdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.879 Installing lib/librte_dispatcher.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.879 Installing lib/librte_dispatcher.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.879 Installing lib/librte_gpudev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.879 Installing lib/librte_gpudev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.879 Installing lib/librte_gro.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.879 Installing lib/librte_gro.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.879 Installing lib/librte_gso.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.879 Installing lib/librte_gso.so.24.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.879 Installing lib/librte_ip_frag.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.879 Installing lib/librte_ip_frag.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.879 Installing lib/librte_jobstats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.879 Installing lib/librte_jobstats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.879 Installing lib/librte_latencystats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.879 Installing lib/librte_latencystats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.879 Installing lib/librte_lpm.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.879 Installing lib/librte_lpm.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.879 Installing lib/librte_member.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.879 Installing lib/librte_member.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.879 Installing lib/librte_pcapng.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.879 Installing lib/librte_pcapng.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.879 Installing lib/librte_power.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.879 Installing lib/librte_power.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.879 Installing lib/librte_rawdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.879 Installing lib/librte_rawdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.879 Installing lib/librte_regexdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.879 Installing lib/librte_regexdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.879 Installing lib/librte_mldev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.879 Installing lib/librte_mldev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.879 Installing lib/librte_rib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.879 Installing lib/librte_rib.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.879 Installing lib/librte_reorder.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:52.453 Installing lib/librte_reorder.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:52.453 Installing lib/librte_sched.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:52.453 Installing lib/librte_sched.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:52.453 Installing lib/librte_security.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:52.453 Installing lib/librte_security.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:52.453 Installing lib/librte_stack.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:52.453 Installing lib/librte_stack.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:52.453 Installing lib/librte_vhost.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:52.453 Installing lib/librte_vhost.so.24.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:52.453 Installing lib/librte_ipsec.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:52.453 Installing lib/librte_ipsec.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:52.453 Installing lib/librte_pdcp.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:52.453 Installing lib/librte_pdcp.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:52.453 Installing lib/librte_fib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:52.453 Installing lib/librte_fib.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:52.453 Installing lib/librte_port.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:52.453 Installing lib/librte_port.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:52.453 Installing lib/librte_pdump.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:52.453 Installing lib/librte_pdump.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:52.453 Installing lib/librte_table.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:52.453 Installing lib/librte_table.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:52.453 Installing lib/librte_pipeline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:52.453 Installing lib/librte_pipeline.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:52.453 Installing lib/librte_graph.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:52.453 Installing lib/librte_graph.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:52.453 Installing lib/librte_node.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:52.453 Installing lib/librte_node.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:52.453 Installing drivers/librte_bus_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:52.453 Installing drivers/librte_bus_pci.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0 00:02:52.453 Installing drivers/librte_bus_vdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:52.453 Installing drivers/librte_bus_vdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0 00:02:52.453 Installing drivers/librte_mempool_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:52.453 Installing drivers/librte_mempool_ring.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0 00:02:52.453 Installing drivers/librte_net_i40e.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:52.453 Installing drivers/librte_net_i40e.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0 00:02:52.453 Installing app/dpdk-dumpcap to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:52.453 Installing app/dpdk-graph to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:52.453 Installing app/dpdk-pdump to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:52.453 Installing app/dpdk-proc-info to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:52.453 Installing app/dpdk-test-acl to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:52.453 Installing 
app/dpdk-test-bbdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:52.453 Installing app/dpdk-test-cmdline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:52.453 Installing app/dpdk-test-compress-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:52.453 Installing app/dpdk-test-crypto-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:52.453 Installing app/dpdk-test-dma-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:52.453 Installing app/dpdk-test-eventdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:52.453 Installing app/dpdk-test-fib to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:52.453 Installing app/dpdk-test-flow-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:52.453 Installing app/dpdk-test-gpudev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:52.453 Installing app/dpdk-test-mldev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:52.453 Installing app/dpdk-test-pipeline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:52.453 Installing app/dpdk-testpmd to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:52.453 Installing app/dpdk-test-regex to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:52.453 Installing app/dpdk-test-sad to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:52.453 Installing app/dpdk-test-security-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:52.454 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/rte_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.454 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/log/rte_log.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.454 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/kvargs/rte_kvargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.454 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/telemetry/rte_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.454 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:52.454 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:52.454 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:52.454 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:52.454 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:52.454 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:52.454 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_pause.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:52.454 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:52.454 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:52.454 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:52.454 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:52.454 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:52.454 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.454 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.454 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.454 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.454 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.454 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.454 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_pause.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.454 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.454 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.454 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rtm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.454 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.454 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.454 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.454 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.454 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.454 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.454 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.454 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_alarm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.454 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitmap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.454 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.454 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_branch_prediction.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.454 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bus.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.454 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_class.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.454 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.454 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_compat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.454 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_debug.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.454 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_dev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.454 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_devargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.454 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.454 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_memconfig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.454 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.454 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_errno.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.454 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_epoll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.454 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_fbarray.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.454 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hexdump.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.454 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hypervisor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.454 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_interrupts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.454 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_keepalive.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.454 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_launch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.454 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.454 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_lock_annotations.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.454 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_malloc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.454 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_mcslock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.454 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memory.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.454 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memzone.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.454 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.454 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_features.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.454 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_per_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.454 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pflock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.454 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_random.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.454 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_reciprocal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.454 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqcount.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.454 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.454 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.454 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service_component.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.454 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_stdatomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.454 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_string_fns.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.454 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_tailq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.454 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.454 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_ticketlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.454 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_time.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.454 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.454 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.454 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point_register.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.454 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_uuid.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.454 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_version.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.454 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_vfio.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.454 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/linux/include/rte_os.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.454 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.454 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.455 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.455 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.455 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_c11_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.455 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_generic_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.455 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.455 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.455 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.455 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.455 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_zc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.455 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.455 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.455 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rcu/rte_rcu_qsbr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.455 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.455 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.455 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.455 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.455 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_ptype.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.455 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.455 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_dyn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.455 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.455 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_tcp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.455 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_udp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.455 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_tls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.455 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_dtls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.455 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.455 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_sctp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.455 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_icmp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.455 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_arp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.455 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ether.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.455 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_macsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.455 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_vxlan.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.455 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gre.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.455 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gtp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.455 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.455 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.455 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_mpls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.455 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_higig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.455 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ecpri.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.455 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_pdcp_hdr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.455 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_geneve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.455 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_l2tpv2.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.455 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ppp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.455 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.455 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/meter/rte_meter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.455 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_cman.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.455 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.455 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.455 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_dev_info.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.455 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.455 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.455 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.455 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.455 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.455 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.455 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.455 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_eth_ctrl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.455 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pci/rte_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.455 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.455 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.455 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_num.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.455 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.455 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.455 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_string.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.455 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_rdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.455 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_vt100.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.455 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_socket.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.455 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_cirbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.455 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_portlist.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.455 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.455 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 
00:02:52.455 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_fbk_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.455 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.455 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.455 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_jhash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.455 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.455 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.455 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.455 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.455 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_sw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.455 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.455 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_x86_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.455 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/timer/rte_timer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.455 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.455 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl_osdep.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.455 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.455 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.455 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_op.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.455 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bitratestats/rte_bitrate.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.455 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/bpf_def.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.456 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.456 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.456 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cfgfile/rte_cfgfile.h 
to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.456 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_compressdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.456 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_comp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.456 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.456 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.456 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.456 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_sym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.456 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_asym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.456 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.456 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/distributor/rte_distributor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.456 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.456 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.456 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/efd/rte_efd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.456 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.456 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_dma_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.456 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.456 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.456 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.456 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_timer_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.456 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.456 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.456 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.456 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dispatcher/rte_dispatcher.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.456 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gpudev/rte_gpudev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.456 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gro/rte_gro.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.456 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gso/rte_gso.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.456 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ip_frag/rte_ip_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.456 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/jobstats/rte_jobstats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.456 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/latencystats/rte_latencystats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.456 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.456 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.456 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.456 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.456 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.456 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.456 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.456 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/member/rte_member.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.456 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pcapng/rte_pcapng.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.456 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.456 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_guest_channel.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.456 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_pmd_mgmt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.456 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_uncore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.456 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.456 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.456 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.456 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.456 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.456 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mldev/rte_mldev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.456 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mldev/rte_mldev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.456 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.456 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.456 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/reorder/rte_reorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.456 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_approx.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.456 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_red.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.456 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.456 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.456 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_pie.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.456 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.456 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.456 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.456 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_std.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.456 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.456 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.456 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_c11.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.456 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_stubs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.456 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vdpa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.456 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.456 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_async.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.456 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.456 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.456 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.456 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.456 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.456 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdcp/rte_pdcp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.456 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdcp/rte_pdcp_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.456 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.456 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.456 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.456 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.456 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.456 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ras.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.456 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.456 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.456 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.456 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.456 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sym_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.456 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.457 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.457 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.457 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.457 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.457 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.457 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdump/rte_pdump.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.457 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.457 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.457 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.457 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.457 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_learner.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.457 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_selector.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.457 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_wm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.457 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.457 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.457 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_array.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.457 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.457 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_cuckoo.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.457 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.457 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.457 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm_ipv6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.457 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_stub.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.457 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.457 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.457 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.457 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.457 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_port_in_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.457 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_table_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.457 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.457 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.457 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_extern.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.457 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_ctl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.457 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.457 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.457 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_model_mcore_dispatch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.457 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_model_rtc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.457 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_worker_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.457 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_eth_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.457 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_ip4_api.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.457 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_ip6_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.457 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_udp4_input_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.457 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/pci/rte_bus_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.457 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.457 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.457 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/dpdk-cmdline-gen.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:52.457 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-devbind.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:52.457 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-pmdinfo.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:52.457 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-telemetry.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:52.457 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-hugepages.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:52.457 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-rss-flows.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:52.457 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/rte_build_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.457 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig 00:02:52.457 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig 00:02:52.457 Installing symlink pointing to librte_log.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_log.so.24 00:02:52.457 Installing symlink pointing to librte_log.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_log.so 00:02:52.457 Installing symlink pointing to librte_kvargs.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so.24 00:02:52.457 Installing symlink pointing to librte_kvargs.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so 00:02:52.457 Installing symlink pointing to librte_telemetry.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so.24 00:02:52.457 Installing symlink pointing to librte_telemetry.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so 00:02:52.457 Installing symlink pointing to librte_eal.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so.24 00:02:52.457 Installing symlink pointing to librte_eal.so.24 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so 00:02:52.457 Installing symlink pointing to librte_ring.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so.24 00:02:52.457 Installing symlink pointing to librte_ring.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so 00:02:52.457 Installing symlink pointing to librte_rcu.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so.24 00:02:52.457 Installing symlink pointing to librte_rcu.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so 00:02:52.457 Installing symlink pointing to librte_mempool.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so.24 00:02:52.457 Installing symlink pointing to librte_mempool.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so 00:02:52.457 Installing symlink pointing to librte_mbuf.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so.24 00:02:52.457 Installing symlink pointing to librte_mbuf.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so 00:02:52.457 Installing symlink pointing to librte_net.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so.24 00:02:52.457 Installing symlink pointing to librte_net.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so 00:02:52.457 Installing symlink pointing to librte_meter.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so.24 00:02:52.458 Installing symlink pointing to librte_meter.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so 00:02:52.458 Installing symlink pointing to librte_ethdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so.24 00:02:52.458 Installing symlink pointing to librte_ethdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so 00:02:52.458 Installing symlink pointing to librte_pci.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so.24 00:02:52.458 Installing symlink pointing to librte_pci.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so 00:02:52.458 Installing symlink pointing to librte_cmdline.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so.24 00:02:52.458 Installing symlink pointing to librte_cmdline.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so 00:02:52.458 Installing symlink pointing to librte_metrics.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so.24 00:02:52.458 Installing symlink pointing to librte_metrics.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so 00:02:52.458 Installing symlink pointing to librte_hash.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so.24 00:02:52.458 Installing symlink pointing to librte_hash.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so 00:02:52.458 Installing symlink pointing to librte_timer.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so.24 00:02:52.458 Installing symlink pointing to librte_timer.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so 00:02:52.458 
Installing symlink pointing to librte_acl.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so.24 00:02:52.458 Installing symlink pointing to librte_acl.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so 00:02:52.458 Installing symlink pointing to librte_bbdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so.24 00:02:52.458 Installing symlink pointing to librte_bbdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so 00:02:52.458 Installing symlink pointing to librte_bitratestats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so.24 00:02:52.458 Installing symlink pointing to librte_bitratestats.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so 00:02:52.458 Installing symlink pointing to librte_bpf.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so.24 00:02:52.458 Installing symlink pointing to librte_bpf.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so 00:02:52.458 Installing symlink pointing to librte_cfgfile.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so.24 00:02:52.458 Installing symlink pointing to librte_cfgfile.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so 00:02:52.458 Installing symlink pointing to librte_compressdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so.24 00:02:52.458 Installing symlink pointing to librte_compressdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so 00:02:52.458 Installing symlink pointing to librte_cryptodev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so.24 00:02:52.458 Installing symlink pointing to librte_cryptodev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so 00:02:52.458 Installing symlink pointing to librte_distributor.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so.24 00:02:52.458 Installing symlink pointing to librte_distributor.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so 00:02:52.458 Installing symlink pointing to librte_dmadev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so.24 00:02:52.458 Installing symlink pointing to librte_dmadev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so 00:02:52.458 Installing symlink pointing to librte_efd.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so.24 00:02:52.458 Installing symlink pointing to librte_efd.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so 00:02:52.458 Installing symlink pointing to librte_eventdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so.24 00:02:52.458 Installing symlink pointing to librte_eventdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so 00:02:52.458 Installing symlink pointing to librte_dispatcher.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dispatcher.so.24 00:02:52.458 Installing symlink pointing to librte_dispatcher.so.24 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dispatcher.so 00:02:52.458 Installing symlink pointing to librte_gpudev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so.24 00:02:52.458 Installing symlink pointing to librte_gpudev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so 00:02:52.458 Installing symlink pointing to librte_gro.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so.24 00:02:52.458 Installing symlink pointing to librte_gro.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so 00:02:52.458 Installing symlink pointing to librte_gso.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so.24 00:02:52.458 Installing symlink pointing to librte_gso.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so 00:02:52.458 Installing symlink pointing to librte_ip_frag.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so.24 00:02:52.458 Installing symlink pointing to librte_ip_frag.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so 00:02:52.458 Installing symlink pointing to librte_jobstats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so.24 00:02:52.458 Installing symlink pointing to librte_jobstats.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so 00:02:52.458 Installing symlink pointing to librte_latencystats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so.24 00:02:52.458 Installing symlink pointing to librte_latencystats.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so 00:02:52.458 Installing symlink pointing to librte_lpm.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so.24 00:02:52.458 Installing symlink pointing to librte_lpm.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so 00:02:52.458 Installing symlink pointing to librte_member.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so.24 00:02:52.458 Installing symlink pointing to librte_member.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so 00:02:52.458 Installing symlink pointing to librte_pcapng.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so.24 00:02:52.458 Installing symlink pointing to librte_pcapng.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so 00:02:52.458 Installing symlink pointing to librte_power.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so.24 00:02:52.458 Installing symlink pointing to librte_power.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so 00:02:52.458 './librte_bus_pci.so' -> 'dpdk/pmds-24.0/librte_bus_pci.so' 00:02:52.458 './librte_bus_pci.so.24' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24' 00:02:52.458 './librte_bus_pci.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24.0' 00:02:52.458 './librte_bus_vdev.so' -> 'dpdk/pmds-24.0/librte_bus_vdev.so' 00:02:52.458 './librte_bus_vdev.so.24' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24' 00:02:52.458 './librte_bus_vdev.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24.0' 00:02:52.458 './librte_mempool_ring.so' -> 
'dpdk/pmds-24.0/librte_mempool_ring.so' 00:02:52.458 './librte_mempool_ring.so.24' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24' 00:02:52.458 './librte_mempool_ring.so.24.0' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24.0' 00:02:52.458 './librte_net_i40e.so' -> 'dpdk/pmds-24.0/librte_net_i40e.so' 00:02:52.458 './librte_net_i40e.so.24' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24' 00:02:52.458 './librte_net_i40e.so.24.0' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24.0' 00:02:52.458 Installing symlink pointing to librte_rawdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so.24 00:02:52.458 Installing symlink pointing to librte_rawdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so 00:02:52.458 Installing symlink pointing to librte_regexdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so.24 00:02:52.458 Installing symlink pointing to librte_regexdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so 00:02:52.458 Installing symlink pointing to librte_mldev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mldev.so.24 00:02:52.458 Installing symlink pointing to librte_mldev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mldev.so 00:02:52.458 Installing symlink pointing to librte_rib.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so.24 00:02:52.458 Installing symlink pointing to librte_rib.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so 00:02:52.458 Installing symlink pointing to librte_reorder.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so.24 00:02:52.458 Installing symlink pointing to librte_reorder.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so 00:02:52.458 Installing symlink pointing to librte_sched.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so.24 00:02:52.458 Installing symlink pointing to librte_sched.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so 00:02:52.458 Installing symlink pointing to librte_security.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so.24 00:02:52.458 Installing symlink pointing to librte_security.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so 00:02:52.458 Installing symlink pointing to librte_stack.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so.24 00:02:52.458 Installing symlink pointing to librte_stack.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so 00:02:52.458 Installing symlink pointing to librte_vhost.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so.24 00:02:52.458 Installing symlink pointing to librte_vhost.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so 00:02:52.458 Installing symlink pointing to librte_ipsec.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so.24 00:02:52.458 Installing symlink pointing to librte_ipsec.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so 00:02:52.458 Installing symlink pointing to librte_pdcp.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdcp.so.24 00:02:52.458 Installing 
symlink pointing to librte_pdcp.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdcp.so 00:02:52.458 Installing symlink pointing to librte_fib.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so.24 00:02:52.458 Installing symlink pointing to librte_fib.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so 00:02:52.458 Installing symlink pointing to librte_port.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so.24 00:02:52.458 Installing symlink pointing to librte_port.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so 00:02:52.458 Installing symlink pointing to librte_pdump.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so.24 00:02:52.458 Installing symlink pointing to librte_pdump.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so 00:02:52.458 Installing symlink pointing to librte_table.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so.24 00:02:52.458 Installing symlink pointing to librte_table.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so 00:02:52.458 Installing symlink pointing to librte_pipeline.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so.24 00:02:52.458 Installing symlink pointing to librte_pipeline.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so 00:02:52.459 Installing symlink pointing to librte_graph.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so.24 00:02:52.459 Installing symlink pointing to librte_graph.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so 00:02:52.459 Installing symlink pointing to librte_node.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so.24 00:02:52.459 Installing symlink pointing to librte_node.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so 00:02:52.459 Installing symlink pointing to librte_bus_pci.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24 00:02:52.459 Installing symlink pointing to librte_bus_pci.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:02:52.459 Installing symlink pointing to librte_bus_vdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24 00:02:52.459 Installing symlink pointing to librte_bus_vdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:02:52.459 Installing symlink pointing to librte_mempool_ring.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24 00:02:52.459 Installing symlink pointing to librte_mempool_ring.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:02:52.459 Installing symlink pointing to librte_net_i40e.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24 00:02:52.459 Installing symlink pointing to librte_net_i40e.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:02:52.459 Running custom install script '/bin/sh 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-24.0' 00:02:52.459 03:35:11 -- common/autobuild_common.sh@189 -- $ uname -s 00:02:52.459 03:35:11 -- common/autobuild_common.sh@189 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:02:52.459 03:35:11 -- common/autobuild_common.sh@200 -- $ cat 00:02:52.459 03:35:11 -- common/autobuild_common.sh@205 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:52.459 00:02:52.459 real 1m24.488s 00:02:52.459 user 18m0.512s 00:02:52.459 sys 2m6.364s 00:02:52.459 03:35:11 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:02:52.459 03:35:11 -- common/autotest_common.sh@10 -- $ set +x 00:02:52.459 ************************************ 00:02:52.459 END TEST build_native_dpdk 00:02:52.459 ************************************ 00:02:52.459 03:35:11 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:52.459 03:35:11 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:52.459 03:35:11 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:02:52.459 03:35:11 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:52.459 03:35:11 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:02:52.459 03:35:11 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:02:52.459 03:35:11 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:02:52.459 03:35:11 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --with-shared 00:02:52.718 Using /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig for additional libs... 00:02:52.718 DPDK libraries: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:52.718 DPDK includes: //var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:52.718 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:02:52.978 Using 'verbs' RDMA provider 00:03:03.523 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/isa-l/spdk-isal.log)...done. 00:03:11.677 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/isa-l-crypto/spdk-isal-crypto.log)...done. 00:03:11.936 Creating mk/config.mk...done. 00:03:11.936 Creating mk/cc.flags.mk...done. 00:03:11.936 Type 'make' to build. 00:03:11.936 03:35:30 -- spdk/autobuild.sh@69 -- $ run_test make make -j48 00:03:11.936 03:35:30 -- common/autotest_common.sh@1077 -- $ '[' 3 -le 1 ']' 00:03:11.936 03:35:30 -- common/autotest_common.sh@1083 -- $ xtrace_disable 00:03:11.936 03:35:30 -- common/autotest_common.sh@10 -- $ set +x 00:03:11.936 ************************************ 00:03:11.936 START TEST make 00:03:11.936 ************************************ 00:03:11.936 03:35:30 -- common/autotest_common.sh@1104 -- $ make -j48 00:03:11.936 make[1]: Nothing to be done for 'all'. 
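[Note] The configure step logged above links SPDK against the DPDK tree that was just installed under dpdk/build (headers in build/include, libraries and pkg-config files in build/lib). A minimal local equivalent, with illustrative paths and only the DPDK-related flags taken from the ./configure line above (the RDMA/fio/vfio-user flags are environment-specific and omitted here), would be roughly:

    # hypothetical local checkouts; flags mirror the logged ./configure invocation
    DPDK_PREFIX=$HOME/src/dpdk/build      # prefix produced by the DPDK install step
    cd $HOME/src/spdk
    ./configure --enable-debug --enable-werror \
        --with-dpdk="$DPDK_PREFIX" --with-shared
    make -j"$(nproc)"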
00:03:13.851 The Meson build system 00:03:13.851 Version: 1.3.1 00:03:13.851 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:03:13.851 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:03:13.851 Build type: native build 00:03:13.851 Project name: libvfio-user 00:03:13.851 Project version: 0.0.1 00:03:13.851 C compiler for the host machine: gcc (gcc 13.2.1 "gcc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:03:13.851 C linker for the host machine: gcc ld.bfd 2.39-16 00:03:13.851 Host machine cpu family: x86_64 00:03:13.851 Host machine cpu: x86_64 00:03:13.851 Run-time dependency threads found: YES 00:03:13.851 Library dl found: YES 00:03:13.851 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:03:13.851 Run-time dependency json-c found: YES 0.17 00:03:13.851 Run-time dependency cmocka found: YES 1.1.7 00:03:13.851 Program pytest-3 found: NO 00:03:13.851 Program flake8 found: NO 00:03:13.851 Program misspell-fixer found: NO 00:03:13.851 Program restructuredtext-lint found: NO 00:03:13.851 Program valgrind found: YES (/usr/bin/valgrind) 00:03:13.851 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:03:13.851 Compiler for C supports arguments -Wmissing-declarations: YES 00:03:13.851 Compiler for C supports arguments -Wwrite-strings: YES 00:03:13.851 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:03:13.851 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:03:13.851 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:03:13.851 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:03:13.851 Build targets in project: 8 00:03:13.851 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:03:13.851 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:03:13.851 00:03:13.851 libvfio-user 0.0.1 00:03:13.851 00:03:13.852 User defined options 00:03:13.852 buildtype : debug 00:03:13.852 default_library: shared 00:03:13.852 libdir : /usr/local/lib 00:03:13.852 00:03:13.852 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:14.426 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:03:14.426 [1/37] Compiling C object samples/lspci.p/lspci.c.o 00:03:14.686 [2/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:03:14.686 [3/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:03:14.686 [4/37] Compiling C object test/unit_tests.p/mocks.c.o 00:03:14.686 [5/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:03:14.686 [6/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:03:14.686 [7/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:03:14.686 [8/37] Compiling C object samples/null.p/null.c.o 00:03:14.686 [9/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:03:14.686 [10/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:03:14.686 [11/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:03:14.686 [12/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:03:14.686 [13/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:03:14.686 [14/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:03:14.686 [15/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:03:14.686 [16/37] Compiling C object samples/server.p/server.c.o 00:03:14.686 [17/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:03:14.686 [18/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:03:14.686 [19/37] Compiling C object samples/client.p/client.c.o 00:03:14.686 [20/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:03:14.686 [21/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:03:14.686 [22/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:03:14.686 [23/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:03:14.686 [24/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:03:14.686 [25/37] Linking target samples/client 00:03:14.686 [26/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:03:14.686 [27/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:03:14.686 [28/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:03:14.686 [29/37] Linking target lib/libvfio-user.so.0.0.1 00:03:14.950 [30/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:03:14.950 [31/37] Linking target test/unit_tests 00:03:14.950 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:03:15.214 [33/37] Linking target samples/null 00:03:15.214 [34/37] Linking target samples/server 00:03:15.214 [35/37] Linking target samples/lspci 00:03:15.214 [36/37] Linking target samples/shadow_ioeventfd_server 00:03:15.214 [37/37] Linking target samples/gpio-pci-idio-16 00:03:15.214 INFO: autodetecting backend as ninja 00:03:15.214 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 
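[Note] The libvfio-user submodule above is configured with Meson using the options shown in the "User defined options" summary (buildtype=debug, default_library=shared, libdir=/usr/local/lib) and then built with ninja and staged via a DESTDIR install, as the next log entries show. A sketch of that sequence with illustrative directories:

    # directories are hypothetical; option values match the logged summary
    SRC=$HOME/src/spdk/libvfio-user
    BUILD=$SRC/build-debug
    meson setup "$BUILD" "$SRC" -Dbuildtype=debug -Ddefault_library=shared -Dlibdir=/usr/local/lib
    ninja -C "$BUILD"
    DESTDIR=/tmp/libvfio-user-root meson install --quiet -C "$BUILD"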
00:03:15.214 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:03:15.787 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:03:15.787 ninja: no work to do. 00:03:27.985 CC lib/ut/ut.o 00:03:27.985 CC lib/log/log.o 00:03:27.985 CC lib/log/log_flags.o 00:03:27.985 CC lib/log/log_deprecated.o 00:03:27.985 CC lib/ut_mock/mock.o 00:03:27.985 LIB libspdk_ut_mock.a 00:03:27.985 SO libspdk_ut_mock.so.5.0 00:03:27.985 LIB libspdk_ut.a 00:03:27.985 LIB libspdk_log.a 00:03:27.985 SO libspdk_ut.so.1.0 00:03:27.985 SO libspdk_log.so.6.1 00:03:27.985 SYMLINK libspdk_ut_mock.so 00:03:27.985 SYMLINK libspdk_ut.so 00:03:27.985 SYMLINK libspdk_log.so 00:03:27.985 CC lib/ioat/ioat.o 00:03:27.985 CC lib/dma/dma.o 00:03:27.985 CXX lib/trace_parser/trace.o 00:03:27.985 CC lib/util/base64.o 00:03:27.985 CC lib/util/bit_array.o 00:03:27.985 CC lib/util/cpuset.o 00:03:27.985 CC lib/util/crc16.o 00:03:27.985 CC lib/util/crc32.o 00:03:27.985 CC lib/util/crc32c.o 00:03:27.985 CC lib/util/crc32_ieee.o 00:03:27.985 CC lib/util/crc64.o 00:03:27.985 CC lib/util/dif.o 00:03:27.985 CC lib/util/fd.o 00:03:27.985 CC lib/util/file.o 00:03:27.985 CC lib/util/hexlify.o 00:03:27.985 CC lib/util/iov.o 00:03:27.985 CC lib/util/math.o 00:03:27.985 CC lib/util/pipe.o 00:03:27.985 CC lib/util/strerror_tls.o 00:03:27.985 CC lib/util/string.o 00:03:27.985 CC lib/util/uuid.o 00:03:27.985 CC lib/util/fd_group.o 00:03:27.985 CC lib/util/xor.o 00:03:27.985 CC lib/util/zipf.o 00:03:27.985 CC lib/vfio_user/host/vfio_user_pci.o 00:03:27.985 CC lib/vfio_user/host/vfio_user.o 00:03:27.985 LIB libspdk_dma.a 00:03:27.985 SO libspdk_dma.so.3.0 00:03:27.985 SYMLINK libspdk_dma.so 00:03:27.985 LIB libspdk_ioat.a 00:03:27.985 SO libspdk_ioat.so.6.0 00:03:27.985 SYMLINK libspdk_ioat.so 00:03:27.985 LIB libspdk_vfio_user.a 00:03:27.985 SO libspdk_vfio_user.so.4.0 00:03:27.985 SYMLINK libspdk_vfio_user.so 00:03:27.985 LIB libspdk_util.a 00:03:27.985 SO libspdk_util.so.8.0 00:03:28.244 SYMLINK libspdk_util.so 00:03:28.244 CC lib/vmd/vmd.o 00:03:28.244 CC lib/env_dpdk/env.o 00:03:28.244 CC lib/conf/conf.o 00:03:28.244 CC lib/idxd/idxd.o 00:03:28.244 CC lib/rdma/common.o 00:03:28.244 CC lib/env_dpdk/memory.o 00:03:28.244 CC lib/vmd/led.o 00:03:28.244 CC lib/idxd/idxd_user.o 00:03:28.244 CC lib/json/json_parse.o 00:03:28.244 CC lib/rdma/rdma_verbs.o 00:03:28.244 CC lib/env_dpdk/pci.o 00:03:28.244 CC lib/idxd/idxd_kernel.o 00:03:28.244 CC lib/json/json_util.o 00:03:28.244 CC lib/env_dpdk/init.o 00:03:28.244 CC lib/json/json_write.o 00:03:28.244 CC lib/env_dpdk/threads.o 00:03:28.244 CC lib/env_dpdk/pci_vmd.o 00:03:28.244 CC lib/env_dpdk/pci_ioat.o 00:03:28.244 CC lib/env_dpdk/pci_virtio.o 00:03:28.244 CC lib/env_dpdk/pci_idxd.o 00:03:28.244 CC lib/env_dpdk/pci_event.o 00:03:28.244 CC lib/env_dpdk/sigbus_handler.o 00:03:28.244 CC lib/env_dpdk/pci_dpdk.o 00:03:28.244 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:28.244 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:28.244 LIB libspdk_trace_parser.a 00:03:28.244 SO libspdk_trace_parser.so.4.0 00:03:28.502 SYMLINK libspdk_trace_parser.so 00:03:28.502 LIB libspdk_conf.a 00:03:28.502 SO libspdk_conf.so.5.0 00:03:28.502 LIB libspdk_rdma.a 00:03:28.502 LIB libspdk_json.a 00:03:28.502 SO libspdk_rdma.so.5.0 00:03:28.502 SYMLINK libspdk_conf.so 00:03:28.502 SO libspdk_json.so.5.1 00:03:28.761 SYMLINK libspdk_rdma.so 00:03:28.761 SYMLINK 
libspdk_json.so 00:03:28.761 CC lib/jsonrpc/jsonrpc_server.o 00:03:28.761 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:28.761 CC lib/jsonrpc/jsonrpc_client.o 00:03:28.761 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:28.761 LIB libspdk_idxd.a 00:03:28.761 SO libspdk_idxd.so.11.0 00:03:29.019 SYMLINK libspdk_idxd.so 00:03:29.019 LIB libspdk_vmd.a 00:03:29.019 SO libspdk_vmd.so.5.0 00:03:29.019 SYMLINK libspdk_vmd.so 00:03:29.019 LIB libspdk_jsonrpc.a 00:03:29.019 SO libspdk_jsonrpc.so.5.1 00:03:29.276 SYMLINK libspdk_jsonrpc.so 00:03:29.276 CC lib/rpc/rpc.o 00:03:29.533 LIB libspdk_rpc.a 00:03:29.533 SO libspdk_rpc.so.5.0 00:03:29.533 SYMLINK libspdk_rpc.so 00:03:29.533 CC lib/trace/trace.o 00:03:29.533 CC lib/trace/trace_flags.o 00:03:29.533 CC lib/trace/trace_rpc.o 00:03:29.533 CC lib/sock/sock.o 00:03:29.533 CC lib/sock/sock_rpc.o 00:03:29.533 CC lib/notify/notify.o 00:03:29.533 CC lib/notify/notify_rpc.o 00:03:29.791 LIB libspdk_notify.a 00:03:29.791 SO libspdk_notify.so.5.0 00:03:29.791 LIB libspdk_trace.a 00:03:29.791 SYMLINK libspdk_notify.so 00:03:29.791 SO libspdk_trace.so.9.0 00:03:30.048 SYMLINK libspdk_trace.so 00:03:30.048 LIB libspdk_sock.a 00:03:30.048 SO libspdk_sock.so.8.0 00:03:30.048 CC lib/thread/thread.o 00:03:30.048 CC lib/thread/iobuf.o 00:03:30.048 SYMLINK libspdk_sock.so 00:03:30.307 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:30.307 CC lib/nvme/nvme_ctrlr.o 00:03:30.307 CC lib/nvme/nvme_fabric.o 00:03:30.307 CC lib/nvme/nvme_ns_cmd.o 00:03:30.307 CC lib/nvme/nvme_ns.o 00:03:30.307 CC lib/nvme/nvme_pcie_common.o 00:03:30.307 CC lib/nvme/nvme_pcie.o 00:03:30.307 CC lib/nvme/nvme_qpair.o 00:03:30.307 CC lib/nvme/nvme.o 00:03:30.307 CC lib/nvme/nvme_quirks.o 00:03:30.307 CC lib/nvme/nvme_transport.o 00:03:30.307 CC lib/nvme/nvme_discovery.o 00:03:30.307 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:30.307 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:30.307 CC lib/nvme/nvme_tcp.o 00:03:30.307 CC lib/nvme/nvme_opal.o 00:03:30.307 CC lib/nvme/nvme_io_msg.o 00:03:30.307 CC lib/nvme/nvme_poll_group.o 00:03:30.307 CC lib/nvme/nvme_cuse.o 00:03:30.307 CC lib/nvme/nvme_zns.o 00:03:30.307 CC lib/nvme/nvme_vfio_user.o 00:03:30.307 CC lib/nvme/nvme_rdma.o 00:03:30.307 LIB libspdk_env_dpdk.a 00:03:30.307 SO libspdk_env_dpdk.so.13.0 00:03:30.565 SYMLINK libspdk_env_dpdk.so 00:03:31.936 LIB libspdk_thread.a 00:03:31.936 SO libspdk_thread.so.9.0 00:03:31.936 SYMLINK libspdk_thread.so 00:03:31.936 CC lib/init/json_config.o 00:03:31.936 CC lib/virtio/virtio.o 00:03:31.936 CC lib/vfu_tgt/tgt_endpoint.o 00:03:31.936 CC lib/init/subsystem.o 00:03:31.936 CC lib/accel/accel.o 00:03:31.936 CC lib/blob/blobstore.o 00:03:31.936 CC lib/virtio/virtio_vhost_user.o 00:03:31.936 CC lib/accel/accel_rpc.o 00:03:31.936 CC lib/vfu_tgt/tgt_rpc.o 00:03:31.936 CC lib/init/subsystem_rpc.o 00:03:31.936 CC lib/virtio/virtio_vfio_user.o 00:03:31.936 CC lib/blob/request.o 00:03:31.936 CC lib/accel/accel_sw.o 00:03:31.936 CC lib/init/rpc.o 00:03:31.936 CC lib/blob/zeroes.o 00:03:31.936 CC lib/virtio/virtio_pci.o 00:03:31.936 CC lib/blob/blob_bs_dev.o 00:03:32.193 LIB libspdk_init.a 00:03:32.193 SO libspdk_init.so.4.0 00:03:32.193 LIB libspdk_virtio.a 00:03:32.193 SYMLINK libspdk_init.so 00:03:32.193 LIB libspdk_vfu_tgt.a 00:03:32.193 SO libspdk_virtio.so.6.0 00:03:32.193 SO libspdk_vfu_tgt.so.2.0 00:03:32.193 SYMLINK libspdk_vfu_tgt.so 00:03:32.193 SYMLINK libspdk_virtio.so 00:03:32.193 CC lib/event/app.o 00:03:32.193 CC lib/event/reactor.o 00:03:32.193 CC lib/event/log_rpc.o 00:03:32.193 CC lib/event/app_rpc.o 00:03:32.193 CC 
lib/event/scheduler_static.o 00:03:32.449 LIB libspdk_nvme.a 00:03:32.714 SO libspdk_nvme.so.12.0 00:03:32.714 LIB libspdk_event.a 00:03:32.714 SO libspdk_event.so.12.0 00:03:32.714 SYMLINK libspdk_event.so 00:03:32.971 SYMLINK libspdk_nvme.so 00:03:32.971 LIB libspdk_accel.a 00:03:32.971 SO libspdk_accel.so.14.0 00:03:32.971 SYMLINK libspdk_accel.so 00:03:33.230 CC lib/bdev/bdev.o 00:03:33.230 CC lib/bdev/bdev_rpc.o 00:03:33.230 CC lib/bdev/bdev_zone.o 00:03:33.230 CC lib/bdev/part.o 00:03:33.230 CC lib/bdev/scsi_nvme.o 00:03:34.626 LIB libspdk_blob.a 00:03:34.626 SO libspdk_blob.so.10.1 00:03:34.626 SYMLINK libspdk_blob.so 00:03:34.884 CC lib/lvol/lvol.o 00:03:34.884 CC lib/blobfs/blobfs.o 00:03:34.884 CC lib/blobfs/tree.o 00:03:35.449 LIB libspdk_blobfs.a 00:03:35.449 LIB libspdk_bdev.a 00:03:35.713 SO libspdk_blobfs.so.9.0 00:03:35.713 SO libspdk_bdev.so.14.0 00:03:35.713 LIB libspdk_lvol.a 00:03:35.713 SYMLINK libspdk_blobfs.so 00:03:35.713 SO libspdk_lvol.so.9.1 00:03:35.713 SYMLINK libspdk_bdev.so 00:03:35.713 SYMLINK libspdk_lvol.so 00:03:35.713 CC lib/nvmf/ctrlr.o 00:03:35.713 CC lib/nvmf/ctrlr_discovery.o 00:03:35.713 CC lib/nbd/nbd.o 00:03:35.713 CC lib/ublk/ublk.o 00:03:35.713 CC lib/scsi/dev.o 00:03:35.713 CC lib/ftl/ftl_core.o 00:03:35.713 CC lib/nvmf/ctrlr_bdev.o 00:03:35.713 CC lib/ublk/ublk_rpc.o 00:03:35.713 CC lib/scsi/lun.o 00:03:35.713 CC lib/nbd/nbd_rpc.o 00:03:35.713 CC lib/ftl/ftl_init.o 00:03:35.713 CC lib/nvmf/subsystem.o 00:03:35.713 CC lib/scsi/port.o 00:03:35.713 CC lib/ftl/ftl_layout.o 00:03:35.713 CC lib/scsi/scsi.o 00:03:35.713 CC lib/nvmf/nvmf_rpc.o 00:03:35.713 CC lib/nvmf/nvmf.o 00:03:35.713 CC lib/ftl/ftl_debug.o 00:03:35.713 CC lib/scsi/scsi_bdev.o 00:03:35.713 CC lib/nvmf/transport.o 00:03:35.713 CC lib/scsi/scsi_pr.o 00:03:35.713 CC lib/ftl/ftl_io.o 00:03:35.713 CC lib/ftl/ftl_sb.o 00:03:35.713 CC lib/nvmf/tcp.o 00:03:35.713 CC lib/scsi/scsi_rpc.o 00:03:35.713 CC lib/nvmf/vfio_user.o 00:03:35.713 CC lib/ftl/ftl_l2p.o 00:03:35.713 CC lib/scsi/task.o 00:03:35.713 CC lib/ftl/ftl_l2p_flat.o 00:03:35.713 CC lib/nvmf/rdma.o 00:03:35.713 CC lib/ftl/ftl_nv_cache.o 00:03:35.713 CC lib/ftl/ftl_band.o 00:03:35.713 CC lib/ftl/ftl_band_ops.o 00:03:35.713 CC lib/ftl/ftl_writer.o 00:03:35.713 CC lib/ftl/ftl_rq.o 00:03:35.713 CC lib/ftl/ftl_reloc.o 00:03:35.713 CC lib/ftl/ftl_l2p_cache.o 00:03:35.713 CC lib/ftl/ftl_p2l.o 00:03:35.713 CC lib/ftl/mngt/ftl_mngt.o 00:03:35.713 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:35.713 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:35.713 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:35.713 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:35.713 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:35.713 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:35.713 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:35.713 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:35.713 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:36.283 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:36.283 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:36.283 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:36.283 CC lib/ftl/utils/ftl_conf.o 00:03:36.283 CC lib/ftl/utils/ftl_md.o 00:03:36.283 CC lib/ftl/utils/ftl_mempool.o 00:03:36.283 CC lib/ftl/utils/ftl_bitmap.o 00:03:36.283 CC lib/ftl/utils/ftl_property.o 00:03:36.283 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:36.283 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:36.283 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:36.283 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:36.283 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:36.283 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:36.283 CC lib/ftl/upgrade/ftl_sb_v3.o 
00:03:36.283 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:36.283 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:36.283 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:36.283 CC lib/ftl/base/ftl_base_dev.o 00:03:36.283 CC lib/ftl/base/ftl_base_bdev.o 00:03:36.283 CC lib/ftl/ftl_trace.o 00:03:36.542 LIB libspdk_nbd.a 00:03:36.542 SO libspdk_nbd.so.6.0 00:03:36.542 SYMLINK libspdk_nbd.so 00:03:36.800 LIB libspdk_scsi.a 00:03:36.800 SO libspdk_scsi.so.8.0 00:03:36.800 SYMLINK libspdk_scsi.so 00:03:36.800 LIB libspdk_ublk.a 00:03:36.800 SO libspdk_ublk.so.2.0 00:03:36.800 SYMLINK libspdk_ublk.so 00:03:36.800 CC lib/vhost/vhost.o 00:03:36.800 CC lib/iscsi/conn.o 00:03:36.800 CC lib/vhost/vhost_rpc.o 00:03:36.800 CC lib/iscsi/init_grp.o 00:03:36.800 CC lib/vhost/vhost_scsi.o 00:03:37.058 CC lib/vhost/vhost_blk.o 00:03:37.058 CC lib/iscsi/iscsi.o 00:03:37.058 CC lib/iscsi/md5.o 00:03:37.058 CC lib/vhost/rte_vhost_user.o 00:03:37.058 CC lib/iscsi/param.o 00:03:37.058 CC lib/iscsi/portal_grp.o 00:03:37.058 CC lib/iscsi/tgt_node.o 00:03:37.058 CC lib/iscsi/iscsi_subsystem.o 00:03:37.058 CC lib/iscsi/iscsi_rpc.o 00:03:37.058 CC lib/iscsi/task.o 00:03:37.058 LIB libspdk_ftl.a 00:03:37.316 SO libspdk_ftl.so.8.0 00:03:37.574 SYMLINK libspdk_ftl.so 00:03:38.140 LIB libspdk_vhost.a 00:03:38.140 SO libspdk_vhost.so.7.1 00:03:38.140 SYMLINK libspdk_vhost.so 00:03:38.399 LIB libspdk_nvmf.a 00:03:38.399 LIB libspdk_iscsi.a 00:03:38.399 SO libspdk_nvmf.so.17.0 00:03:38.399 SO libspdk_iscsi.so.7.0 00:03:38.656 SYMLINK libspdk_nvmf.so 00:03:38.656 SYMLINK libspdk_iscsi.so 00:03:38.656 CC module/vfu_device/vfu_virtio.o 00:03:38.656 CC module/env_dpdk/env_dpdk_rpc.o 00:03:38.656 CC module/vfu_device/vfu_virtio_blk.o 00:03:38.656 CC module/vfu_device/vfu_virtio_scsi.o 00:03:38.656 CC module/vfu_device/vfu_virtio_rpc.o 00:03:38.914 CC module/blob/bdev/blob_bdev.o 00:03:38.914 CC module/accel/error/accel_error.o 00:03:38.914 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:38.914 CC module/accel/error/accel_error_rpc.o 00:03:38.914 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:38.914 CC module/sock/posix/posix.o 00:03:38.914 CC module/scheduler/gscheduler/gscheduler.o 00:03:38.914 CC module/accel/ioat/accel_ioat.o 00:03:38.914 CC module/accel/dsa/accel_dsa.o 00:03:38.914 CC module/accel/ioat/accel_ioat_rpc.o 00:03:38.914 CC module/accel/dsa/accel_dsa_rpc.o 00:03:38.914 CC module/accel/iaa/accel_iaa.o 00:03:38.914 CC module/accel/iaa/accel_iaa_rpc.o 00:03:38.914 LIB libspdk_env_dpdk_rpc.a 00:03:38.914 SO libspdk_env_dpdk_rpc.so.5.0 00:03:38.914 SYMLINK libspdk_env_dpdk_rpc.so 00:03:38.914 LIB libspdk_scheduler_gscheduler.a 00:03:38.914 LIB libspdk_scheduler_dpdk_governor.a 00:03:38.914 SO libspdk_scheduler_gscheduler.so.3.0 00:03:38.914 SO libspdk_scheduler_dpdk_governor.so.3.0 00:03:38.914 LIB libspdk_accel_error.a 00:03:38.914 LIB libspdk_accel_ioat.a 00:03:39.172 LIB libspdk_scheduler_dynamic.a 00:03:39.172 SO libspdk_accel_error.so.1.0 00:03:39.172 LIB libspdk_accel_iaa.a 00:03:39.172 SO libspdk_accel_ioat.so.5.0 00:03:39.172 SO libspdk_scheduler_dynamic.so.3.0 00:03:39.172 SYMLINK libspdk_scheduler_gscheduler.so 00:03:39.172 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:39.172 SO libspdk_accel_iaa.so.2.0 00:03:39.172 LIB libspdk_accel_dsa.a 00:03:39.172 SYMLINK libspdk_accel_error.so 00:03:39.172 LIB libspdk_blob_bdev.a 00:03:39.172 SYMLINK libspdk_scheduler_dynamic.so 00:03:39.172 SYMLINK libspdk_accel_ioat.so 00:03:39.172 SO libspdk_accel_dsa.so.4.0 00:03:39.172 SO libspdk_blob_bdev.so.10.1 00:03:39.172 SYMLINK 
libspdk_accel_iaa.so 00:03:39.172 SYMLINK libspdk_accel_dsa.so 00:03:39.172 SYMLINK libspdk_blob_bdev.so 00:03:39.436 CC module/bdev/error/vbdev_error.o 00:03:39.436 CC module/bdev/passthru/vbdev_passthru.o 00:03:39.436 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:39.436 CC module/bdev/lvol/vbdev_lvol.o 00:03:39.436 CC module/bdev/raid/bdev_raid.o 00:03:39.436 CC module/bdev/iscsi/bdev_iscsi.o 00:03:39.436 CC module/bdev/delay/vbdev_delay.o 00:03:39.436 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:39.436 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:39.436 CC module/bdev/null/bdev_null.o 00:03:39.436 CC module/bdev/error/vbdev_error_rpc.o 00:03:39.436 CC module/bdev/aio/bdev_aio.o 00:03:39.436 CC module/bdev/ftl/bdev_ftl.o 00:03:39.436 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:39.436 CC module/bdev/raid/bdev_raid_rpc.o 00:03:39.436 CC module/bdev/nvme/bdev_nvme.o 00:03:39.436 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:39.436 CC module/bdev/split/vbdev_split.o 00:03:39.436 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:39.436 CC module/blobfs/bdev/blobfs_bdev.o 00:03:39.436 CC module/bdev/raid/bdev_raid_sb.o 00:03:39.436 CC module/bdev/null/bdev_null_rpc.o 00:03:39.436 CC module/bdev/raid/raid0.o 00:03:39.436 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:39.436 CC module/bdev/split/vbdev_split_rpc.o 00:03:39.436 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:39.436 CC module/bdev/malloc/bdev_malloc.o 00:03:39.436 CC module/bdev/nvme/nvme_rpc.o 00:03:39.436 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:39.436 CC module/bdev/gpt/gpt.o 00:03:39.436 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:39.436 CC module/bdev/aio/bdev_aio_rpc.o 00:03:39.436 CC module/bdev/raid/raid1.o 00:03:39.436 CC module/bdev/nvme/bdev_mdns_client.o 00:03:39.436 CC module/bdev/raid/concat.o 00:03:39.436 CC module/bdev/gpt/vbdev_gpt.o 00:03:39.436 CC module/bdev/nvme/vbdev_opal.o 00:03:39.436 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:39.436 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:39.436 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:39.436 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:39.436 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:39.436 LIB libspdk_vfu_device.a 00:03:39.694 SO libspdk_vfu_device.so.2.0 00:03:39.694 LIB libspdk_sock_posix.a 00:03:39.694 SYMLINK libspdk_vfu_device.so 00:03:39.694 SO libspdk_sock_posix.so.5.0 00:03:39.694 LIB libspdk_bdev_passthru.a 00:03:39.694 LIB libspdk_blobfs_bdev.a 00:03:39.694 SO libspdk_bdev_passthru.so.5.0 00:03:39.694 SYMLINK libspdk_sock_posix.so 00:03:39.694 SO libspdk_blobfs_bdev.so.5.0 00:03:39.694 LIB libspdk_bdev_split.a 00:03:39.952 SYMLINK libspdk_bdev_passthru.so 00:03:39.952 SYMLINK libspdk_blobfs_bdev.so 00:03:39.952 SO libspdk_bdev_split.so.5.0 00:03:39.952 LIB libspdk_bdev_null.a 00:03:39.952 LIB libspdk_bdev_gpt.a 00:03:39.952 SO libspdk_bdev_null.so.5.0 00:03:39.952 LIB libspdk_bdev_error.a 00:03:39.952 SYMLINK libspdk_bdev_split.so 00:03:39.952 SO libspdk_bdev_gpt.so.5.0 00:03:39.952 LIB libspdk_bdev_ftl.a 00:03:39.952 SO libspdk_bdev_error.so.5.0 00:03:39.952 LIB libspdk_bdev_malloc.a 00:03:39.952 LIB libspdk_bdev_aio.a 00:03:39.952 LIB libspdk_bdev_zone_block.a 00:03:39.952 SO libspdk_bdev_ftl.so.5.0 00:03:39.952 LIB libspdk_bdev_iscsi.a 00:03:39.952 SYMLINK libspdk_bdev_null.so 00:03:39.952 SO libspdk_bdev_malloc.so.5.0 00:03:39.952 SO libspdk_bdev_aio.so.5.0 00:03:39.952 LIB libspdk_bdev_delay.a 00:03:39.952 SYMLINK libspdk_bdev_gpt.so 00:03:39.952 SO libspdk_bdev_iscsi.so.5.0 00:03:39.952 SO libspdk_bdev_zone_block.so.5.0 
00:03:39.952 SYMLINK libspdk_bdev_error.so 00:03:39.952 SO libspdk_bdev_delay.so.5.0 00:03:39.952 SYMLINK libspdk_bdev_ftl.so 00:03:39.952 SYMLINK libspdk_bdev_malloc.so 00:03:39.952 SYMLINK libspdk_bdev_aio.so 00:03:39.952 SYMLINK libspdk_bdev_zone_block.so 00:03:39.952 SYMLINK libspdk_bdev_iscsi.so 00:03:39.952 LIB libspdk_bdev_lvol.a 00:03:39.952 SYMLINK libspdk_bdev_delay.so 00:03:39.952 SO libspdk_bdev_lvol.so.5.0 00:03:40.210 LIB libspdk_bdev_virtio.a 00:03:40.210 SYMLINK libspdk_bdev_lvol.so 00:03:40.210 SO libspdk_bdev_virtio.so.5.0 00:03:40.210 SYMLINK libspdk_bdev_virtio.so 00:03:40.468 LIB libspdk_bdev_raid.a 00:03:40.468 SO libspdk_bdev_raid.so.5.0 00:03:40.468 SYMLINK libspdk_bdev_raid.so 00:03:41.840 LIB libspdk_bdev_nvme.a 00:03:41.840 SO libspdk_bdev_nvme.so.6.0 00:03:41.840 SYMLINK libspdk_bdev_nvme.so 00:03:42.097 CC module/event/subsystems/sock/sock.o 00:03:42.097 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:03:42.097 CC module/event/subsystems/vmd/vmd.o 00:03:42.097 CC module/event/subsystems/scheduler/scheduler.o 00:03:42.097 CC module/event/subsystems/iobuf/iobuf.o 00:03:42.097 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:42.097 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:42.097 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:42.355 LIB libspdk_event_sock.a 00:03:42.355 LIB libspdk_event_vhost_blk.a 00:03:42.355 LIB libspdk_event_scheduler.a 00:03:42.355 LIB libspdk_event_vfu_tgt.a 00:03:42.355 LIB libspdk_event_vmd.a 00:03:42.355 LIB libspdk_event_iobuf.a 00:03:42.355 SO libspdk_event_sock.so.4.0 00:03:42.355 SO libspdk_event_vhost_blk.so.2.0 00:03:42.355 SO libspdk_event_scheduler.so.3.0 00:03:42.355 SO libspdk_event_vfu_tgt.so.2.0 00:03:42.355 SO libspdk_event_vmd.so.5.0 00:03:42.355 SO libspdk_event_iobuf.so.2.0 00:03:42.355 SYMLINK libspdk_event_sock.so 00:03:42.355 SYMLINK libspdk_event_vhost_blk.so 00:03:42.355 SYMLINK libspdk_event_scheduler.so 00:03:42.355 SYMLINK libspdk_event_vfu_tgt.so 00:03:42.355 SYMLINK libspdk_event_vmd.so 00:03:42.355 SYMLINK libspdk_event_iobuf.so 00:03:42.355 CC module/event/subsystems/accel/accel.o 00:03:42.613 LIB libspdk_event_accel.a 00:03:42.613 SO libspdk_event_accel.so.5.0 00:03:42.613 SYMLINK libspdk_event_accel.so 00:03:42.870 CC module/event/subsystems/bdev/bdev.o 00:03:42.870 LIB libspdk_event_bdev.a 00:03:42.870 SO libspdk_event_bdev.so.5.0 00:03:43.128 SYMLINK libspdk_event_bdev.so 00:03:43.128 CC module/event/subsystems/scsi/scsi.o 00:03:43.128 CC module/event/subsystems/ublk/ublk.o 00:03:43.128 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:43.128 CC module/event/subsystems/nbd/nbd.o 00:03:43.128 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:43.385 LIB libspdk_event_nbd.a 00:03:43.385 LIB libspdk_event_ublk.a 00:03:43.385 LIB libspdk_event_scsi.a 00:03:43.385 SO libspdk_event_nbd.so.5.0 00:03:43.385 SO libspdk_event_ublk.so.2.0 00:03:43.385 SO libspdk_event_scsi.so.5.0 00:03:43.385 SYMLINK libspdk_event_nbd.so 00:03:43.385 SYMLINK libspdk_event_ublk.so 00:03:43.385 LIB libspdk_event_nvmf.a 00:03:43.385 SYMLINK libspdk_event_scsi.so 00:03:43.385 SO libspdk_event_nvmf.so.5.0 00:03:43.385 SYMLINK libspdk_event_nvmf.so 00:03:43.385 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:43.385 CC module/event/subsystems/iscsi/iscsi.o 00:03:43.643 LIB libspdk_event_vhost_scsi.a 00:03:43.643 LIB libspdk_event_iscsi.a 00:03:43.643 SO libspdk_event_vhost_scsi.so.2.0 00:03:43.643 SO libspdk_event_iscsi.so.5.0 00:03:43.643 SYMLINK libspdk_event_vhost_scsi.so 00:03:43.643 SYMLINK libspdk_event_iscsi.so 
00:03:43.905 SO libspdk.so.5.0 00:03:43.905 SYMLINK libspdk.so 00:03:43.905 CC app/spdk_nvme_discover/discovery_aer.o 00:03:43.905 CXX app/trace/trace.o 00:03:43.905 CC app/trace_record/trace_record.o 00:03:43.905 CC app/spdk_top/spdk_top.o 00:03:43.905 CC app/spdk_nvme_identify/identify.o 00:03:43.905 CC app/spdk_nvme_perf/perf.o 00:03:43.905 CC test/rpc_client/rpc_client_test.o 00:03:43.905 TEST_HEADER include/spdk/accel.h 00:03:43.905 TEST_HEADER include/spdk/accel_module.h 00:03:43.905 CC app/spdk_lspci/spdk_lspci.o 00:03:43.905 TEST_HEADER include/spdk/assert.h 00:03:43.905 TEST_HEADER include/spdk/barrier.h 00:03:43.905 TEST_HEADER include/spdk/base64.h 00:03:43.905 TEST_HEADER include/spdk/bdev.h 00:03:43.905 TEST_HEADER include/spdk/bdev_module.h 00:03:43.905 TEST_HEADER include/spdk/bdev_zone.h 00:03:43.905 TEST_HEADER include/spdk/bit_array.h 00:03:43.905 TEST_HEADER include/spdk/bit_pool.h 00:03:43.905 TEST_HEADER include/spdk/blob_bdev.h 00:03:43.905 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:43.905 TEST_HEADER include/spdk/blobfs.h 00:03:44.170 TEST_HEADER include/spdk/blob.h 00:03:44.170 TEST_HEADER include/spdk/conf.h 00:03:44.170 TEST_HEADER include/spdk/config.h 00:03:44.170 CC app/spdk_dd/spdk_dd.o 00:03:44.170 TEST_HEADER include/spdk/cpuset.h 00:03:44.170 TEST_HEADER include/spdk/crc16.h 00:03:44.170 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:44.170 TEST_HEADER include/spdk/crc32.h 00:03:44.170 TEST_HEADER include/spdk/crc64.h 00:03:44.170 CC app/iscsi_tgt/iscsi_tgt.o 00:03:44.170 TEST_HEADER include/spdk/dif.h 00:03:44.170 TEST_HEADER include/spdk/dma.h 00:03:44.170 CC app/nvmf_tgt/nvmf_main.o 00:03:44.170 CC app/vhost/vhost.o 00:03:44.170 CC test/app/jsoncat/jsoncat.o 00:03:44.170 TEST_HEADER include/spdk/endian.h 00:03:44.170 CC examples/nvme/reconnect/reconnect.o 00:03:44.170 CC examples/ioat/verify/verify.o 00:03:44.170 CC test/thread/poller_perf/poller_perf.o 00:03:44.170 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:44.170 TEST_HEADER include/spdk/env_dpdk.h 00:03:44.171 CC test/app/histogram_perf/histogram_perf.o 00:03:44.171 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:44.171 CC examples/nvme/hotplug/hotplug.o 00:03:44.171 CC test/app/stub/stub.o 00:03:44.171 TEST_HEADER include/spdk/env.h 00:03:44.171 CC examples/nvme/hello_world/hello_world.o 00:03:44.171 CC examples/ioat/perf/perf.o 00:03:44.171 CC examples/vmd/lsvmd/lsvmd.o 00:03:44.171 TEST_HEADER include/spdk/event.h 00:03:44.171 TEST_HEADER include/spdk/fd_group.h 00:03:44.171 CC test/nvme/aer/aer.o 00:03:44.171 CC examples/nvme/abort/abort.o 00:03:44.171 CC examples/nvme/arbitration/arbitration.o 00:03:44.171 TEST_HEADER include/spdk/fd.h 00:03:44.171 CC examples/accel/perf/accel_perf.o 00:03:44.171 CC examples/idxd/perf/perf.o 00:03:44.171 TEST_HEADER include/spdk/file.h 00:03:44.171 CC examples/util/zipf/zipf.o 00:03:44.171 CC examples/sock/hello_world/hello_sock.o 00:03:44.171 CC test/event/event_perf/event_perf.o 00:03:44.171 TEST_HEADER include/spdk/ftl.h 00:03:44.171 CC app/fio/nvme/fio_plugin.o 00:03:44.171 TEST_HEADER include/spdk/gpt_spec.h 00:03:44.171 TEST_HEADER include/spdk/hexlify.h 00:03:44.171 TEST_HEADER include/spdk/histogram_data.h 00:03:44.171 TEST_HEADER include/spdk/idxd.h 00:03:44.171 TEST_HEADER include/spdk/idxd_spec.h 00:03:44.171 CC app/spdk_tgt/spdk_tgt.o 00:03:44.171 TEST_HEADER include/spdk/init.h 00:03:44.171 TEST_HEADER include/spdk/ioat.h 00:03:44.171 TEST_HEADER include/spdk/ioat_spec.h 00:03:44.171 TEST_HEADER include/spdk/iscsi_spec.h 00:03:44.171 TEST_HEADER 
include/spdk/json.h 00:03:44.171 TEST_HEADER include/spdk/jsonrpc.h 00:03:44.171 CC examples/bdev/hello_world/hello_bdev.o 00:03:44.171 TEST_HEADER include/spdk/likely.h 00:03:44.171 CC test/app/bdev_svc/bdev_svc.o 00:03:44.171 TEST_HEADER include/spdk/log.h 00:03:44.171 CC test/accel/dif/dif.o 00:03:44.171 CC examples/blob/hello_world/hello_blob.o 00:03:44.171 TEST_HEADER include/spdk/lvol.h 00:03:44.171 CC test/blobfs/mkfs/mkfs.o 00:03:44.171 TEST_HEADER include/spdk/memory.h 00:03:44.171 CC test/bdev/bdevio/bdevio.o 00:03:44.171 TEST_HEADER include/spdk/mmio.h 00:03:44.171 TEST_HEADER include/spdk/nbd.h 00:03:44.171 CC test/dma/test_dma/test_dma.o 00:03:44.171 TEST_HEADER include/spdk/notify.h 00:03:44.171 CC test/env/mem_callbacks/mem_callbacks.o 00:03:44.171 CC examples/nvmf/nvmf/nvmf.o 00:03:44.171 TEST_HEADER include/spdk/nvme.h 00:03:44.171 CC examples/thread/thread/thread_ex.o 00:03:44.171 TEST_HEADER include/spdk/nvme_intel.h 00:03:44.171 CC test/lvol/esnap/esnap.o 00:03:44.171 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:44.171 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:44.171 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:44.171 TEST_HEADER include/spdk/nvme_spec.h 00:03:44.171 TEST_HEADER include/spdk/nvme_zns.h 00:03:44.171 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:44.171 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:44.171 TEST_HEADER include/spdk/nvmf.h 00:03:44.171 TEST_HEADER include/spdk/nvmf_spec.h 00:03:44.171 TEST_HEADER include/spdk/nvmf_transport.h 00:03:44.171 TEST_HEADER include/spdk/opal.h 00:03:44.171 TEST_HEADER include/spdk/opal_spec.h 00:03:44.171 TEST_HEADER include/spdk/pci_ids.h 00:03:44.171 TEST_HEADER include/spdk/pipe.h 00:03:44.171 TEST_HEADER include/spdk/queue.h 00:03:44.171 TEST_HEADER include/spdk/reduce.h 00:03:44.171 TEST_HEADER include/spdk/rpc.h 00:03:44.171 TEST_HEADER include/spdk/scheduler.h 00:03:44.171 TEST_HEADER include/spdk/scsi.h 00:03:44.171 TEST_HEADER include/spdk/scsi_spec.h 00:03:44.171 TEST_HEADER include/spdk/sock.h 00:03:44.171 TEST_HEADER include/spdk/stdinc.h 00:03:44.171 TEST_HEADER include/spdk/string.h 00:03:44.171 TEST_HEADER include/spdk/thread.h 00:03:44.171 TEST_HEADER include/spdk/trace.h 00:03:44.171 TEST_HEADER include/spdk/trace_parser.h 00:03:44.171 TEST_HEADER include/spdk/tree.h 00:03:44.171 TEST_HEADER include/spdk/ublk.h 00:03:44.171 TEST_HEADER include/spdk/util.h 00:03:44.171 TEST_HEADER include/spdk/uuid.h 00:03:44.171 TEST_HEADER include/spdk/version.h 00:03:44.171 LINK spdk_lspci 00:03:44.429 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:44.429 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:44.429 TEST_HEADER include/spdk/vhost.h 00:03:44.429 TEST_HEADER include/spdk/vmd.h 00:03:44.429 TEST_HEADER include/spdk/xor.h 00:03:44.429 TEST_HEADER include/spdk/zipf.h 00:03:44.429 CXX test/cpp_headers/accel.o 00:03:44.429 LINK rpc_client_test 00:03:44.429 LINK lsvmd 00:03:44.429 LINK jsoncat 00:03:44.429 LINK spdk_nvme_discover 00:03:44.429 LINK poller_perf 00:03:44.429 LINK histogram_perf 00:03:44.429 LINK event_perf 00:03:44.429 LINK zipf 00:03:44.429 LINK interrupt_tgt 00:03:44.429 LINK nvmf_tgt 00:03:44.429 LINK stub 00:03:44.429 LINK cmb_copy 00:03:44.429 LINK vhost 00:03:44.429 LINK spdk_trace_record 00:03:44.429 LINK iscsi_tgt 00:03:44.429 LINK verify 00:03:44.429 LINK ioat_perf 00:03:44.429 LINK hello_world 00:03:44.429 LINK bdev_svc 00:03:44.429 LINK hotplug 00:03:44.429 LINK mkfs 00:03:44.429 LINK spdk_tgt 00:03:44.692 LINK hello_sock 00:03:44.692 LINK hello_bdev 00:03:44.692 LINK hello_blob 
00:03:44.692 LINK aer 00:03:44.692 LINK thread 00:03:44.692 CXX test/cpp_headers/accel_module.o 00:03:44.692 LINK reconnect 00:03:44.692 LINK arbitration 00:03:44.692 LINK idxd_perf 00:03:44.692 CC examples/vmd/led/led.o 00:03:44.692 LINK nvmf 00:03:44.692 LINK spdk_dd 00:03:44.692 CC test/env/vtophys/vtophys.o 00:03:44.692 CXX test/cpp_headers/assert.o 00:03:44.692 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:44.692 CXX test/cpp_headers/barrier.o 00:03:44.692 CC test/event/reactor/reactor.o 00:03:44.692 LINK spdk_trace 00:03:44.692 CC test/nvme/reset/reset.o 00:03:44.692 LINK abort 00:03:44.692 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:44.959 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:44.959 CC examples/bdev/bdevperf/bdevperf.o 00:03:44.959 LINK bdevio 00:03:44.959 CC test/event/reactor_perf/reactor_perf.o 00:03:44.959 LINK test_dma 00:03:44.959 CXX test/cpp_headers/base64.o 00:03:44.959 LINK dif 00:03:44.959 CC app/fio/bdev/fio_plugin.o 00:03:44.959 CC test/nvme/sgl/sgl.o 00:03:44.959 CC test/nvme/e2edp/nvme_dp.o 00:03:44.959 CXX test/cpp_headers/bdev.o 00:03:44.959 CC test/env/memory/memory_ut.o 00:03:44.959 LINK accel_perf 00:03:44.959 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:44.959 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:44.959 CC test/nvme/overhead/overhead.o 00:03:44.959 CC test/event/app_repeat/app_repeat.o 00:03:44.959 LINK nvme_fuzz 00:03:44.959 CXX test/cpp_headers/bdev_module.o 00:03:44.959 LINK nvme_manage 00:03:44.959 CXX test/cpp_headers/bdev_zone.o 00:03:44.959 CC examples/blob/cli/blobcli.o 00:03:44.959 LINK led 00:03:44.959 CC test/env/pci/pci_ut.o 00:03:44.959 LINK vtophys 00:03:44.959 LINK reactor 00:03:45.224 CC test/nvme/err_injection/err_injection.o 00:03:45.224 LINK env_dpdk_post_init 00:03:45.224 LINK spdk_nvme 00:03:45.224 CXX test/cpp_headers/bit_array.o 00:03:45.224 CC test/nvme/reserve/reserve.o 00:03:45.224 CC test/nvme/startup/startup.o 00:03:45.224 CC test/event/scheduler/scheduler.o 00:03:45.224 LINK pmr_persistence 00:03:45.224 CC test/nvme/simple_copy/simple_copy.o 00:03:45.224 CC test/nvme/connect_stress/connect_stress.o 00:03:45.224 CC test/nvme/boot_partition/boot_partition.o 00:03:45.224 LINK reactor_perf 00:03:45.224 CC test/nvme/compliance/nvme_compliance.o 00:03:45.224 CC test/nvme/fused_ordering/fused_ordering.o 00:03:45.224 CXX test/cpp_headers/bit_pool.o 00:03:45.224 CC test/nvme/cuse/cuse.o 00:03:45.224 CXX test/cpp_headers/blob_bdev.o 00:03:45.224 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:45.224 CC test/nvme/fdp/fdp.o 00:03:45.224 CXX test/cpp_headers/blobfs_bdev.o 00:03:45.224 CXX test/cpp_headers/blobfs.o 00:03:45.224 CXX test/cpp_headers/blob.o 00:03:45.224 CXX test/cpp_headers/conf.o 00:03:45.224 LINK reset 00:03:45.224 CXX test/cpp_headers/config.o 00:03:45.224 LINK app_repeat 00:03:45.225 CXX test/cpp_headers/cpuset.o 00:03:45.486 CXX test/cpp_headers/crc16.o 00:03:45.486 CXX test/cpp_headers/crc32.o 00:03:45.486 CXX test/cpp_headers/crc64.o 00:03:45.486 CXX test/cpp_headers/dif.o 00:03:45.486 CXX test/cpp_headers/dma.o 00:03:45.486 LINK mem_callbacks 00:03:45.486 CXX test/cpp_headers/endian.o 00:03:45.486 CXX test/cpp_headers/env_dpdk.o 00:03:45.486 CXX test/cpp_headers/env.o 00:03:45.486 CXX test/cpp_headers/event.o 00:03:45.486 LINK sgl 00:03:45.486 LINK nvme_dp 00:03:45.486 CXX test/cpp_headers/fd_group.o 00:03:45.486 LINK err_injection 00:03:45.486 CXX test/cpp_headers/fd.o 00:03:45.486 CXX test/cpp_headers/file.o 00:03:45.487 LINK spdk_nvme_perf 00:03:45.487 LINK startup 
00:03:45.487 LINK connect_stress 00:03:45.487 LINK overhead 00:03:45.487 LINK reserve 00:03:45.487 LINK boot_partition 00:03:45.487 LINK spdk_nvme_identify 00:03:45.487 LINK scheduler 00:03:45.750 CXX test/cpp_headers/ftl.o 00:03:45.750 CXX test/cpp_headers/gpt_spec.o 00:03:45.750 CXX test/cpp_headers/hexlify.o 00:03:45.750 LINK spdk_top 00:03:45.750 LINK simple_copy 00:03:45.750 CXX test/cpp_headers/histogram_data.o 00:03:45.750 CXX test/cpp_headers/idxd.o 00:03:45.750 CXX test/cpp_headers/idxd_spec.o 00:03:45.750 LINK fused_ordering 00:03:45.750 CXX test/cpp_headers/init.o 00:03:45.751 LINK doorbell_aers 00:03:45.751 CXX test/cpp_headers/ioat.o 00:03:45.751 CXX test/cpp_headers/ioat_spec.o 00:03:45.751 CXX test/cpp_headers/iscsi_spec.o 00:03:45.751 CXX test/cpp_headers/json.o 00:03:45.751 CXX test/cpp_headers/jsonrpc.o 00:03:45.751 CXX test/cpp_headers/likely.o 00:03:45.751 CXX test/cpp_headers/log.o 00:03:45.751 CXX test/cpp_headers/lvol.o 00:03:45.751 CXX test/cpp_headers/memory.o 00:03:45.751 LINK vhost_fuzz 00:03:45.751 LINK pci_ut 00:03:45.751 CXX test/cpp_headers/mmio.o 00:03:45.751 CXX test/cpp_headers/nbd.o 00:03:45.751 CXX test/cpp_headers/notify.o 00:03:45.751 CXX test/cpp_headers/nvme.o 00:03:45.751 CXX test/cpp_headers/nvme_intel.o 00:03:45.751 LINK spdk_bdev 00:03:45.751 CXX test/cpp_headers/nvme_ocssd.o 00:03:45.751 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:45.751 CXX test/cpp_headers/nvme_spec.o 00:03:45.751 CXX test/cpp_headers/nvme_zns.o 00:03:45.751 CXX test/cpp_headers/nvmf_cmd.o 00:03:45.751 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:45.751 CXX test/cpp_headers/nvmf.o 00:03:45.751 CXX test/cpp_headers/nvmf_spec.o 00:03:46.019 LINK nvme_compliance 00:03:46.019 CXX test/cpp_headers/nvmf_transport.o 00:03:46.019 LINK fdp 00:03:46.019 CXX test/cpp_headers/opal.o 00:03:46.019 CXX test/cpp_headers/opal_spec.o 00:03:46.019 CXX test/cpp_headers/pci_ids.o 00:03:46.019 CXX test/cpp_headers/pipe.o 00:03:46.019 CXX test/cpp_headers/queue.o 00:03:46.019 CXX test/cpp_headers/reduce.o 00:03:46.019 CXX test/cpp_headers/rpc.o 00:03:46.019 CXX test/cpp_headers/scheduler.o 00:03:46.019 LINK blobcli 00:03:46.019 CXX test/cpp_headers/scsi.o 00:03:46.019 CXX test/cpp_headers/scsi_spec.o 00:03:46.019 CXX test/cpp_headers/sock.o 00:03:46.019 CXX test/cpp_headers/stdinc.o 00:03:46.019 CXX test/cpp_headers/string.o 00:03:46.019 CXX test/cpp_headers/thread.o 00:03:46.019 CXX test/cpp_headers/trace.o 00:03:46.019 CXX test/cpp_headers/trace_parser.o 00:03:46.019 CXX test/cpp_headers/tree.o 00:03:46.019 CXX test/cpp_headers/ublk.o 00:03:46.019 CXX test/cpp_headers/util.o 00:03:46.019 CXX test/cpp_headers/uuid.o 00:03:46.019 CXX test/cpp_headers/version.o 00:03:46.019 CXX test/cpp_headers/vfio_user_pci.o 00:03:46.019 CXX test/cpp_headers/vfio_user_spec.o 00:03:46.019 CXX test/cpp_headers/vhost.o 00:03:46.019 CXX test/cpp_headers/vmd.o 00:03:46.019 CXX test/cpp_headers/xor.o 00:03:46.279 CXX test/cpp_headers/zipf.o 00:03:46.279 LINK bdevperf 00:03:46.537 LINK memory_ut 00:03:46.824 LINK cuse 00:03:47.082 LINK iscsi_fuzz 00:03:49.621 LINK esnap 00:03:49.881 00:03:49.881 real 0m38.136s 00:03:49.881 user 7m16.244s 00:03:49.881 sys 1m38.051s 00:03:49.881 03:36:08 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:03:49.881 03:36:08 -- common/autotest_common.sh@10 -- $ set +x 00:03:49.881 ************************************ 00:03:49.881 END TEST make 00:03:49.881 ************************************ 00:03:50.140 03:36:08 -- spdk/autotest.sh@25 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:03:50.140 03:36:08 -- nvmf/common.sh@7 -- # uname -s 00:03:50.140 03:36:08 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:50.140 03:36:08 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:50.140 03:36:08 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:50.140 03:36:08 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:50.140 03:36:08 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:50.140 03:36:08 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:50.140 03:36:08 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:50.140 03:36:08 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:50.140 03:36:08 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:50.140 03:36:08 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:50.140 03:36:08 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:03:50.140 03:36:08 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:03:50.140 03:36:08 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:50.140 03:36:08 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:50.140 03:36:08 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:03:50.140 03:36:08 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:03:50.141 03:36:08 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:50.141 03:36:08 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:50.141 03:36:08 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:50.141 03:36:08 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:50.141 03:36:08 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:50.141 03:36:08 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:50.141 03:36:08 -- paths/export.sh@5 -- # export PATH 00:03:50.141 03:36:08 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:50.141 03:36:08 -- nvmf/common.sh@46 -- # : 0 00:03:50.141 03:36:08 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:03:50.141 03:36:08 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:03:50.141 03:36:08 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:03:50.141 03:36:08 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:50.141 03:36:08 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:50.141 03:36:08 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:03:50.141 03:36:08 -- nvmf/common.sh@34 
-- # '[' 0 -eq 1 ']' 00:03:50.141 03:36:08 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:03:50.141 03:36:08 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:50.141 03:36:08 -- spdk/autotest.sh@32 -- # uname -s 00:03:50.141 03:36:08 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:50.141 03:36:08 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:50.141 03:36:08 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:03:50.141 03:36:08 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:03:50.141 03:36:08 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:03:50.141 03:36:08 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:50.141 03:36:08 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:50.141 03:36:08 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:50.141 03:36:08 -- spdk/autotest.sh@48 -- # udevadm_pid=2227879 00:03:50.141 03:36:08 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:50.141 03:36:08 -- spdk/autotest.sh@51 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:03:50.141 03:36:08 -- spdk/autotest.sh@54 -- # echo 2227883 00:03:50.141 03:36:08 -- spdk/autotest.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:03:50.141 03:36:08 -- spdk/autotest.sh@56 -- # echo 2227884 00:03:50.141 03:36:08 -- spdk/autotest.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:03:50.141 03:36:08 -- spdk/autotest.sh@58 -- # [[ ............................... 
!= QEMU ]] 00:03:50.141 03:36:08 -- spdk/autotest.sh@60 -- # echo 2227886 00:03:50.141 03:36:08 -- spdk/autotest.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l 00:03:50.141 03:36:08 -- spdk/autotest.sh@62 -- # echo 2227888 00:03:50.141 03:36:08 -- spdk/autotest.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l 00:03:50.141 03:36:08 -- spdk/autotest.sh@66 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:50.141 03:36:08 -- spdk/autotest.sh@68 -- # timing_enter autotest 00:03:50.141 03:36:08 -- common/autotest_common.sh@712 -- # xtrace_disable 00:03:50.141 03:36:08 -- common/autotest_common.sh@10 -- # set +x 00:03:50.141 03:36:08 -- spdk/autotest.sh@70 -- # create_test_list 00:03:50.141 03:36:08 -- common/autotest_common.sh@736 -- # xtrace_disable 00:03:50.141 03:36:08 -- common/autotest_common.sh@10 -- # set +x 00:03:50.141 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.bmc.pm.log 00:03:50.141 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pm.log 00:03:50.141 03:36:08 -- spdk/autotest.sh@72 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:03:50.141 03:36:08 -- spdk/autotest.sh@72 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:50.141 03:36:08 -- spdk/autotest.sh@72 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:50.141 03:36:08 -- spdk/autotest.sh@73 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:03:50.141 03:36:08 -- spdk/autotest.sh@74 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:50.141 03:36:08 -- spdk/autotest.sh@76 -- # freebsd_update_contigmem_mod 00:03:50.141 03:36:08 -- common/autotest_common.sh@1440 -- # uname 00:03:50.141 03:36:08 -- common/autotest_common.sh@1440 -- # '[' Linux = FreeBSD ']' 00:03:50.141 03:36:08 -- spdk/autotest.sh@77 -- # freebsd_set_maxsock_buf 00:03:50.141 03:36:08 -- common/autotest_common.sh@1460 -- # uname 00:03:50.141 03:36:08 -- common/autotest_common.sh@1460 -- # [[ Linux = FreeBSD ]] 00:03:50.141 03:36:08 -- spdk/autotest.sh@82 -- # grep CC_TYPE mk/cc.mk 00:03:50.141 03:36:08 -- spdk/autotest.sh@82 -- # CC_TYPE=CC_TYPE=gcc 00:03:50.141 03:36:08 -- spdk/autotest.sh@83 -- # hash lcov 00:03:50.141 03:36:08 -- spdk/autotest.sh@83 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:03:50.141 03:36:08 -- spdk/autotest.sh@91 -- # export 'LCOV_OPTS= 00:03:50.141 --rc lcov_branch_coverage=1 00:03:50.141 --rc lcov_function_coverage=1 00:03:50.141 --rc genhtml_branch_coverage=1 00:03:50.141 --rc genhtml_function_coverage=1 00:03:50.141 --rc genhtml_legend=1 00:03:50.141 --rc geninfo_all_blocks=1 00:03:50.141 ' 00:03:50.141 03:36:08 -- spdk/autotest.sh@91 -- # LCOV_OPTS=' 00:03:50.141 --rc lcov_branch_coverage=1 00:03:50.141 --rc lcov_function_coverage=1 00:03:50.141 --rc genhtml_branch_coverage=1 00:03:50.141 --rc genhtml_function_coverage=1 00:03:50.141 --rc genhtml_legend=1 00:03:50.141 --rc geninfo_all_blocks=1 00:03:50.141 ' 00:03:50.141 03:36:08 -- spdk/autotest.sh@92 -- # export 'LCOV=lcov 00:03:50.141 --rc lcov_branch_coverage=1 00:03:50.141 --rc lcov_function_coverage=1 00:03:50.141 --rc genhtml_branch_coverage=1 00:03:50.141 --rc genhtml_function_coverage=1 00:03:50.141 --rc genhtml_legend=1 00:03:50.141 
--rc geninfo_all_blocks=1 00:03:50.141 --no-external' 00:03:50.141 03:36:08 -- spdk/autotest.sh@92 -- # LCOV='lcov 00:03:50.141 --rc lcov_branch_coverage=1 00:03:50.141 --rc lcov_function_coverage=1 00:03:50.141 --rc genhtml_branch_coverage=1 00:03:50.141 --rc genhtml_function_coverage=1 00:03:50.141 --rc genhtml_legend=1 00:03:50.141 --rc geninfo_all_blocks=1 00:03:50.141 --no-external' 00:03:50.141 03:36:08 -- spdk/autotest.sh@94 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:03:50.141 lcov: LCOV version 1.14 00:03:50.141 03:36:09 -- spdk/autotest.sh@96 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:03:52.044 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:03:52.044 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:03:52.044 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:03:52.044 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:03:52.044 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:03:52.044 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:03:52.044 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:03:52.044 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:03:52.044 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:03:52.044 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:03:52.044 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:03:52.044 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:03:52.044 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:03:52.044 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:03:52.044 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:03:52.044 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:03:52.044 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:03:52.044 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:03:52.044 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:03:52.044 geninfo: WARNING: GCOV did not produce any 
data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:03:52.044 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:03:52.044 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:03:52.044 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:03:52.044 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:03:52.044 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:03:52.044 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:03:52.044 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:03:52.044 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:03:52.044 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:03:52.044 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:03:52.044 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:03:52.044 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno 00:03:52.044 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:03:52.044 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:03:52.044 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:03:52.044 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:03:52.044 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:03:52.044 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:03:52.044 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:03:52.044 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:03:52.044 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:03:52.044 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:03:52.044 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:03:52.044 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:03:52.044 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:03:52.044 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:03:52.044 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:03:52.044 geninfo: WARNING: GCOV did not 
produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:03:52.044 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:03:52.044 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno 00:03:52.044 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:03:52.044 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno 00:03:52.044 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:03:52.044 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:03:52.044 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:03:52.044 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:03:52.044 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:03:52.044 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno 00:03:52.044 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 00:03:52.044 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:03:52.044 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:03:52.044 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:03:52.044 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:03:52.044 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 00:03:52.044 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:03:52.044 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno 00:03:52.044 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:03:52.044 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno 00:03:52.044 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found 00:03:52.044 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno 00:03:52.044 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found 00:03:52.045 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno 00:03:52.045 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found 00:03:52.045 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno 00:03:52.045 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:03:52.045 geninfo: 
WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno 00:03:52.045 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:03:52.045 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 00:03:52.045 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found 00:03:52.045 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno 00:03:52.045 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:03:52.045 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno 00:03:52.045 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found 00:03:52.045 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno 00:03:52.045 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found 00:03:52.045 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno 00:03:52.045 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found 00:03:52.045 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno 00:03:52.045 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found 00:03:52.045 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno 00:03:52.045 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found 00:03:52.045 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno 00:03:52.045 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found 00:03:52.045 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno 00:03:52.045 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found 00:03:52.045 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno 00:03:52.045 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found 00:03:52.045 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno 00:03:52.045 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:03:52.045 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno 00:03:52.045 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:03:52.045 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno 00:03:52.045 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions 
found 00:03:52.045 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:03:52.045 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:03:52.045 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno 00:03:52.045 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:03:52.045 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno 00:03:52.045 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:03:52.045 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:03:52.045 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:03:52.045 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno 00:03:52.045 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:03:52.045 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno 00:03:52.045 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:03:52.045 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno 00:03:52.045 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:03:52.045 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno 00:03:52.045 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found 00:03:52.045 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno 00:03:52.045 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:03:52.045 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno 00:03:52.045 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:03:52.045 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno 00:03:52.045 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found 00:03:52.045 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno 00:03:52.045 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found 00:03:52.045 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno 00:03:52.045 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found 00:03:52.045 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno 00:03:52.045 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found 00:03:52.045 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno 00:03:52.045 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:03:52.045 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno 00:03:52.045 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found 00:03:52.045 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno 00:03:52.045 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:03:52.045 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno 00:03:52.045 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found 00:03:52.045 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno 00:03:52.045 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:03:52.045 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno 00:03:52.045 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found 00:03:52.045 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno 00:03:52.045 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found 00:03:52.045 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno 00:03:52.045 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found 00:03:52.045 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno 00:03:52.045 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:03:52.045 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno 00:03:52.045 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found 00:03:52.045 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno 00:03:52.045 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found 00:03:52.045 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno 00:03:52.045 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found 00:03:52.045 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno 00:03:52.045 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found 00:03:52.045 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno 
00:03:52.045 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found 00:03:52.045 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno 00:03:52.045 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:03:52.045 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:03:52.045 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:03:52.045 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:03:52.045 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:03:52.045 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:03:52.045 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:03:52.045 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:03:52.045 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:03:52.045 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:03:52.045 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:03:52.045 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:04:06.915 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:04:06.915 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:04:06.915 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:04:06.915 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:04:06.915 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:04:06.915 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:04:21.783 03:36:40 -- spdk/autotest.sh@100 -- # timing_enter pre_cleanup 00:04:21.783 03:36:40 -- common/autotest_common.sh@712 -- # xtrace_disable 00:04:21.783 03:36:40 -- common/autotest_common.sh@10 -- # set +x 00:04:21.783 03:36:40 -- spdk/autotest.sh@102 -- # rm -f 00:04:21.783 03:36:40 -- spdk/autotest.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:23.159 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:04:23.159 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:04:23.159 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:04:23.159 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:04:23.159 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:04:23.159 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:04:23.159 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:04:23.159 0000:00:04.1 (8086 
0e21): Already using the ioatdma driver 00:04:23.159 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:04:23.159 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:04:23.159 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:04:23.159 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:04:23.159 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:04:23.159 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:04:23.159 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:04:23.159 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:04:23.159 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:04:23.159 03:36:42 -- spdk/autotest.sh@107 -- # get_zoned_devs 00:04:23.159 03:36:42 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:04:23.159 03:36:42 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:04:23.159 03:36:42 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:04:23.159 03:36:42 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:04:23.159 03:36:42 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:04:23.159 03:36:42 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:04:23.159 03:36:42 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:23.159 03:36:42 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:04:23.159 03:36:42 -- spdk/autotest.sh@109 -- # (( 0 > 0 )) 00:04:23.159 03:36:42 -- spdk/autotest.sh@121 -- # ls /dev/nvme0n1 00:04:23.159 03:36:42 -- spdk/autotest.sh@121 -- # grep -v p 00:04:23.159 03:36:42 -- spdk/autotest.sh@121 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:04:23.159 03:36:42 -- spdk/autotest.sh@123 -- # [[ -z '' ]] 00:04:23.159 03:36:42 -- spdk/autotest.sh@124 -- # block_in_use /dev/nvme0n1 00:04:23.159 03:36:42 -- scripts/common.sh@380 -- # local block=/dev/nvme0n1 pt 00:04:23.160 03:36:42 -- scripts/common.sh@389 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:23.418 No valid GPT data, bailing 00:04:23.418 03:36:42 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:23.418 03:36:42 -- scripts/common.sh@393 -- # pt= 00:04:23.418 03:36:42 -- scripts/common.sh@394 -- # return 1 00:04:23.418 03:36:42 -- spdk/autotest.sh@125 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:23.418 1+0 records in 00:04:23.418 1+0 records out 00:04:23.418 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00174336 s, 601 MB/s 00:04:23.418 03:36:42 -- spdk/autotest.sh@129 -- # sync 00:04:23.418 03:36:42 -- spdk/autotest.sh@131 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:23.418 03:36:42 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:23.418 03:36:42 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:25.350 03:36:43 -- spdk/autotest.sh@135 -- # uname -s 00:04:25.350 03:36:43 -- spdk/autotest.sh@135 -- # '[' Linux = Linux ']' 00:04:25.350 03:36:43 -- spdk/autotest.sh@136 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:04:25.350 03:36:43 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:25.350 03:36:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:25.350 03:36:43 -- common/autotest_common.sh@10 -- # set +x 00:04:25.350 ************************************ 00:04:25.350 START TEST setup.sh 00:04:25.350 ************************************ 00:04:25.350 03:36:43 -- 
common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:04:25.350 * Looking for test storage... 00:04:25.350 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:25.350 03:36:44 -- setup/test-setup.sh@10 -- # uname -s 00:04:25.350 03:36:44 -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:04:25.350 03:36:44 -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:04:25.350 03:36:44 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:25.350 03:36:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:25.350 03:36:44 -- common/autotest_common.sh@10 -- # set +x 00:04:25.350 ************************************ 00:04:25.350 START TEST acl 00:04:25.350 ************************************ 00:04:25.350 03:36:44 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:04:25.350 * Looking for test storage... 00:04:25.350 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:25.350 03:36:44 -- setup/acl.sh@10 -- # get_zoned_devs 00:04:25.350 03:36:44 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:04:25.350 03:36:44 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:04:25.350 03:36:44 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:04:25.351 03:36:44 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:04:25.351 03:36:44 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:04:25.351 03:36:44 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:04:25.351 03:36:44 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:25.351 03:36:44 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:04:25.351 03:36:44 -- setup/acl.sh@12 -- # devs=() 00:04:25.351 03:36:44 -- setup/acl.sh@12 -- # declare -a devs 00:04:25.351 03:36:44 -- setup/acl.sh@13 -- # drivers=() 00:04:25.351 03:36:44 -- setup/acl.sh@13 -- # declare -A drivers 00:04:25.351 03:36:44 -- setup/acl.sh@51 -- # setup reset 00:04:25.351 03:36:44 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:25.351 03:36:44 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:26.729 03:36:45 -- setup/acl.sh@52 -- # collect_setup_devs 00:04:26.729 03:36:45 -- setup/acl.sh@16 -- # local dev driver 00:04:26.729 03:36:45 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:26.730 03:36:45 -- setup/acl.sh@15 -- # setup output status 00:04:26.730 03:36:45 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:26.730 03:36:45 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:04:27.668 Hugepages 00:04:27.668 node hugesize free / total 00:04:27.668 03:36:46 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:04:27.668 03:36:46 -- setup/acl.sh@19 -- # continue 00:04:27.668 03:36:46 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:27.668 03:36:46 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:04:27.668 03:36:46 -- setup/acl.sh@19 -- # continue 00:04:27.668 03:36:46 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:27.668 03:36:46 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:04:27.668 03:36:46 -- setup/acl.sh@19 -- # continue 00:04:27.668 03:36:46 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:27.668 00:04:27.668 Type BDF Vendor Device NUMA Driver 
Device Block devices 00:04:27.668 03:36:46 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:04:27.668 03:36:46 -- setup/acl.sh@19 -- # continue 00:04:27.668 03:36:46 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:27.668 03:36:46 -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:04:27.668 03:36:46 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:27.668 03:36:46 -- setup/acl.sh@20 -- # continue 00:04:27.668 03:36:46 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:27.668 03:36:46 -- setup/acl.sh@19 -- # [[ 0000:00:04.1 == *:*:*.* ]] 00:04:27.668 03:36:46 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:27.668 03:36:46 -- setup/acl.sh@20 -- # continue 00:04:27.668 03:36:46 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:27.668 03:36:46 -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:04:27.668 03:36:46 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:27.668 03:36:46 -- setup/acl.sh@20 -- # continue 00:04:27.668 03:36:46 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:27.668 03:36:46 -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:04:27.668 03:36:46 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:27.668 03:36:46 -- setup/acl.sh@20 -- # continue 00:04:27.668 03:36:46 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:27.668 03:36:46 -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:04:27.668 03:36:46 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:27.668 03:36:46 -- setup/acl.sh@20 -- # continue 00:04:27.668 03:36:46 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:27.668 03:36:46 -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:04:27.668 03:36:46 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:27.668 03:36:46 -- setup/acl.sh@20 -- # continue 00:04:27.668 03:36:46 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:27.668 03:36:46 -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:04:27.668 03:36:46 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:27.668 03:36:46 -- setup/acl.sh@20 -- # continue 00:04:27.668 03:36:46 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:27.668 03:36:46 -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:04:27.668 03:36:46 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:27.668 03:36:46 -- setup/acl.sh@20 -- # continue 00:04:27.668 03:36:46 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:27.668 03:36:46 -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:04:27.668 03:36:46 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:27.668 03:36:46 -- setup/acl.sh@20 -- # continue 00:04:27.668 03:36:46 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:27.668 03:36:46 -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:04:27.668 03:36:46 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:27.668 03:36:46 -- setup/acl.sh@20 -- # continue 00:04:27.668 03:36:46 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:27.668 03:36:46 -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:04:27.668 03:36:46 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:27.668 03:36:46 -- setup/acl.sh@20 -- # continue 00:04:27.668 03:36:46 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:27.668 03:36:46 -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:04:27.668 03:36:46 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:27.668 03:36:46 -- setup/acl.sh@20 -- # continue 00:04:27.668 03:36:46 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:27.668 03:36:46 -- setup/acl.sh@19 -- # 
[[ 0000:80:04.4 == *:*:*.* ]] 00:04:27.668 03:36:46 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:27.668 03:36:46 -- setup/acl.sh@20 -- # continue 00:04:27.668 03:36:46 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:27.668 03:36:46 -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:04:27.668 03:36:46 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:27.668 03:36:46 -- setup/acl.sh@20 -- # continue 00:04:27.668 03:36:46 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:27.668 03:36:46 -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:04:27.668 03:36:46 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:27.668 03:36:46 -- setup/acl.sh@20 -- # continue 00:04:27.668 03:36:46 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:27.668 03:36:46 -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:04:27.668 03:36:46 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:27.668 03:36:46 -- setup/acl.sh@20 -- # continue 00:04:27.668 03:36:46 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:27.927 03:36:46 -- setup/acl.sh@19 -- # [[ 0000:88:00.0 == *:*:*.* ]] 00:04:27.927 03:36:46 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:27.927 03:36:46 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\8\8\:\0\0\.\0* ]] 00:04:27.927 03:36:46 -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:27.927 03:36:46 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:27.927 03:36:46 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:27.927 03:36:46 -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:04:27.927 03:36:46 -- setup/acl.sh@54 -- # run_test denied denied 00:04:27.927 03:36:46 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:27.927 03:36:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:27.927 03:36:46 -- common/autotest_common.sh@10 -- # set +x 00:04:27.927 ************************************ 00:04:27.927 START TEST denied 00:04:27.927 ************************************ 00:04:27.927 03:36:46 -- common/autotest_common.sh@1104 -- # denied 00:04:27.927 03:36:46 -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:88:00.0' 00:04:27.927 03:36:46 -- setup/acl.sh@38 -- # setup output config 00:04:27.927 03:36:46 -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:88:00.0' 00:04:27.927 03:36:46 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:27.927 03:36:46 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:29.305 0000:88:00.0 (8086 0a54): Skipping denied controller at 0000:88:00.0 00:04:29.305 03:36:48 -- setup/acl.sh@40 -- # verify 0000:88:00.0 00:04:29.305 03:36:48 -- setup/acl.sh@28 -- # local dev driver 00:04:29.305 03:36:48 -- setup/acl.sh@30 -- # for dev in "$@" 00:04:29.305 03:36:48 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:88:00.0 ]] 00:04:29.305 03:36:48 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:88:00.0/driver 00:04:29.305 03:36:48 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:29.305 03:36:48 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:29.305 03:36:48 -- setup/acl.sh@41 -- # setup reset 00:04:29.305 03:36:48 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:29.305 03:36:48 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:31.842 00:04:31.842 real 0m3.785s 00:04:31.842 user 0m1.154s 00:04:31.842 sys 0m1.723s 00:04:31.842 03:36:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:31.842 03:36:50 -- common/autotest_common.sh@10 -- # set +x 
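The denied test traced above (and the allowed test that follows) decides pass/fail by checking which kernel driver the NVMe controller at 0000:88:00.0 is bound to: setup/acl.sh resolves the /sys/bus/pci/devices/<bdf>/driver symlink with readlink -f and compares it against the expected driver. A minimal stand-alone sketch of that sysfs lookup (the script form and argument handling are illustrative, not part of the SPDK harness):

    #!/usr/bin/env bash
    # Print the kernel driver currently bound to a PCI device by resolving
    # the same /sys/bus/pci/devices/<bdf>/driver symlink the acl test checks.
    bdf=${1:-0000:88:00.0}
    link="/sys/bus/pci/devices/$bdf/driver"
    if [[ -e $link ]]; then
        echo "$bdf -> $(basename "$(readlink -f "$link")")"
    else
        echo "$bdf -> (no driver bound)"
    fi

On this node the lookup would report nvme while the denied test runs (the controller is blocked and stays on the nvme driver) and vfio-pci once the allowed test lets setup.sh config rebind it.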
00:04:31.842 ************************************ 00:04:31.842 END TEST denied 00:04:31.842 ************************************ 00:04:31.842 03:36:50 -- setup/acl.sh@55 -- # run_test allowed allowed 00:04:31.842 03:36:50 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:31.842 03:36:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:31.842 03:36:50 -- common/autotest_common.sh@10 -- # set +x 00:04:31.842 ************************************ 00:04:31.842 START TEST allowed 00:04:31.842 ************************************ 00:04:31.842 03:36:50 -- common/autotest_common.sh@1104 -- # allowed 00:04:31.842 03:36:50 -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:88:00.0 00:04:31.842 03:36:50 -- setup/acl.sh@45 -- # setup output config 00:04:31.842 03:36:50 -- setup/acl.sh@46 -- # grep -E '0000:88:00.0 .*: nvme -> .*' 00:04:31.842 03:36:50 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:31.842 03:36:50 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:34.379 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:04:34.379 03:36:52 -- setup/acl.sh@47 -- # verify 00:04:34.379 03:36:52 -- setup/acl.sh@28 -- # local dev driver 00:04:34.379 03:36:52 -- setup/acl.sh@48 -- # setup reset 00:04:34.379 03:36:52 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:34.379 03:36:52 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:35.756 00:04:35.756 real 0m3.949s 00:04:35.756 user 0m1.021s 00:04:35.756 sys 0m1.699s 00:04:35.756 03:36:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:35.756 03:36:54 -- common/autotest_common.sh@10 -- # set +x 00:04:35.756 ************************************ 00:04:35.756 END TEST allowed 00:04:35.756 ************************************ 00:04:35.756 00:04:35.756 real 0m10.347s 00:04:35.756 user 0m3.157s 00:04:35.756 sys 0m5.136s 00:04:35.756 03:36:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:35.756 03:36:54 -- common/autotest_common.sh@10 -- # set +x 00:04:35.756 ************************************ 00:04:35.756 END TEST acl 00:04:35.756 ************************************ 00:04:35.756 03:36:54 -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:04:35.756 03:36:54 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:35.756 03:36:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:35.756 03:36:54 -- common/autotest_common.sh@10 -- # set +x 00:04:35.756 ************************************ 00:04:35.756 START TEST hugepages 00:04:35.756 ************************************ 00:04:35.756 03:36:54 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:04:35.756 * Looking for test storage... 
00:04:35.756 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:35.756 03:36:54 -- setup/hugepages.sh@10 -- # nodes_sys=() 00:04:35.756 03:36:54 -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:04:35.756 03:36:54 -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:04:35.756 03:36:54 -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:04:35.756 03:36:54 -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:04:35.756 03:36:54 -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:04:35.756 03:36:54 -- setup/common.sh@17 -- # local get=Hugepagesize 00:04:35.756 03:36:54 -- setup/common.sh@18 -- # local node= 00:04:35.756 03:36:54 -- setup/common.sh@19 -- # local var val 00:04:35.756 03:36:54 -- setup/common.sh@20 -- # local mem_f mem 00:04:35.756 03:36:54 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:35.756 03:36:54 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:35.757 03:36:54 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:35.757 03:36:54 -- setup/common.sh@28 -- # mapfile -t mem 00:04:35.757 03:36:54 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:35.757 03:36:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.757 03:36:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.757 03:36:54 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 41177008 kB' 'MemAvailable: 44686632 kB' 'Buffers: 2704 kB' 'Cached: 12753848 kB' 'SwapCached: 0 kB' 'Active: 9753620 kB' 'Inactive: 3506552 kB' 'Active(anon): 9359268 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 507324 kB' 'Mapped: 206548 kB' 'Shmem: 8855648 kB' 'KReclaimable: 204688 kB' 'Slab: 581144 kB' 'SReclaimable: 204688 kB' 'SUnreclaim: 376456 kB' 'KernelStack: 12800 kB' 'PageTables: 8248 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36562304 kB' 'Committed_AS: 10485772 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196564 kB' 'VmallocChunk: 0 kB' 'Percpu: 37632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 1922652 kB' 'DirectMap2M: 15822848 kB' 'DirectMap1G: 51380224 kB' 00:04:35.757 03:36:54 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:35.757 03:36:54 -- setup/common.sh@32 -- # continue 00:04:35.757 03:36:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.757 03:36:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.757 03:36:54 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:35.757 03:36:54 -- setup/common.sh@32 -- # continue 00:04:35.757 03:36:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.757 03:36:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.757 03:36:54 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:35.757 03:36:54 -- setup/common.sh@32 -- # continue 00:04:35.757 03:36:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.757 03:36:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.757 03:36:54 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e 
]] 00:04:35.757 03:36:54 -- setup/common.sh@32 -- # continue 00:04:35.757 03:36:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.757 03:36:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.757 03:36:54 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:35.757 03:36:54 -- setup/common.sh@32 -- # continue 00:04:35.757 03:36:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.757 03:36:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.757 03:36:54 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:35.757 03:36:54 -- setup/common.sh@32 -- # continue 00:04:35.757 03:36:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.757 03:36:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.757 03:36:54 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:35.757 03:36:54 -- setup/common.sh@32 -- # continue 00:04:35.757 03:36:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.757 03:36:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.757 03:36:54 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:35.757 03:36:54 -- setup/common.sh@32 -- # continue 00:04:35.757 03:36:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.757 03:36:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.757 03:36:54 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:35.757 03:36:54 -- setup/common.sh@32 -- # continue 00:04:35.757 03:36:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.757 03:36:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.757 03:36:54 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:35.757 03:36:54 -- setup/common.sh@32 -- # continue 00:04:35.757 03:36:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.757 03:36:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.757 03:36:54 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:35.757 03:36:54 -- setup/common.sh@32 -- # continue 00:04:35.757 03:36:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.757 03:36:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.757 03:36:54 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:35.757 03:36:54 -- setup/common.sh@32 -- # continue 00:04:35.757 03:36:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.757 03:36:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.757 03:36:54 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:35.757 03:36:54 -- setup/common.sh@32 -- # continue 00:04:35.757 03:36:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.757 03:36:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.757 03:36:54 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:35.757 03:36:54 -- setup/common.sh@32 -- # continue 00:04:35.757 03:36:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.757 03:36:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.757 03:36:54 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:35.757 03:36:54 -- setup/common.sh@32 -- # continue 00:04:35.757 03:36:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.757 03:36:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.757 03:36:54 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:35.757 03:36:54 -- setup/common.sh@32 -- # continue 00:04:35.757 03:36:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.757 03:36:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.757 03:36:54 -- setup/common.sh@32 -- 
# [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:35.757 03:36:54 -- setup/common.sh@32 -- # continue 00:04:35.757 03:36:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.757 03:36:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.757 03:36:54 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:35.757 03:36:54 -- setup/common.sh@32 -- # continue 00:04:35.757 03:36:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.757 03:36:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.757 03:36:54 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:35.757 03:36:54 -- setup/common.sh@32 -- # continue 00:04:35.757 03:36:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.757 03:36:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.757 03:36:54 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:35.757 03:36:54 -- setup/common.sh@32 -- # continue 00:04:35.757 03:36:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.757 03:36:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.757 03:36:54 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:35.757 03:36:54 -- setup/common.sh@32 -- # continue 00:04:35.757 03:36:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.757 03:36:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.757 03:36:54 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:35.757 03:36:54 -- setup/common.sh@32 -- # continue 00:04:35.757 03:36:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.757 03:36:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.757 03:36:54 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:35.757 03:36:54 -- setup/common.sh@32 -- # continue 00:04:35.757 03:36:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.757 03:36:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.757 03:36:54 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:35.757 03:36:54 -- setup/common.sh@32 -- # continue 00:04:35.757 03:36:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.757 03:36:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.757 03:36:54 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:35.757 03:36:54 -- setup/common.sh@32 -- # continue 00:04:35.757 03:36:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.757 03:36:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.757 03:36:54 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:35.757 03:36:54 -- setup/common.sh@32 -- # continue 00:04:35.757 03:36:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.757 03:36:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.757 03:36:54 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:35.757 03:36:54 -- setup/common.sh@32 -- # continue 00:04:35.757 03:36:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.757 03:36:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.757 03:36:54 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:35.757 03:36:54 -- setup/common.sh@32 -- # continue 00:04:35.757 03:36:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.757 03:36:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.757 03:36:54 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:35.757 03:36:54 -- setup/common.sh@32 -- # continue 00:04:35.757 03:36:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.757 03:36:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.757 03:36:54 -- 
setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:35.757 03:36:54 -- setup/common.sh@32 -- # continue 00:04:35.757 03:36:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.757 03:36:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.757 03:36:54 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:35.757 03:36:54 -- setup/common.sh@32 -- # continue 00:04:35.757 03:36:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.757 03:36:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.757 03:36:54 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:35.757 03:36:54 -- setup/common.sh@32 -- # continue 00:04:35.757 03:36:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.757 03:36:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.757 03:36:54 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:35.757 03:36:54 -- setup/common.sh@32 -- # continue 00:04:35.757 03:36:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.757 03:36:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.757 03:36:54 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:35.757 03:36:54 -- setup/common.sh@32 -- # continue 00:04:35.757 03:36:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.757 03:36:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.757 03:36:54 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:35.757 03:36:54 -- setup/common.sh@32 -- # continue 00:04:35.757 03:36:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.757 03:36:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.757 03:36:54 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:35.757 03:36:54 -- setup/common.sh@32 -- # continue 00:04:35.757 03:36:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.757 03:36:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.757 03:36:54 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:35.757 03:36:54 -- setup/common.sh@32 -- # continue 00:04:35.757 03:36:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.757 03:36:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.758 03:36:54 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:35.758 03:36:54 -- setup/common.sh@32 -- # continue 00:04:35.758 03:36:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.758 03:36:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.758 03:36:54 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:35.758 03:36:54 -- setup/common.sh@32 -- # continue 00:04:35.758 03:36:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.758 03:36:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.758 03:36:54 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:35.758 03:36:54 -- setup/common.sh@32 -- # continue 00:04:35.758 03:36:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.758 03:36:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.758 03:36:54 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:35.758 03:36:54 -- setup/common.sh@32 -- # continue 00:04:35.758 03:36:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.758 03:36:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.758 03:36:54 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:35.758 03:36:54 -- setup/common.sh@32 -- # continue 00:04:35.758 03:36:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.758 03:36:54 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:35.758 03:36:54 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:35.758 03:36:54 -- setup/common.sh@32 -- # continue 00:04:35.758 03:36:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.758 03:36:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.758 03:36:54 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:35.758 03:36:54 -- setup/common.sh@32 -- # continue 00:04:35.758 03:36:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.758 03:36:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.758 03:36:54 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:35.758 03:36:54 -- setup/common.sh@32 -- # continue 00:04:35.758 03:36:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.758 03:36:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.758 03:36:54 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:35.758 03:36:54 -- setup/common.sh@32 -- # continue 00:04:35.758 03:36:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.758 03:36:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.758 03:36:54 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:35.758 03:36:54 -- setup/common.sh@32 -- # continue 00:04:35.758 03:36:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.758 03:36:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.758 03:36:54 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:35.758 03:36:54 -- setup/common.sh@32 -- # continue 00:04:35.758 03:36:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.758 03:36:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.758 03:36:54 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:35.758 03:36:54 -- setup/common.sh@32 -- # continue 00:04:35.758 03:36:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.758 03:36:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.758 03:36:54 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:35.758 03:36:54 -- setup/common.sh@32 -- # continue 00:04:35.758 03:36:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.758 03:36:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.758 03:36:54 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:35.758 03:36:54 -- setup/common.sh@32 -- # continue 00:04:35.758 03:36:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.758 03:36:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.758 03:36:54 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:35.758 03:36:54 -- setup/common.sh@32 -- # continue 00:04:35.758 03:36:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.758 03:36:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.758 03:36:54 -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:35.758 03:36:54 -- setup/common.sh@33 -- # echo 2048 00:04:35.758 03:36:54 -- setup/common.sh@33 -- # return 0 00:04:35.758 03:36:54 -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:04:35.758 03:36:54 -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:04:35.758 03:36:54 -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:04:35.758 03:36:54 -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:04:35.758 03:36:54 -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:04:35.758 03:36:54 -- setup/hugepages.sh@23 -- # unset -v HUGENODE 
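The get_meminfo trace above shows setup/common.sh scanning /proc/meminfo field by field until it reaches Hugepagesize and echoing 2048, which hugepages.sh then records as default_hugepages. A minimal sketch of the same lookup using awk (the function name is illustrative, not the SPDK helper, and it only reads the system-wide file, whereas the harness can also read per-node meminfo copies):

    #!/usr/bin/env bash
    # Print the value column of one /proc/meminfo field (sizes are in kB).
    get_meminfo_field() {
        local field=$1
        awk -v f="$field:" '$1 == f {print $2}' /proc/meminfo
    }

    get_meminfo_field Hugepagesize     # -> 2048 on this node
    get_meminfo_field HugePages_Total  # -> 2048 pages before the sub-tests reconfigure them

    # Per-NUMA-node hugepage counts live under
    # /sys/devices/system/node/node<N>/hugepages/hugepages-<size>kB/nr_hugepages;
    # clear_hp (traced just below) writes 0 into each of them before every sub-test.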
00:04:35.758 03:36:54 -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:04:35.758 03:36:54 -- setup/hugepages.sh@207 -- # get_nodes 00:04:35.758 03:36:54 -- setup/hugepages.sh@27 -- # local node 00:04:35.758 03:36:54 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:35.758 03:36:54 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:04:35.758 03:36:54 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:35.758 03:36:54 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:35.758 03:36:54 -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:35.758 03:36:54 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:35.758 03:36:54 -- setup/hugepages.sh@208 -- # clear_hp 00:04:35.758 03:36:54 -- setup/hugepages.sh@37 -- # local node hp 00:04:35.758 03:36:54 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:35.758 03:36:54 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:35.758 03:36:54 -- setup/hugepages.sh@41 -- # echo 0 00:04:35.758 03:36:54 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:35.758 03:36:54 -- setup/hugepages.sh@41 -- # echo 0 00:04:35.758 03:36:54 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:35.758 03:36:54 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:35.758 03:36:54 -- setup/hugepages.sh@41 -- # echo 0 00:04:35.758 03:36:54 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:35.758 03:36:54 -- setup/hugepages.sh@41 -- # echo 0 00:04:35.758 03:36:54 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:35.758 03:36:54 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:35.758 03:36:54 -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:04:35.758 03:36:54 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:35.758 03:36:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:35.758 03:36:54 -- common/autotest_common.sh@10 -- # set +x 00:04:35.758 ************************************ 00:04:35.758 START TEST default_setup 00:04:35.758 ************************************ 00:04:35.758 03:36:54 -- common/autotest_common.sh@1104 -- # default_setup 00:04:35.758 03:36:54 -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:04:35.758 03:36:54 -- setup/hugepages.sh@49 -- # local size=2097152 00:04:35.758 03:36:54 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:35.758 03:36:54 -- setup/hugepages.sh@51 -- # shift 00:04:35.758 03:36:54 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:35.758 03:36:54 -- setup/hugepages.sh@52 -- # local node_ids 00:04:35.758 03:36:54 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:35.758 03:36:54 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:35.758 03:36:54 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:35.758 03:36:54 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:35.758 03:36:54 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:35.758 03:36:54 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:35.758 03:36:54 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:35.758 03:36:54 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:35.758 03:36:54 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:35.758 03:36:54 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:35.758 03:36:54 -- 
setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:35.758 03:36:54 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:35.758 03:36:54 -- setup/hugepages.sh@73 -- # return 0 00:04:35.758 03:36:54 -- setup/hugepages.sh@137 -- # setup output 00:04:35.758 03:36:54 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:35.758 03:36:54 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:37.136 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:37.136 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:37.136 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:37.136 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:37.136 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:37.136 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:37.136 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:37.136 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:37.136 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:37.136 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:37.136 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:37.136 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:37.136 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:37.136 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:37.136 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:37.136 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:38.080 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:04:38.080 03:36:56 -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:04:38.080 03:36:56 -- setup/hugepages.sh@89 -- # local node 00:04:38.080 03:36:56 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:38.080 03:36:56 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:38.080 03:36:56 -- setup/hugepages.sh@92 -- # local surp 00:04:38.080 03:36:56 -- setup/hugepages.sh@93 -- # local resv 00:04:38.080 03:36:56 -- setup/hugepages.sh@94 -- # local anon 00:04:38.080 03:36:56 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:38.080 03:36:56 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:38.080 03:36:56 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:38.080 03:36:56 -- setup/common.sh@18 -- # local node= 00:04:38.080 03:36:56 -- setup/common.sh@19 -- # local var val 00:04:38.080 03:36:56 -- setup/common.sh@20 -- # local mem_f mem 00:04:38.080 03:36:56 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:38.080 03:36:56 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:38.080 03:36:56 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:38.080 03:36:56 -- setup/common.sh@28 -- # mapfile -t mem 00:04:38.080 03:36:56 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:38.080 03:36:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.080 03:36:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.080 03:36:56 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43298456 kB' 'MemAvailable: 46808096 kB' 'Buffers: 2704 kB' 'Cached: 12753940 kB' 'SwapCached: 0 kB' 'Active: 9772668 kB' 'Inactive: 3506552 kB' 'Active(anon): 9378316 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 525936 kB' 'Mapped: 206576 kB' 'Shmem: 8855740 kB' 'KReclaimable: 204720 kB' 'Slab: 580652 kB' 'SReclaimable: 204720 kB' 'SUnreclaim: 375932 kB' 'KernelStack: 13024 kB' 'PageTables: 9596 kB' 
'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 10506340 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196644 kB' 'VmallocChunk: 0 kB' 'Percpu: 37632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1922652 kB' 'DirectMap2M: 15822848 kB' 'DirectMap1G: 51380224 kB' 00:04:38.080 03:36:56 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.080 03:36:56 -- setup/common.sh@32 -- # continue 00:04:38.080 03:36:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.080 03:36:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.080 03:36:56 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.080 03:36:56 -- setup/common.sh@32 -- # continue 00:04:38.080 03:36:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.080 03:36:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.080 03:36:56 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.080 03:36:56 -- setup/common.sh@32 -- # continue 00:04:38.080 03:36:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.080 03:36:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.080 03:36:56 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.080 03:36:56 -- setup/common.sh@32 -- # continue 00:04:38.080 03:36:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.080 03:36:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.080 03:36:56 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.080 03:36:56 -- setup/common.sh@32 -- # continue 00:04:38.080 03:36:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.080 03:36:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.080 03:36:56 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.080 03:36:56 -- setup/common.sh@32 -- # continue 00:04:38.080 03:36:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.080 03:36:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.080 03:36:56 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.080 03:36:56 -- setup/common.sh@32 -- # continue 00:04:38.080 03:36:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.080 03:36:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.080 03:36:56 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.080 03:36:56 -- setup/common.sh@32 -- # continue 00:04:38.080 03:36:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.080 03:36:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.080 03:36:56 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.080 03:36:56 -- setup/common.sh@32 -- # continue 00:04:38.080 03:36:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.080 03:36:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.080 03:36:56 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.080 03:36:56 -- setup/common.sh@32 -- # continue 00:04:38.080 03:36:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.080 03:36:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.080 03:36:56 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.080 03:36:56 -- setup/common.sh@32 
-- # continue 00:04:38.080 03:36:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.080 03:36:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.080 03:36:56 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.080 03:36:56 -- setup/common.sh@32 -- # continue 00:04:38.080 03:36:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.080 03:36:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.080 03:36:56 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.080 03:36:56 -- setup/common.sh@32 -- # continue 00:04:38.080 03:36:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.080 03:36:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.080 03:36:56 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.080 03:36:56 -- setup/common.sh@32 -- # continue 00:04:38.080 03:36:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.080 03:36:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.080 03:36:56 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.080 03:36:56 -- setup/common.sh@32 -- # continue 00:04:38.080 03:36:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.080 03:36:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.080 03:36:56 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.080 03:36:56 -- setup/common.sh@32 -- # continue 00:04:38.080 03:36:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.080 03:36:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.080 03:36:56 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.080 03:36:56 -- setup/common.sh@32 -- # continue 00:04:38.080 03:36:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.080 03:36:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.080 03:36:56 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.080 03:36:56 -- setup/common.sh@32 -- # continue 00:04:38.080 03:36:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.080 03:36:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.080 03:36:56 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.080 03:36:56 -- setup/common.sh@32 -- # continue 00:04:38.080 03:36:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.080 03:36:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.080 03:36:56 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.080 03:36:56 -- setup/common.sh@32 -- # continue 00:04:38.080 03:36:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.080 03:36:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.080 03:36:56 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.080 03:36:56 -- setup/common.sh@32 -- # continue 00:04:38.080 03:36:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.080 03:36:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.080 03:36:56 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.080 03:36:56 -- setup/common.sh@32 -- # continue 00:04:38.080 03:36:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.080 03:36:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.080 03:36:56 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.080 03:36:56 -- setup/common.sh@32 -- # continue 00:04:38.080 03:36:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.080 03:36:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.080 03:36:56 -- setup/common.sh@32 -- # [[ KReclaimable == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.080 03:36:56 -- setup/common.sh@32 -- # continue 00:04:38.080 03:36:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.080 03:36:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.081 03:36:56 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.081 03:36:56 -- setup/common.sh@32 -- # continue 00:04:38.081 03:36:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.081 03:36:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.081 03:36:56 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.081 03:36:56 -- setup/common.sh@32 -- # continue 00:04:38.081 03:36:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.081 03:36:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.081 03:36:56 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.081 03:36:56 -- setup/common.sh@32 -- # continue 00:04:38.081 03:36:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.081 03:36:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.081 03:36:56 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.081 03:36:56 -- setup/common.sh@32 -- # continue 00:04:38.081 03:36:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.081 03:36:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.081 03:36:56 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.081 03:36:56 -- setup/common.sh@32 -- # continue 00:04:38.081 03:36:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.081 03:36:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.081 03:36:56 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.081 03:36:56 -- setup/common.sh@32 -- # continue 00:04:38.081 03:36:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.081 03:36:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.081 03:36:56 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.081 03:36:56 -- setup/common.sh@32 -- # continue 00:04:38.081 03:36:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.081 03:36:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.081 03:36:56 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.081 03:36:56 -- setup/common.sh@32 -- # continue 00:04:38.081 03:36:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.081 03:36:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.081 03:36:56 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.081 03:36:56 -- setup/common.sh@32 -- # continue 00:04:38.081 03:36:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.081 03:36:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.081 03:36:56 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.081 03:36:56 -- setup/common.sh@32 -- # continue 00:04:38.081 03:36:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.081 03:36:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.081 03:36:56 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.081 03:36:56 -- setup/common.sh@32 -- # continue 00:04:38.081 03:36:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.081 03:36:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.081 03:36:56 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.081 03:36:56 -- setup/common.sh@32 -- # continue 00:04:38.081 03:36:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.081 03:36:56 -- setup/common.sh@31 -- # read 
-r var val _ 00:04:38.081 03:36:56 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.081 03:36:56 -- setup/common.sh@32 -- # continue 00:04:38.081 03:36:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.081 03:36:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.081 03:36:56 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.081 03:36:56 -- setup/common.sh@32 -- # continue 00:04:38.081 03:36:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.081 03:36:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.081 03:36:56 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.081 03:36:56 -- setup/common.sh@32 -- # continue 00:04:38.081 03:36:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.081 03:36:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.081 03:36:56 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.081 03:36:56 -- setup/common.sh@32 -- # continue 00:04:38.081 03:36:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.081 03:36:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.081 03:36:56 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.081 03:36:56 -- setup/common.sh@33 -- # echo 0 00:04:38.081 03:36:56 -- setup/common.sh@33 -- # return 0 00:04:38.081 03:36:56 -- setup/hugepages.sh@97 -- # anon=0 00:04:38.081 03:36:56 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:38.081 03:36:56 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:38.081 03:36:56 -- setup/common.sh@18 -- # local node= 00:04:38.081 03:36:56 -- setup/common.sh@19 -- # local var val 00:04:38.081 03:36:56 -- setup/common.sh@20 -- # local mem_f mem 00:04:38.081 03:36:56 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:38.081 03:36:56 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:38.081 03:36:56 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:38.081 03:36:56 -- setup/common.sh@28 -- # mapfile -t mem 00:04:38.081 03:36:56 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:38.081 03:36:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.081 03:36:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.081 03:36:56 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43309732 kB' 'MemAvailable: 46819372 kB' 'Buffers: 2704 kB' 'Cached: 12753948 kB' 'SwapCached: 0 kB' 'Active: 9771176 kB' 'Inactive: 3506552 kB' 'Active(anon): 9376824 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 524332 kB' 'Mapped: 206516 kB' 'Shmem: 8855748 kB' 'KReclaimable: 204720 kB' 'Slab: 580880 kB' 'SReclaimable: 204720 kB' 'SUnreclaim: 376160 kB' 'KernelStack: 12848 kB' 'PageTables: 8400 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 10506352 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196612 kB' 'VmallocChunk: 0 kB' 'Percpu: 37632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1922652 kB' 'DirectMap2M: 15822848 kB' 
'DirectMap1G: 51380224 kB' 00:04:38.081 03:36:56 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.081 03:36:56 -- setup/common.sh@32 -- # continue 00:04:38.081 03:36:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.081 03:36:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.081 03:36:56 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.081 03:36:56 -- setup/common.sh@32 -- # continue 00:04:38.081 03:36:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.081 03:36:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.081 03:36:56 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.081 03:36:56 -- setup/common.sh@32 -- # continue 00:04:38.081 03:36:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.081 03:36:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.081 03:36:56 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.081 03:36:56 -- setup/common.sh@32 -- # continue 00:04:38.081 03:36:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.081 03:36:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.081 03:36:56 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.081 03:36:56 -- setup/common.sh@32 -- # continue 00:04:38.081 03:36:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.081 03:36:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.081 03:36:56 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.081 03:36:56 -- setup/common.sh@32 -- # continue 00:04:38.081 03:36:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.081 03:36:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.081 03:36:56 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.081 03:36:56 -- setup/common.sh@32 -- # continue 00:04:38.081 03:36:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.081 03:36:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.081 03:36:56 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.081 03:36:56 -- setup/common.sh@32 -- # continue 00:04:38.081 03:36:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.081 03:36:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.081 03:36:56 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.081 03:36:56 -- setup/common.sh@32 -- # continue 00:04:38.081 03:36:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.081 03:36:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.081 03:36:56 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.081 03:36:56 -- setup/common.sh@32 -- # continue 00:04:38.081 03:36:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.081 03:36:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.081 03:36:56 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.081 03:36:56 -- setup/common.sh@32 -- # continue 00:04:38.081 03:36:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.081 03:36:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.081 03:36:56 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.081 03:36:56 -- setup/common.sh@32 -- # continue 00:04:38.081 03:36:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.081 03:36:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.081 03:36:56 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.081 03:36:56 -- setup/common.sh@32 -- # continue 
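The records above are the xtrace of the test's get_meminfo helper walking every /proc/meminfo field while looking for AnonHugePages (it returns 0, hence anon=0), and the same walk now repeats for HugePages_Surp. Below is a minimal standalone sketch of that lookup logic; it is not the SPDK setup/common.sh source, and the function name and prefix handling are illustrative only.

#!/usr/bin/env bash
# Illustrative sketch of the traced lookup: read one field from /proc/meminfo,
# or from a per-node meminfo file when a NUMA node number is given.
get_meminfo_sketch() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local line var val _
    while read -r line; do
        line=${line#"Node $node "}        # per-node files prefix every line with "Node N"
        IFS=': ' read -r var val _ <<< "$line"
        if [[ $var == "$get" ]]; then     # literal match, like the [[ ... == \H\u\g\e... ]] records above
            echo "${val:-0}"
            return 0
        fi
    done < "$mem_f"
    echo 0                                # key not present
}
# Example (values as reported by this runner):
#   get_meminfo_sketch HugePages_Total   -> 1024
#   get_meminfo_sketch AnonHugePages     -> 0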
00:04:38.081 03:36:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.081 03:36:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.081 03:36:56 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.081 03:36:56 -- setup/common.sh@32 -- # continue 00:04:38.081 03:36:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.081 03:36:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.081 03:36:56 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.081 03:36:56 -- setup/common.sh@32 -- # continue 00:04:38.081 03:36:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.081 03:36:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.081 03:36:56 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.081 03:36:56 -- setup/common.sh@32 -- # continue 00:04:38.081 03:36:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.081 03:36:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.081 03:36:56 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.081 03:36:56 -- setup/common.sh@32 -- # continue 00:04:38.081 03:36:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.081 03:36:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.081 03:36:56 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.081 03:36:56 -- setup/common.sh@32 -- # continue 00:04:38.081 03:36:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.081 03:36:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.081 03:36:56 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.081 03:36:56 -- setup/common.sh@32 -- # continue 00:04:38.081 03:36:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.082 03:36:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.082 03:36:56 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.082 03:36:56 -- setup/common.sh@32 -- # continue 00:04:38.082 03:36:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.082 03:36:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.082 03:36:56 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.082 03:36:56 -- setup/common.sh@32 -- # continue 00:04:38.082 03:36:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.082 03:36:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.082 03:36:56 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.082 03:36:56 -- setup/common.sh@32 -- # continue 00:04:38.082 03:36:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.082 03:36:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.082 03:36:56 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.082 03:36:56 -- setup/common.sh@32 -- # continue 00:04:38.082 03:36:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.082 03:36:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.082 03:36:56 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.082 03:36:56 -- setup/common.sh@32 -- # continue 00:04:38.082 03:36:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.082 03:36:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.082 03:36:56 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.082 03:36:56 -- setup/common.sh@32 -- # continue 00:04:38.082 03:36:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.082 03:36:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.082 03:36:56 -- setup/common.sh@32 -- # [[ SReclaimable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.082 03:36:56 -- setup/common.sh@32 -- # continue 00:04:38.082 03:36:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.082 03:36:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.082 03:36:56 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.082 03:36:56 -- setup/common.sh@32 -- # continue 00:04:38.082 03:36:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.082 03:36:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.082 03:36:56 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.082 03:36:56 -- setup/common.sh@32 -- # continue 00:04:38.082 03:36:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.082 03:36:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.082 03:36:56 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.082 03:36:56 -- setup/common.sh@32 -- # continue 00:04:38.082 03:36:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.082 03:36:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.082 03:36:56 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.082 03:36:56 -- setup/common.sh@32 -- # continue 00:04:38.082 03:36:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.082 03:36:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.082 03:36:56 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.082 03:36:56 -- setup/common.sh@32 -- # continue 00:04:38.082 03:36:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.082 03:36:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.082 03:36:56 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.082 03:36:56 -- setup/common.sh@32 -- # continue 00:04:38.082 03:36:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.082 03:36:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.082 03:36:56 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.082 03:36:56 -- setup/common.sh@32 -- # continue 00:04:38.082 03:36:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.082 03:36:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.082 03:36:56 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.082 03:36:56 -- setup/common.sh@32 -- # continue 00:04:38.082 03:36:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.082 03:36:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.082 03:36:56 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.082 03:36:56 -- setup/common.sh@32 -- # continue 00:04:38.082 03:36:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.082 03:36:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.082 03:36:56 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.082 03:36:56 -- setup/common.sh@32 -- # continue 00:04:38.082 03:36:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.082 03:36:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.082 03:36:56 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.082 03:36:56 -- setup/common.sh@32 -- # continue 00:04:38.082 03:36:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.082 03:36:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.082 03:36:56 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.082 03:36:56 -- setup/common.sh@32 -- # continue 00:04:38.082 03:36:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.082 03:36:56 
-- setup/common.sh@31 -- # read -r var val _ 00:04:38.082 03:36:56 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.082 03:36:56 -- setup/common.sh@32 -- # continue 00:04:38.082 03:36:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.082 03:36:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.082 03:36:56 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.082 03:36:56 -- setup/common.sh@32 -- # continue 00:04:38.082 03:36:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.082 03:36:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.082 03:36:56 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.082 03:36:56 -- setup/common.sh@32 -- # continue 00:04:38.082 03:36:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.082 03:36:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.082 03:36:56 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.082 03:36:56 -- setup/common.sh@32 -- # continue 00:04:38.082 03:36:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.082 03:36:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.082 03:36:56 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.082 03:36:56 -- setup/common.sh@32 -- # continue 00:04:38.082 03:36:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.082 03:36:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.082 03:36:56 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.082 03:36:56 -- setup/common.sh@32 -- # continue 00:04:38.082 03:36:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.082 03:36:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.082 03:36:56 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.082 03:36:56 -- setup/common.sh@32 -- # continue 00:04:38.082 03:36:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.082 03:36:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.082 03:36:56 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.082 03:36:56 -- setup/common.sh@32 -- # continue 00:04:38.082 03:36:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.082 03:36:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.082 03:36:56 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.082 03:36:56 -- setup/common.sh@32 -- # continue 00:04:38.082 03:36:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.082 03:36:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.082 03:36:56 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.082 03:36:56 -- setup/common.sh@32 -- # continue 00:04:38.082 03:36:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.082 03:36:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.082 03:36:56 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.082 03:36:56 -- setup/common.sh@32 -- # continue 00:04:38.082 03:36:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.082 03:36:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.082 03:36:56 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.082 03:36:56 -- setup/common.sh@32 -- # continue 00:04:38.082 03:36:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.082 03:36:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.082 03:36:56 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:38.082 03:36:56 -- setup/common.sh@32 -- # continue 00:04:38.082 03:36:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.082 03:36:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.082 03:36:56 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.082 03:36:56 -- setup/common.sh@33 -- # echo 0 00:04:38.082 03:36:56 -- setup/common.sh@33 -- # return 0 00:04:38.082 03:36:56 -- setup/hugepages.sh@99 -- # surp=0 00:04:38.082 03:36:56 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:38.082 03:36:56 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:38.082 03:36:56 -- setup/common.sh@18 -- # local node= 00:04:38.082 03:36:56 -- setup/common.sh@19 -- # local var val 00:04:38.082 03:36:56 -- setup/common.sh@20 -- # local mem_f mem 00:04:38.082 03:36:56 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:38.082 03:36:56 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:38.082 03:36:56 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:38.082 03:36:56 -- setup/common.sh@28 -- # mapfile -t mem 00:04:38.082 03:36:56 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:38.082 03:36:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.082 03:36:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.082 03:36:56 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43309776 kB' 'MemAvailable: 46819416 kB' 'Buffers: 2704 kB' 'Cached: 12753960 kB' 'SwapCached: 0 kB' 'Active: 9771116 kB' 'Inactive: 3506552 kB' 'Active(anon): 9376764 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 524340 kB' 'Mapped: 206516 kB' 'Shmem: 8855760 kB' 'KReclaimable: 204720 kB' 'Slab: 580964 kB' 'SReclaimable: 204720 kB' 'SUnreclaim: 376244 kB' 'KernelStack: 12784 kB' 'PageTables: 8328 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 10506368 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196644 kB' 'VmallocChunk: 0 kB' 'Percpu: 37632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1922652 kB' 'DirectMap2M: 15822848 kB' 'DirectMap1G: 51380224 kB' 00:04:38.082 03:36:56 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.082 03:36:56 -- setup/common.sh@32 -- # continue 00:04:38.082 03:36:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.082 03:36:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.082 03:36:56 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.082 03:36:56 -- setup/common.sh@32 -- # continue 00:04:38.082 03:36:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.082 03:36:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.082 03:36:56 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.082 03:36:56 -- setup/common.sh@32 -- # continue 00:04:38.082 03:36:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.083 03:36:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.083 03:36:56 -- setup/common.sh@32 -- # [[ Buffers == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.083 03:36:56 -- setup/common.sh@32 -- # continue 00:04:38.083 03:36:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.083 03:36:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.083 03:36:56 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.083 03:36:56 -- setup/common.sh@32 -- # continue 00:04:38.083 03:36:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.083 03:36:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.083 03:36:56 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.083 03:36:56 -- setup/common.sh@32 -- # continue 00:04:38.083 03:36:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.083 03:36:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.083 03:36:56 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.083 03:36:56 -- setup/common.sh@32 -- # continue 00:04:38.083 03:36:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.083 03:36:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.083 03:36:56 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.083 03:36:56 -- setup/common.sh@32 -- # continue 00:04:38.083 03:36:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.083 03:36:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.083 03:36:56 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.083 03:36:56 -- setup/common.sh@32 -- # continue 00:04:38.083 03:36:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.083 03:36:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.083 03:36:56 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.083 03:36:56 -- setup/common.sh@32 -- # continue 00:04:38.083 03:36:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.083 03:36:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.083 03:36:56 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.083 03:36:56 -- setup/common.sh@32 -- # continue 00:04:38.083 03:36:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.083 03:36:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.083 03:36:56 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.083 03:36:56 -- setup/common.sh@32 -- # continue 00:04:38.083 03:36:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.083 03:36:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.083 03:36:56 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.083 03:36:56 -- setup/common.sh@32 -- # continue 00:04:38.083 03:36:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.083 03:36:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.083 03:36:56 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.083 03:36:56 -- setup/common.sh@32 -- # continue 00:04:38.083 03:36:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.083 03:36:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.083 03:36:56 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.083 03:36:56 -- setup/common.sh@32 -- # continue 00:04:38.083 03:36:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.083 03:36:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.083 03:36:56 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.083 03:36:56 -- setup/common.sh@32 -- # continue 00:04:38.083 03:36:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.083 03:36:56 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:38.083 03:36:56 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.083 03:36:56 -- setup/common.sh@32 -- # continue 00:04:38.083 03:36:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.083 03:36:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.083 03:36:56 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.083 03:36:56 -- setup/common.sh@32 -- # continue 00:04:38.083 03:36:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.083 03:36:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.083 03:36:56 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.083 03:36:56 -- setup/common.sh@32 -- # continue 00:04:38.083 03:36:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.083 03:36:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.083 03:36:56 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.083 03:36:56 -- setup/common.sh@32 -- # continue 00:04:38.083 03:36:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.083 03:36:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.083 03:36:56 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.083 03:36:56 -- setup/common.sh@32 -- # continue 00:04:38.083 03:36:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.083 03:36:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.083 03:36:56 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.083 03:36:56 -- setup/common.sh@32 -- # continue 00:04:38.083 03:36:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.083 03:36:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.083 03:36:56 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.083 03:36:56 -- setup/common.sh@32 -- # continue 00:04:38.083 03:36:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.083 03:36:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.083 03:36:56 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.083 03:36:56 -- setup/common.sh@32 -- # continue 00:04:38.083 03:36:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.083 03:36:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.083 03:36:56 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.083 03:36:56 -- setup/common.sh@32 -- # continue 00:04:38.083 03:36:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.083 03:36:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.083 03:36:56 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.083 03:36:56 -- setup/common.sh@32 -- # continue 00:04:38.083 03:36:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.083 03:36:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.083 03:36:56 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.083 03:36:56 -- setup/common.sh@32 -- # continue 00:04:38.083 03:36:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.083 03:36:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.083 03:36:56 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.083 03:36:56 -- setup/common.sh@32 -- # continue 00:04:38.083 03:36:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.083 03:36:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.083 03:36:56 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.083 03:36:56 -- setup/common.sh@32 -- # continue 
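A note on the heavily escaped strings such as \H\u\g\e\P\a\g\e\s\_\R\s\v\d that fill these records: that is simply how bash's xtrace (set -x) prints the right-hand side of [[ $var == "$pattern" ]] when the pattern is quoted, escaping each character to show the comparison is literal rather than a glob. A small reproduction with hypothetical variable names:

set -x
get=HugePages_Rsvd
var=MemTotal
[[ $var == "$get" ]] || echo "no match"   # the trace line reads: [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
set +x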
00:04:38.083 03:36:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.083 03:36:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.083 03:36:56 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.083 03:36:56 -- setup/common.sh@32 -- # continue 00:04:38.083 03:36:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.083 03:36:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.083 03:36:56 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.083 03:36:56 -- setup/common.sh@32 -- # continue 00:04:38.083 03:36:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.083 03:36:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.083 03:36:56 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.083 03:36:56 -- setup/common.sh@32 -- # continue 00:04:38.083 03:36:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.083 03:36:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.083 03:36:56 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.083 03:36:56 -- setup/common.sh@32 -- # continue 00:04:38.083 03:36:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.083 03:36:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.083 03:36:56 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.083 03:36:56 -- setup/common.sh@32 -- # continue 00:04:38.083 03:36:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.083 03:36:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.083 03:36:56 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.083 03:36:56 -- setup/common.sh@32 -- # continue 00:04:38.083 03:36:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.083 03:36:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.083 03:36:56 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.083 03:36:56 -- setup/common.sh@32 -- # continue 00:04:38.083 03:36:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.083 03:36:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.083 03:36:56 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.083 03:36:56 -- setup/common.sh@32 -- # continue 00:04:38.083 03:36:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.083 03:36:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.083 03:36:56 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.083 03:36:56 -- setup/common.sh@32 -- # continue 00:04:38.083 03:36:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.083 03:36:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.083 03:36:56 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.083 03:36:56 -- setup/common.sh@32 -- # continue 00:04:38.083 03:36:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.083 03:36:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.083 03:36:56 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.083 03:36:56 -- setup/common.sh@32 -- # continue 00:04:38.083 03:36:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.083 03:36:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.083 03:36:56 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.083 03:36:56 -- setup/common.sh@32 -- # continue 00:04:38.083 03:36:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.083 03:36:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.083 03:36:56 -- 
setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.083 03:36:56 -- setup/common.sh@32 -- # continue 00:04:38.083 03:36:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.083 03:36:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.083 03:36:56 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.083 03:36:56 -- setup/common.sh@32 -- # continue 00:04:38.083 03:36:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.083 03:36:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.083 03:36:56 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.083 03:36:56 -- setup/common.sh@32 -- # continue 00:04:38.083 03:36:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.083 03:36:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.083 03:36:56 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.083 03:36:56 -- setup/common.sh@32 -- # continue 00:04:38.083 03:36:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.083 03:36:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.083 03:36:56 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.083 03:36:56 -- setup/common.sh@32 -- # continue 00:04:38.083 03:36:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.083 03:36:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.083 03:36:56 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.083 03:36:56 -- setup/common.sh@32 -- # continue 00:04:38.084 03:36:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.084 03:36:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.084 03:36:56 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.084 03:36:56 -- setup/common.sh@32 -- # continue 00:04:38.084 03:36:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.084 03:36:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.084 03:36:56 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.084 03:36:56 -- setup/common.sh@32 -- # continue 00:04:38.084 03:36:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.084 03:36:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.084 03:36:56 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.084 03:36:56 -- setup/common.sh@32 -- # continue 00:04:38.084 03:36:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.084 03:36:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.084 03:36:56 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.084 03:36:56 -- setup/common.sh@33 -- # echo 0 00:04:38.084 03:36:56 -- setup/common.sh@33 -- # return 0 00:04:38.084 03:36:56 -- setup/hugepages.sh@100 -- # resv=0 00:04:38.084 03:36:56 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:38.084 nr_hugepages=1024 00:04:38.084 03:36:56 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:38.084 resv_hugepages=0 00:04:38.084 03:36:56 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:38.084 surplus_hugepages=0 00:04:38.084 03:36:56 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:38.084 anon_hugepages=0 00:04:38.084 03:36:56 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:38.084 03:36:56 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:38.084 03:36:56 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:38.084 03:36:56 -- setup/common.sh@17 -- # local 
get=HugePages_Total 00:04:38.084 03:36:56 -- setup/common.sh@18 -- # local node= 00:04:38.084 03:36:56 -- setup/common.sh@19 -- # local var val 00:04:38.084 03:36:56 -- setup/common.sh@20 -- # local mem_f mem 00:04:38.084 03:36:56 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:38.084 03:36:56 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:38.084 03:36:56 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:38.084 03:36:56 -- setup/common.sh@28 -- # mapfile -t mem 00:04:38.084 03:36:56 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:38.084 03:36:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.084 03:36:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.084 03:36:56 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43309612 kB' 'MemAvailable: 46819252 kB' 'Buffers: 2704 kB' 'Cached: 12753972 kB' 'SwapCached: 0 kB' 'Active: 9771612 kB' 'Inactive: 3506552 kB' 'Active(anon): 9377260 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 524772 kB' 'Mapped: 206952 kB' 'Shmem: 8855772 kB' 'KReclaimable: 204720 kB' 'Slab: 580964 kB' 'SReclaimable: 204720 kB' 'SUnreclaim: 376244 kB' 'KernelStack: 12784 kB' 'PageTables: 8328 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 10507472 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196644 kB' 'VmallocChunk: 0 kB' 'Percpu: 37632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1922652 kB' 'DirectMap2M: 15822848 kB' 'DirectMap1G: 51380224 kB' 00:04:38.084 03:36:56 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.084 03:36:56 -- setup/common.sh@32 -- # continue 00:04:38.084 03:36:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.084 03:36:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.084 03:36:56 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.084 03:36:56 -- setup/common.sh@32 -- # continue 00:04:38.084 03:36:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.084 03:36:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.084 03:36:56 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.084 03:36:56 -- setup/common.sh@32 -- # continue 00:04:38.084 03:36:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.084 03:36:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.084 03:36:56 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.084 03:36:56 -- setup/common.sh@32 -- # continue 00:04:38.084 03:36:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.084 03:36:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.084 03:36:56 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.084 03:36:56 -- setup/common.sh@32 -- # continue 00:04:38.084 03:36:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.084 03:36:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.084 03:36:56 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
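At this point the test has established anon=0, surp=0 and resv=0, echoed nr_hugepages=1024, and is re-reading HugePages_Total so it can assert that the kernel's hugepage accounting matches what was requested. The consistency check in the trace boils down to the following arithmetic; the variable names are illustrative, the values are taken from this run.

hugepages_total=1024   # HugePages_Total read back from /proc/meminfo
nr_hugepages=1024      # pages the test requested
surp=0                 # HugePages_Surp
resv=0                 # HugePages_Rsvd
(( hugepages_total == nr_hugepages + surp + resv )) && echo "accounting consistent: 1024 == 1024 + 0 + 0"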
00:04:38.084 03:36:56 -- setup/common.sh@32 -- # continue 00:04:38.084 03:36:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.084 03:36:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.084 03:36:56 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.084 03:36:56 -- setup/common.sh@32 -- # continue 00:04:38.084 03:36:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.084 03:36:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.084 03:36:56 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.084 03:36:56 -- setup/common.sh@32 -- # continue 00:04:38.084 03:36:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.084 03:36:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.084 03:36:56 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.084 03:36:56 -- setup/common.sh@32 -- # continue 00:04:38.084 03:36:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.084 03:36:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.084 03:36:56 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.084 03:36:56 -- setup/common.sh@32 -- # continue 00:04:38.084 03:36:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.084 03:36:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.084 03:36:56 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.084 03:36:56 -- setup/common.sh@32 -- # continue 00:04:38.084 03:36:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.084 03:36:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.084 03:36:56 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.084 03:36:56 -- setup/common.sh@32 -- # continue 00:04:38.084 03:36:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.084 03:36:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.084 03:36:56 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.084 03:36:56 -- setup/common.sh@32 -- # continue 00:04:38.084 03:36:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.084 03:36:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.084 03:36:56 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.084 03:36:56 -- setup/common.sh@32 -- # continue 00:04:38.084 03:36:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.084 03:36:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.084 03:36:56 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.084 03:36:56 -- setup/common.sh@32 -- # continue 00:04:38.084 03:36:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.084 03:36:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.084 03:36:56 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.084 03:36:56 -- setup/common.sh@32 -- # continue 00:04:38.084 03:36:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.084 03:36:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.084 03:36:56 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.084 03:36:56 -- setup/common.sh@32 -- # continue 00:04:38.084 03:36:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.084 03:36:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.084 03:36:56 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.084 03:36:56 -- setup/common.sh@32 -- # continue 00:04:38.084 03:36:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.084 03:36:56 -- setup/common.sh@31 -- # 
read -r var val _ 00:04:38.084 03:36:56 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.084 03:36:56 -- setup/common.sh@32 -- # continue 00:04:38.084 03:36:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.084 03:36:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.084 03:36:56 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.084 03:36:56 -- setup/common.sh@32 -- # continue 00:04:38.084 03:36:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.084 03:36:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.084 03:36:56 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.084 03:36:56 -- setup/common.sh@32 -- # continue 00:04:38.084 03:36:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.085 03:36:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.085 03:36:56 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.085 03:36:56 -- setup/common.sh@32 -- # continue 00:04:38.085 03:36:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.085 03:36:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.085 03:36:56 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.085 03:36:56 -- setup/common.sh@32 -- # continue 00:04:38.085 03:36:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.085 03:36:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.085 03:36:56 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.085 03:36:56 -- setup/common.sh@32 -- # continue 00:04:38.085 03:36:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.085 03:36:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.085 03:36:56 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.085 03:36:56 -- setup/common.sh@32 -- # continue 00:04:38.085 03:36:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.085 03:36:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.085 03:36:56 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.085 03:36:56 -- setup/common.sh@32 -- # continue 00:04:38.085 03:36:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.085 03:36:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.085 03:36:56 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.085 03:36:56 -- setup/common.sh@32 -- # continue 00:04:38.085 03:36:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.085 03:36:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.085 03:36:56 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.085 03:36:56 -- setup/common.sh@32 -- # continue 00:04:38.085 03:36:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.085 03:36:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.085 03:36:56 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.085 03:36:56 -- setup/common.sh@32 -- # continue 00:04:38.085 03:36:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.085 03:36:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.085 03:36:56 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.085 03:36:56 -- setup/common.sh@32 -- # continue 00:04:38.085 03:36:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.085 03:36:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.085 03:36:56 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.085 03:36:56 -- setup/common.sh@32 -- # 
continue 00:04:38.085 03:36:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.085 03:36:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.085 03:36:56 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.085 03:36:56 -- setup/common.sh@32 -- # continue 00:04:38.085 03:36:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.085 03:36:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.085 03:36:56 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.085 03:36:56 -- setup/common.sh@32 -- # continue 00:04:38.085 03:36:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.085 03:36:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.085 03:36:56 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.085 03:36:56 -- setup/common.sh@32 -- # continue 00:04:38.085 03:36:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.085 03:36:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.085 03:36:56 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.085 03:36:56 -- setup/common.sh@32 -- # continue 00:04:38.085 03:36:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.085 03:36:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.085 03:36:56 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.085 03:36:56 -- setup/common.sh@32 -- # continue 00:04:38.085 03:36:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.085 03:36:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.085 03:36:56 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.085 03:36:56 -- setup/common.sh@32 -- # continue 00:04:38.085 03:36:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.085 03:36:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.085 03:36:56 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.085 03:36:56 -- setup/common.sh@32 -- # continue 00:04:38.085 03:36:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.085 03:36:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.085 03:36:56 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.085 03:36:56 -- setup/common.sh@32 -- # continue 00:04:38.085 03:36:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.085 03:36:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.085 03:36:56 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.085 03:36:56 -- setup/common.sh@32 -- # continue 00:04:38.085 03:36:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.085 03:36:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.085 03:36:56 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.085 03:36:56 -- setup/common.sh@32 -- # continue 00:04:38.085 03:36:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.085 03:36:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.085 03:36:56 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.085 03:36:56 -- setup/common.sh@32 -- # continue 00:04:38.085 03:36:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.085 03:36:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.085 03:36:56 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.085 03:36:56 -- setup/common.sh@32 -- # continue 00:04:38.085 03:36:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.085 03:36:56 -- setup/common.sh@31 -- # read -r var val _ 
00:04:38.085 03:36:56 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.085 03:36:56 -- setup/common.sh@32 -- # continue 00:04:38.085 03:36:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.085 03:36:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.085 03:36:56 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.085 03:36:56 -- setup/common.sh@32 -- # continue 00:04:38.085 03:36:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.085 03:36:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.085 03:36:56 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.085 03:36:56 -- setup/common.sh@32 -- # continue 00:04:38.085 03:36:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.085 03:36:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.085 03:36:56 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.085 03:36:56 -- setup/common.sh@32 -- # continue 00:04:38.085 03:36:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.085 03:36:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.085 03:36:56 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.085 03:36:56 -- setup/common.sh@32 -- # continue 00:04:38.085 03:36:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.085 03:36:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.085 03:36:56 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.085 03:36:56 -- setup/common.sh@33 -- # echo 1024 00:04:38.085 03:36:56 -- setup/common.sh@33 -- # return 0 00:04:38.085 03:36:56 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:38.085 03:36:56 -- setup/hugepages.sh@112 -- # get_nodes 00:04:38.085 03:36:56 -- setup/hugepages.sh@27 -- # local node 00:04:38.085 03:36:56 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:38.085 03:36:56 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:38.085 03:36:56 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:38.085 03:36:56 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:38.085 03:36:56 -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:38.085 03:36:56 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:38.085 03:36:56 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:38.085 03:36:56 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:38.085 03:36:56 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:38.085 03:36:56 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:38.085 03:36:56 -- setup/common.sh@18 -- # local node=0 00:04:38.085 03:36:56 -- setup/common.sh@19 -- # local var val 00:04:38.085 03:36:56 -- setup/common.sh@20 -- # local mem_f mem 00:04:38.085 03:36:56 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:38.085 03:36:56 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:38.085 03:36:56 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:38.085 03:36:56 -- setup/common.sh@28 -- # mapfile -t mem 00:04:38.085 03:36:56 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:38.085 03:36:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.085 03:36:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.085 03:36:56 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 25273980 kB' 'MemUsed: 7555904 kB' 'SwapCached: 0 
kB' 'Active: 4166088 kB' 'Inactive: 108416 kB' 'Active(anon): 4055200 kB' 'Inactive(anon): 0 kB' 'Active(file): 110888 kB' 'Inactive(file): 108416 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 4030996 kB' 'Mapped: 36796 kB' 'AnonPages: 246676 kB' 'Shmem: 3811692 kB' 'KernelStack: 7752 kB' 'PageTables: 4312 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 98916 kB' 'Slab: 322252 kB' 'SReclaimable: 98916 kB' 'SUnreclaim: 223336 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:38.085 03:36:56 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.085 03:36:56 -- setup/common.sh@32 -- # continue 00:04:38.085 03:36:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.085 03:36:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.085 03:36:56 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.085 03:36:56 -- setup/common.sh@32 -- # continue 00:04:38.085 03:36:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.085 03:36:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.085 03:36:56 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.085 03:36:56 -- setup/common.sh@32 -- # continue 00:04:38.085 03:36:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.085 03:36:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.085 03:36:56 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.085 03:36:56 -- setup/common.sh@32 -- # continue 00:04:38.085 03:36:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.085 03:36:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.085 03:36:56 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.085 03:36:56 -- setup/common.sh@32 -- # continue 00:04:38.085 03:36:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.085 03:36:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.085 03:36:56 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.085 03:36:56 -- setup/common.sh@32 -- # continue 00:04:38.085 03:36:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.085 03:36:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.085 03:36:56 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.085 03:36:56 -- setup/common.sh@32 -- # continue 00:04:38.086 03:36:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.086 03:36:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.086 03:36:56 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.086 03:36:56 -- setup/common.sh@32 -- # continue 00:04:38.086 03:36:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.086 03:36:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.086 03:36:56 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.086 03:36:56 -- setup/common.sh@32 -- # continue 00:04:38.086 03:36:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.086 03:36:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.086 03:36:56 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.086 03:36:56 -- setup/common.sh@32 -- # continue 00:04:38.086 03:36:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.086 03:36:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.086 03:36:56 -- 
setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.086 03:36:56 -- setup/common.sh@32 -- # continue 00:04:38.086 03:36:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.086 03:36:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.086 03:36:56 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.086 03:36:56 -- setup/common.sh@32 -- # continue 00:04:38.086 03:36:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.086 03:36:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.086 03:36:56 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.086 03:36:56 -- setup/common.sh@32 -- # continue 00:04:38.086 03:36:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.086 03:36:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.086 03:36:56 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.086 03:36:56 -- setup/common.sh@32 -- # continue 00:04:38.086 03:36:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.086 03:36:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.086 03:36:56 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.086 03:36:56 -- setup/common.sh@32 -- # continue 00:04:38.086 03:36:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.086 03:36:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.086 03:36:56 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.086 03:36:56 -- setup/common.sh@32 -- # continue 00:04:38.086 03:36:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.086 03:36:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.086 03:36:56 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.086 03:36:56 -- setup/common.sh@32 -- # continue 00:04:38.086 03:36:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.086 03:36:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.086 03:36:56 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.086 03:36:56 -- setup/common.sh@32 -- # continue 00:04:38.086 03:36:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.086 03:36:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.086 03:36:56 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.086 03:36:56 -- setup/common.sh@32 -- # continue 00:04:38.086 03:36:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.086 03:36:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.086 03:36:56 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.086 03:36:56 -- setup/common.sh@32 -- # continue 00:04:38.086 03:36:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.086 03:36:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.086 03:36:56 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.086 03:36:56 -- setup/common.sh@32 -- # continue 00:04:38.086 03:36:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.086 03:36:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.086 03:36:56 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.086 03:36:56 -- setup/common.sh@32 -- # continue 00:04:38.086 03:36:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.086 03:36:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.086 03:36:56 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.086 03:36:56 -- setup/common.sh@32 -- # continue 00:04:38.086 03:36:56 -- setup/common.sh@31 -- # IFS=': ' 
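The remaining records of this test repeat the same field walk against /sys/devices/system/node/node0/meminfo, tally the per-node hugepage counts, and end with "node0=1024 expecting 1024". An illustrative sketch of that per-node tally follows; the real hugepages.sh logic differs in detail, and node_dir/nodes_sys are names chosen here for clarity.

declare -a nodes_sys=()
for node_dir in /sys/devices/system/node/node[0-9]*; do
    n=${node_dir##*node}
    # per-node meminfo lines look like "Node 0 HugePages_Total:  1024"
    nodes_sys[n]=$(awk '/HugePages_Total/ {print $NF}' "$node_dir/meminfo")
done
for n in "${!nodes_sys[@]}"; do
    echo "node$n has ${nodes_sys[n]} hugepages"
done
# On this runner the trace reports 1024 pages on node0 and 0 on node1, matching
# the "node0=1024 expecting 1024" result printed at the end of this test.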
00:04:38.086 03:36:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.086 03:36:56 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.086 03:36:56 -- setup/common.sh@32 -- # continue 00:04:38.086 03:36:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.086 03:36:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.086 03:36:56 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.086 03:36:56 -- setup/common.sh@32 -- # continue 00:04:38.086 03:36:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.086 03:36:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.086 03:36:56 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.086 03:36:56 -- setup/common.sh@32 -- # continue 00:04:38.086 03:36:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.086 03:36:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.086 03:36:56 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.086 03:36:56 -- setup/common.sh@32 -- # continue 00:04:38.086 03:36:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.086 03:36:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.086 03:36:56 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.086 03:36:56 -- setup/common.sh@32 -- # continue 00:04:38.086 03:36:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.086 03:36:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.086 03:36:56 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.086 03:36:56 -- setup/common.sh@32 -- # continue 00:04:38.086 03:36:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.086 03:36:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.086 03:36:56 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.086 03:36:56 -- setup/common.sh@32 -- # continue 00:04:38.086 03:36:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.086 03:36:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.086 03:36:56 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.086 03:36:56 -- setup/common.sh@32 -- # continue 00:04:38.086 03:36:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.086 03:36:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.086 03:36:56 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.086 03:36:56 -- setup/common.sh@32 -- # continue 00:04:38.086 03:36:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.086 03:36:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.086 03:36:56 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.086 03:36:56 -- setup/common.sh@32 -- # continue 00:04:38.086 03:36:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.086 03:36:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.086 03:36:56 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.086 03:36:56 -- setup/common.sh@32 -- # continue 00:04:38.086 03:36:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.086 03:36:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.086 03:36:56 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.086 03:36:56 -- setup/common.sh@32 -- # continue 00:04:38.086 03:36:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.086 03:36:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.086 03:36:56 -- setup/common.sh@32 -- # [[ HugePages_Free == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.086 03:36:56 -- setup/common.sh@32 -- # continue 00:04:38.086 03:36:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.086 03:36:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.086 03:36:56 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.086 03:36:56 -- setup/common.sh@33 -- # echo 0 00:04:38.086 03:36:56 -- setup/common.sh@33 -- # return 0 00:04:38.086 03:36:56 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:38.086 03:36:56 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:38.086 03:36:56 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:38.086 03:36:56 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:38.086 03:36:56 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:38.086 node0=1024 expecting 1024 00:04:38.086 03:36:56 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:38.086 00:04:38.086 real 0m2.423s 00:04:38.086 user 0m0.655s 00:04:38.086 sys 0m0.906s 00:04:38.086 03:36:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:38.086 03:36:56 -- common/autotest_common.sh@10 -- # set +x 00:04:38.086 ************************************ 00:04:38.086 END TEST default_setup 00:04:38.086 ************************************ 00:04:38.086 03:36:56 -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:04:38.086 03:36:56 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:38.086 03:36:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:38.086 03:36:56 -- common/autotest_common.sh@10 -- # set +x 00:04:38.086 ************************************ 00:04:38.086 START TEST per_node_1G_alloc 00:04:38.086 ************************************ 00:04:38.086 03:36:56 -- common/autotest_common.sh@1104 -- # per_node_1G_alloc 00:04:38.086 03:36:56 -- setup/hugepages.sh@143 -- # local IFS=, 00:04:38.086 03:36:56 -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:04:38.086 03:36:56 -- setup/hugepages.sh@49 -- # local size=1048576 00:04:38.086 03:36:56 -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:04:38.086 03:36:56 -- setup/hugepages.sh@51 -- # shift 00:04:38.086 03:36:56 -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:04:38.086 03:36:56 -- setup/hugepages.sh@52 -- # local node_ids 00:04:38.086 03:36:56 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:38.086 03:36:56 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:38.086 03:36:56 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:04:38.086 03:36:56 -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:04:38.086 03:36:56 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:38.086 03:36:56 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:38.086 03:36:56 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:38.086 03:36:56 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:38.086 03:36:56 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:38.086 03:36:56 -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:04:38.087 03:36:56 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:38.087 03:36:56 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:04:38.087 03:36:56 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:38.087 03:36:56 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:04:38.087 03:36:56 -- setup/hugepages.sh@73 -- # return 0 00:04:38.087 03:36:56 -- setup/hugepages.sh@146 -- # NRHUGE=512 00:04:38.087 
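The per_node_1G_alloc prologue traced just above requests 1048576 kB of hugepages spread over nodes 0 and 1; with the 2048 kB Hugepagesize reported in the meminfo dumps, that works out to 512 pages per listed node, i.e. NRHUGE=512 and HUGENODE=0,1. A sketch of that accounting follows; the variable names and the explicit division are illustrative assumptions consistent with the numbers in the trace, not the script's own code:
# Illustrative per-node hugepage bookkeeping matching the trace above; names are hypothetical.
size_kb=1048576                           # get_test_nr_hugepages 1048576 0 1
hugepage_kb=2048                          # 'Hugepagesize: 2048 kB' from the meminfo dumps (assumed divisor)
user_nodes=(0 1)                          # the trailing "0 1" arguments / HUGENODE=0,1
nr_hugepages=$((size_kb / hugepage_kb))   # 1048576 / 2048 = 512
declare -a nodes_test=()
for node in "${user_nodes[@]}"; do
    nodes_test[node]=$nr_hugepages        # 512 pages requested on each of node0 and node1
done
echo "NRHUGE=$nr_hugepages HUGENODE=${user_nodes[*]}"   # NRHUGE=512, nodes 0 and 1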
03:36:56 -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:04:38.087 03:36:56 -- setup/hugepages.sh@146 -- # setup output 00:04:38.087 03:36:56 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:38.087 03:36:56 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:39.469 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:39.469 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:39.469 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:39.469 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:39.469 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:39.469 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:39.469 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:39.469 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:39.469 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:39.469 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:39.469 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:39.469 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:39.469 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:39.469 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:39.469 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:39.469 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:39.469 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:39.469 03:36:58 -- setup/hugepages.sh@147 -- # nr_hugepages=1024 00:04:39.469 03:36:58 -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:04:39.470 03:36:58 -- setup/hugepages.sh@89 -- # local node 00:04:39.470 03:36:58 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:39.470 03:36:58 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:39.470 03:36:58 -- setup/hugepages.sh@92 -- # local surp 00:04:39.470 03:36:58 -- setup/hugepages.sh@93 -- # local resv 00:04:39.470 03:36:58 -- setup/hugepages.sh@94 -- # local anon 00:04:39.470 03:36:58 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:39.470 03:36:58 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:39.470 03:36:58 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:39.470 03:36:58 -- setup/common.sh@18 -- # local node= 00:04:39.470 03:36:58 -- setup/common.sh@19 -- # local var val 00:04:39.470 03:36:58 -- setup/common.sh@20 -- # local mem_f mem 00:04:39.470 03:36:58 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:39.470 03:36:58 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:39.470 03:36:58 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:39.470 03:36:58 -- setup/common.sh@28 -- # mapfile -t mem 00:04:39.470 03:36:58 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:39.470 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.470 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.470 03:36:58 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43314612 kB' 'MemAvailable: 46824252 kB' 'Buffers: 2704 kB' 'Cached: 12754020 kB' 'SwapCached: 0 kB' 'Active: 9771996 kB' 'Inactive: 3506552 kB' 'Active(anon): 9377644 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 525136 kB' 'Mapped: 206540 kB' 
'Shmem: 8855820 kB' 'KReclaimable: 204720 kB' 'Slab: 580804 kB' 'SReclaimable: 204720 kB' 'SUnreclaim: 376084 kB' 'KernelStack: 12816 kB' 'PageTables: 8372 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 10506548 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196628 kB' 'VmallocChunk: 0 kB' 'Percpu: 37632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1922652 kB' 'DirectMap2M: 15822848 kB' 'DirectMap1G: 51380224 kB' 00:04:39.470 03:36:58 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.470 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.470 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.470 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.470 03:36:58 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.470 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.470 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.470 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.470 03:36:58 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.470 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.470 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.470 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.470 03:36:58 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.470 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.470 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.470 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.470 03:36:58 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.470 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.470 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.470 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.470 03:36:58 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.470 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.470 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.470 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.470 03:36:58 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.470 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.470 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.470 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.470 03:36:58 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.470 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.470 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.470 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.470 03:36:58 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.470 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.470 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.470 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.470 03:36:58 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.470 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.470 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.470 03:36:58 -- setup/common.sh@31 -- # 
read -r var val _ 00:04:39.470 03:36:58 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.470 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.470 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.470 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.470 03:36:58 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.470 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.470 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.470 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.470 03:36:58 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.470 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.470 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.470 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.470 03:36:58 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.470 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.470 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.470 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.470 03:36:58 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.470 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.470 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.470 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.470 03:36:58 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.470 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.470 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.470 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.470 03:36:58 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.470 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.470 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.470 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.470 03:36:58 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.470 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.470 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.470 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.470 03:36:58 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.470 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.470 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.470 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.470 03:36:58 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.470 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.470 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.470 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.470 03:36:58 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.470 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.470 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.470 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.470 03:36:58 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.470 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.470 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.470 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.470 03:36:58 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.470 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.470 03:36:58 -- setup/common.sh@31 -- # IFS=': 
' 00:04:39.470 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.470 03:36:58 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.470 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.470 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.470 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.470 03:36:58 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.470 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.470 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.470 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.470 03:36:58 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.470 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.470 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.470 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.470 03:36:58 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.470 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.470 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.470 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.471 03:36:58 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.471 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.471 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.471 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.471 03:36:58 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.471 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.471 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.471 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.471 03:36:58 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.471 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.471 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.471 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.471 03:36:58 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.471 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.471 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.471 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.471 03:36:58 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.471 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.471 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.471 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.471 03:36:58 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.471 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.471 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.471 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.471 03:36:58 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.471 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.471 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.471 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.471 03:36:58 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.471 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.471 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.471 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.471 03:36:58 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.471 03:36:58 -- 
setup/common.sh@32 -- # continue 00:04:39.471 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.471 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.471 03:36:58 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.471 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.471 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.471 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.471 03:36:58 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.471 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.471 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.471 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.471 03:36:58 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.471 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.471 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.471 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.471 03:36:58 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.471 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.471 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.471 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.471 03:36:58 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.471 03:36:58 -- setup/common.sh@33 -- # echo 0 00:04:39.471 03:36:58 -- setup/common.sh@33 -- # return 0 00:04:39.471 03:36:58 -- setup/hugepages.sh@97 -- # anon=0 00:04:39.471 03:36:58 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:39.471 03:36:58 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:39.471 03:36:58 -- setup/common.sh@18 -- # local node= 00:04:39.471 03:36:58 -- setup/common.sh@19 -- # local var val 00:04:39.471 03:36:58 -- setup/common.sh@20 -- # local mem_f mem 00:04:39.471 03:36:58 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:39.471 03:36:58 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:39.471 03:36:58 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:39.471 03:36:58 -- setup/common.sh@28 -- # mapfile -t mem 00:04:39.471 03:36:58 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:39.471 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.471 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.471 03:36:58 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43314612 kB' 'MemAvailable: 46824252 kB' 'Buffers: 2704 kB' 'Cached: 12754024 kB' 'SwapCached: 0 kB' 'Active: 9771564 kB' 'Inactive: 3506552 kB' 'Active(anon): 9377212 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 524692 kB' 'Mapped: 206524 kB' 'Shmem: 8855824 kB' 'KReclaimable: 204720 kB' 'Slab: 580804 kB' 'SReclaimable: 204720 kB' 'SUnreclaim: 376084 kB' 'KernelStack: 12832 kB' 'PageTables: 8352 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 10506560 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196612 kB' 'VmallocChunk: 0 kB' 'Percpu: 37632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 
1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1922652 kB' 'DirectMap2M: 15822848 kB' 'DirectMap1G: 51380224 kB' 00:04:39.471 03:36:58 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.471 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.471 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.471 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.471 03:36:58 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.471 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.471 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.471 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.471 03:36:58 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.471 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.471 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.471 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.471 03:36:58 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.471 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.471 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.471 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.471 03:36:58 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.471 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.471 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.471 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.471 03:36:58 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.471 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.471 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.471 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.471 03:36:58 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.471 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.471 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.471 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.471 03:36:58 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.471 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.471 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.471 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.471 03:36:58 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.471 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.471 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.471 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.471 03:36:58 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.471 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.471 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.471 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.471 03:36:58 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.471 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.471 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.471 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.471 03:36:58 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.471 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.471 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.471 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.471 
03:36:58 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.471 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.471 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.471 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.471 03:36:58 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.471 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.471 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.471 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.471 03:36:58 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.471 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.471 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.471 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.471 03:36:58 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.471 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.471 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.471 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.471 03:36:58 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.471 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.471 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.471 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.471 03:36:58 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.471 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.471 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.471 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.471 03:36:58 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.471 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.471 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.471 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.471 03:36:58 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.471 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.471 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.471 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.471 03:36:58 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.471 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.471 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.471 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.471 03:36:58 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.471 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.472 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.472 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.472 03:36:58 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.472 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.472 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.472 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.472 03:36:58 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.472 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.472 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.472 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.472 03:36:58 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.472 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.472 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.472 
03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.472 03:36:58 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.472 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.472 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.472 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.472 03:36:58 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.472 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.472 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.472 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.472 03:36:58 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.472 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.472 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.472 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.472 03:36:58 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.472 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.472 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.472 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.472 03:36:58 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.472 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.472 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.472 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.472 03:36:58 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.472 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.472 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.472 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.472 03:36:58 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.472 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.472 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.472 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.472 03:36:58 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.472 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.472 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.472 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.472 03:36:58 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.472 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.472 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.472 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.472 03:36:58 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.472 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.472 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.472 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.472 03:36:58 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.472 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.472 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.472 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.472 03:36:58 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.472 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.472 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.472 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.472 03:36:58 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.472 
03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.472 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.472 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.472 03:36:58 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.472 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.472 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.472 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.472 03:36:58 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.472 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.472 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.472 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.472 03:36:58 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.472 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.472 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.472 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.472 03:36:58 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.472 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.472 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.472 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.472 03:36:58 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.472 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.472 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.472 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.472 03:36:58 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.472 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.472 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.472 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.472 03:36:58 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.472 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.472 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.472 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.472 03:36:58 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.472 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.472 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.472 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.472 03:36:58 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.472 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.472 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.472 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.472 03:36:58 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.472 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.472 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.472 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.472 03:36:58 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.472 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.472 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.472 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.472 03:36:58 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.472 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.472 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.472 03:36:58 -- setup/common.sh@31 -- # read 
-r var val _ 00:04:39.472 03:36:58 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.472 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.472 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.472 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.472 03:36:58 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.472 03:36:58 -- setup/common.sh@33 -- # echo 0 00:04:39.472 03:36:58 -- setup/common.sh@33 -- # return 0 00:04:39.472 03:36:58 -- setup/hugepages.sh@99 -- # surp=0 00:04:39.472 03:36:58 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:39.472 03:36:58 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:39.472 03:36:58 -- setup/common.sh@18 -- # local node= 00:04:39.472 03:36:58 -- setup/common.sh@19 -- # local var val 00:04:39.472 03:36:58 -- setup/common.sh@20 -- # local mem_f mem 00:04:39.472 03:36:58 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:39.472 03:36:58 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:39.472 03:36:58 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:39.472 03:36:58 -- setup/common.sh@28 -- # mapfile -t mem 00:04:39.472 03:36:58 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:39.472 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.472 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.472 03:36:58 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43314732 kB' 'MemAvailable: 46824372 kB' 'Buffers: 2704 kB' 'Cached: 12754036 kB' 'SwapCached: 0 kB' 'Active: 9771488 kB' 'Inactive: 3506552 kB' 'Active(anon): 9377136 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 524580 kB' 'Mapped: 206524 kB' 'Shmem: 8855836 kB' 'KReclaimable: 204720 kB' 'Slab: 580860 kB' 'SReclaimable: 204720 kB' 'SUnreclaim: 376140 kB' 'KernelStack: 12816 kB' 'PageTables: 8328 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 10506572 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196612 kB' 'VmallocChunk: 0 kB' 'Percpu: 37632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1922652 kB' 'DirectMap2M: 15822848 kB' 'DirectMap1G: 51380224 kB' 00:04:39.472 03:36:58 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.472 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.472 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.472 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.472 03:36:58 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.472 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.472 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.472 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.472 03:36:58 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.472 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.472 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.472 03:36:58 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:39.472 03:36:58 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.472 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.472 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.472 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.472 03:36:58 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.473 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.473 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.473 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.473 03:36:58 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.473 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.473 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.473 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.473 03:36:58 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.473 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.473 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.473 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.473 03:36:58 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.473 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.473 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.473 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.473 03:36:58 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.473 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.473 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.473 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.473 03:36:58 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.473 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.473 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.473 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.473 03:36:58 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.473 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.473 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.473 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.473 03:36:58 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.473 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.473 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.473 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.473 03:36:58 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.473 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.473 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.473 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.473 03:36:58 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.473 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.473 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.473 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.473 03:36:58 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.473 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.473 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.473 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.473 03:36:58 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.473 03:36:58 -- setup/common.sh@32 -- # 
continue 00:04:39.473 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.473 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.473 03:36:58 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.473 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.473 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.473 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.473 03:36:58 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.473 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.473 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.473 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.473 03:36:58 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.473 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.473 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.473 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.473 03:36:58 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.473 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.473 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.473 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.473 03:36:58 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.473 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.473 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.473 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.473 03:36:58 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.473 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.473 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.473 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.473 03:36:58 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.473 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.473 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.473 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.473 03:36:58 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.473 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.473 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.473 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.473 03:36:58 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.473 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.473 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.473 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.473 03:36:58 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.473 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.473 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.473 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.473 03:36:58 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.473 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.473 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.473 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.473 03:36:58 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.473 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.473 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.473 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.473 03:36:58 -- setup/common.sh@32 -- # [[ PageTables == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.473 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.473 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.473 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.473 03:36:58 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.473 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.473 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.473 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.473 03:36:58 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.473 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.473 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.473 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.473 03:36:58 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.473 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.473 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.473 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.473 03:36:58 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.473 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.473 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.473 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.473 03:36:58 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.473 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.473 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.473 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.473 03:36:58 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.473 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.473 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.473 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.473 03:36:58 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.473 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.473 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.473 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.473 03:36:58 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.473 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.473 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.473 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.473 03:36:58 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.473 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.473 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.473 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.473 03:36:58 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.473 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.473 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.473 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.473 03:36:58 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.473 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.473 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.473 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.473 03:36:58 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.473 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.473 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.473 
03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.473 03:36:58 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.473 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.473 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.473 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.473 03:36:58 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.473 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.473 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.473 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.473 03:36:58 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.473 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.473 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.473 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.473 03:36:58 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.473 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.473 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.473 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.473 03:36:58 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.473 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.473 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.473 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.473 03:36:58 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.473 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.473 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.473 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.474 03:36:58 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.474 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.474 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.474 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.474 03:36:58 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.474 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.474 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.474 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.474 03:36:58 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.474 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.474 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.474 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.474 03:36:58 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.474 03:36:58 -- setup/common.sh@33 -- # echo 0 00:04:39.474 03:36:58 -- setup/common.sh@33 -- # return 0 00:04:39.474 03:36:58 -- setup/hugepages.sh@100 -- # resv=0 00:04:39.474 03:36:58 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:39.474 nr_hugepages=1024 00:04:39.474 03:36:58 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:39.474 resv_hugepages=0 00:04:39.474 03:36:58 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:39.474 surplus_hugepages=0 00:04:39.474 03:36:58 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:39.474 anon_hugepages=0 00:04:39.474 03:36:58 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:39.474 03:36:58 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:39.474 03:36:58 -- setup/hugepages.sh@110 -- # get_meminfo 
HugePages_Total 00:04:39.474 03:36:58 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:39.474 03:36:58 -- setup/common.sh@18 -- # local node= 00:04:39.474 03:36:58 -- setup/common.sh@19 -- # local var val 00:04:39.474 03:36:58 -- setup/common.sh@20 -- # local mem_f mem 00:04:39.474 03:36:58 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:39.474 03:36:58 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:39.474 03:36:58 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:39.474 03:36:58 -- setup/common.sh@28 -- # mapfile -t mem 00:04:39.474 03:36:58 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:39.474 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.474 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.474 03:36:58 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43314988 kB' 'MemAvailable: 46824628 kB' 'Buffers: 2704 kB' 'Cached: 12754052 kB' 'SwapCached: 0 kB' 'Active: 9771516 kB' 'Inactive: 3506552 kB' 'Active(anon): 9377164 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 524576 kB' 'Mapped: 206524 kB' 'Shmem: 8855852 kB' 'KReclaimable: 204720 kB' 'Slab: 580860 kB' 'SReclaimable: 204720 kB' 'SUnreclaim: 376140 kB' 'KernelStack: 12816 kB' 'PageTables: 8328 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 10506588 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196612 kB' 'VmallocChunk: 0 kB' 'Percpu: 37632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1922652 kB' 'DirectMap2M: 15822848 kB' 'DirectMap1G: 51380224 kB' 00:04:39.474 03:36:58 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.474 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.474 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.474 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.474 03:36:58 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.474 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.474 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.474 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.474 03:36:58 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.474 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.474 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.474 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.474 03:36:58 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.474 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.474 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.474 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.474 03:36:58 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.474 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.474 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.474 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.474 03:36:58 -- 
setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.474 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.474 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.474 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.474 03:36:58 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.474 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.474 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.474 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.474 03:36:58 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.474 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.474 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.474 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.474 03:36:58 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.474 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.474 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.474 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.474 03:36:58 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.474 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.474 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.474 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.474 03:36:58 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.474 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.474 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.474 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.474 03:36:58 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.474 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.474 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.474 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.474 03:36:58 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.474 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.474 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.474 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.474 03:36:58 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.474 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.474 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.474 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.474 03:36:58 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.474 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.474 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.474 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.474 03:36:58 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.474 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.474 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.474 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.474 03:36:58 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.474 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.474 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.474 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.474 03:36:58 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.474 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.474 03:36:58 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:39.474 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.474 03:36:58 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.474 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.474 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.474 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.474 03:36:58 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.474 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.474 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.474 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.474 03:36:58 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.475 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.475 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.475 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.475 03:36:58 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.475 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.475 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.475 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.475 03:36:58 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.475 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.475 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.475 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.475 03:36:58 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.475 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.475 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.475 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.475 03:36:58 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.475 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.475 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.475 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.475 03:36:58 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.475 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.475 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.475 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.475 03:36:58 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.475 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.475 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.475 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.475 03:36:58 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.475 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.475 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.475 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.475 03:36:58 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.475 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.475 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.475 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.475 03:36:58 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.475 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.475 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.475 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.475 03:36:58 -- setup/common.sh@32 -- # [[ NFS_Unstable == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.475 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.475 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.475 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.475 03:36:58 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.475 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.475 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.475 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.475 03:36:58 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.475 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.475 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.475 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.475 03:36:58 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.475 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.475 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.475 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.475 03:36:58 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.475 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.475 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.475 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.475 03:36:58 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.475 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.475 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.475 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.475 03:36:58 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.475 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.475 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.475 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.475 03:36:58 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.475 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.475 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.475 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.475 03:36:58 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.475 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.475 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.475 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.475 03:36:58 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.475 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.475 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.475 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.475 03:36:58 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.475 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.475 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.475 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.475 03:36:58 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.475 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.475 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.475 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.475 03:36:58 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.475 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.475 03:36:58 -- setup/common.sh@31 -- 
# IFS=': ' 00:04:39.475 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.475 03:36:58 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.475 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.475 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.475 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.475 03:36:58 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.475 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.475 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.475 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.475 03:36:58 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.475 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.475 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.475 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.475 03:36:58 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.475 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.475 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.475 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.475 03:36:58 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.475 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.475 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.475 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.475 03:36:58 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.475 03:36:58 -- setup/common.sh@33 -- # echo 1024 00:04:39.475 03:36:58 -- setup/common.sh@33 -- # return 0 00:04:39.475 03:36:58 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:39.475 03:36:58 -- setup/hugepages.sh@112 -- # get_nodes 00:04:39.475 03:36:58 -- setup/hugepages.sh@27 -- # local node 00:04:39.475 03:36:58 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:39.475 03:36:58 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:39.475 03:36:58 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:39.475 03:36:58 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:39.475 03:36:58 -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:39.475 03:36:58 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:39.475 03:36:58 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:39.475 03:36:58 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:39.475 03:36:58 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:39.475 03:36:58 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:39.475 03:36:58 -- setup/common.sh@18 -- # local node=0 00:04:39.475 03:36:58 -- setup/common.sh@19 -- # local var val 00:04:39.475 03:36:58 -- setup/common.sh@20 -- # local mem_f mem 00:04:39.475 03:36:58 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:39.475 03:36:58 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:39.475 03:36:58 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:39.475 03:36:58 -- setup/common.sh@28 -- # mapfile -t mem 00:04:39.475 03:36:58 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:39.475 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.475 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.475 03:36:58 -- setup/common.sh@16 -- # printf '%s\n' 
'MemTotal: 32829884 kB' 'MemFree: 26334776 kB' 'MemUsed: 6495108 kB' 'SwapCached: 0 kB' 'Active: 4162064 kB' 'Inactive: 108416 kB' 'Active(anon): 4051176 kB' 'Inactive(anon): 0 kB' 'Active(file): 110888 kB' 'Inactive(file): 108416 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 4030996 kB' 'Mapped: 36368 kB' 'AnonPages: 242672 kB' 'Shmem: 3811692 kB' 'KernelStack: 7784 kB' 'PageTables: 4408 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 98916 kB' 'Slab: 322140 kB' 'SReclaimable: 98916 kB' 'SUnreclaim: 223224 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:39.475 03:36:58 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.475 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.475 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.475 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.475 03:36:58 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.475 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.475 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.475 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.475 03:36:58 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.475 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.475 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.475 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.475 03:36:58 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.475 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.475 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.475 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.475 03:36:58 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.475 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.475 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.475 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.475 03:36:58 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.475 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.476 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.476 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.476 03:36:58 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.476 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.476 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.476 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.476 03:36:58 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.476 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.476 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.476 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.476 03:36:58 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.476 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.476 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.476 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.476 03:36:58 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.476 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.476 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 
00:04:39.476 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.476 03:36:58 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.476 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.476 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.476 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.476 03:36:58 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.476 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.476 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.476 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.476 03:36:58 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.476 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.476 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.476 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.476 03:36:58 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.476 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.476 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.476 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.476 03:36:58 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.476 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.476 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.476 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.476 03:36:58 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.476 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.476 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.476 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.476 03:36:58 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.476 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.476 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.476 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.476 03:36:58 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.476 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.476 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.476 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.476 03:36:58 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.476 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.476 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.476 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.476 03:36:58 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.476 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.476 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.476 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.476 03:36:58 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.476 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.476 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.476 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.476 03:36:58 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.476 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.476 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.476 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.476 03:36:58 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.476 03:36:58 -- 
setup/common.sh@32 -- # continue 00:04:39.476 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.476 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.476 03:36:58 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.476 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.476 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.476 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.476 03:36:58 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.476 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.476 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.476 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.476 03:36:58 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.476 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.476 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.476 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.476 03:36:58 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.476 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.476 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.476 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.476 03:36:58 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.476 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.476 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.476 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.476 03:36:58 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.476 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.476 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.476 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.476 03:36:58 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.476 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.476 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.476 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.476 03:36:58 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.476 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.476 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.476 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.476 03:36:58 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.476 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.476 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.476 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.476 03:36:58 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.476 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.476 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.476 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.476 03:36:58 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.476 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.476 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.476 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.476 03:36:58 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.476 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.736 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.736 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 
00:04:39.736 03:36:58 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.736 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.736 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.736 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.736 03:36:58 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.736 03:36:58 -- setup/common.sh@33 -- # echo 0 00:04:39.736 03:36:58 -- setup/common.sh@33 -- # return 0 00:04:39.736 03:36:58 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:39.736 03:36:58 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:39.736 03:36:58 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:39.736 03:36:58 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:39.736 03:36:58 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:39.736 03:36:58 -- setup/common.sh@18 -- # local node=1 00:04:39.736 03:36:58 -- setup/common.sh@19 -- # local var val 00:04:39.736 03:36:58 -- setup/common.sh@20 -- # local mem_f mem 00:04:39.736 03:36:58 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:39.737 03:36:58 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:39.737 03:36:58 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:39.737 03:36:58 -- setup/common.sh@28 -- # mapfile -t mem 00:04:39.737 03:36:58 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:39.737 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.737 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.737 03:36:58 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27711824 kB' 'MemFree: 16981060 kB' 'MemUsed: 10730764 kB' 'SwapCached: 0 kB' 'Active: 5609240 kB' 'Inactive: 3398136 kB' 'Active(anon): 5325776 kB' 'Inactive(anon): 0 kB' 'Active(file): 283464 kB' 'Inactive(file): 3398136 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8725784 kB' 'Mapped: 170156 kB' 'AnonPages: 281660 kB' 'Shmem: 5044184 kB' 'KernelStack: 5000 kB' 'PageTables: 3816 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 105804 kB' 'Slab: 258720 kB' 'SReclaimable: 105804 kB' 'SUnreclaim: 152916 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:39.737 03:36:58 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.737 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.737 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.737 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.737 03:36:58 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.737 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.737 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.737 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.737 03:36:58 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.737 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.737 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.737 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.737 03:36:58 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.737 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.737 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 
00:04:39.737 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.737 03:36:58 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.737 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.737 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.737 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.737 03:36:58 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.737 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.737 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.737 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.737 03:36:58 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.737 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.737 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.737 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.737 03:36:58 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.737 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.737 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.737 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.737 03:36:58 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.737 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.737 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.737 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.737 03:36:58 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.737 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.737 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.737 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.737 03:36:58 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.737 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.737 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.737 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.737 03:36:58 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.737 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.737 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.737 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.737 03:36:58 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.737 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.737 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.737 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.737 03:36:58 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.737 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.737 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.737 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.737 03:36:58 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.737 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.737 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.737 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.737 03:36:58 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.737 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.737 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.737 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.737 03:36:58 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.737 03:36:58 -- 
setup/common.sh@32 -- # continue 00:04:39.737 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.737 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.737 03:36:58 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.737 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.737 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.737 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.737 03:36:58 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.737 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.737 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.737 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.737 03:36:58 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.737 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.737 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.737 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.737 03:36:58 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.737 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.737 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.737 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.737 03:36:58 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.737 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.737 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.737 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.737 03:36:58 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.737 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.737 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.737 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.737 03:36:58 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.737 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.737 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.737 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.737 03:36:58 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.737 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.737 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.737 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.737 03:36:58 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.737 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.737 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.737 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.737 03:36:58 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.737 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.737 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.737 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.737 03:36:58 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.737 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.737 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.737 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.737 03:36:58 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.737 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.737 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.737 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.737 03:36:58 -- 
setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.737 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.737 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.737 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.737 03:36:58 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.737 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.737 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.737 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.737 03:36:58 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.737 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.737 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.737 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.737 03:36:58 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.737 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.737 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.737 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.737 03:36:58 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.737 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.737 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.737 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.737 03:36:58 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.737 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.737 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.737 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.737 03:36:58 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.737 03:36:58 -- setup/common.sh@32 -- # continue 00:04:39.737 03:36:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.737 03:36:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.737 03:36:58 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.737 03:36:58 -- setup/common.sh@33 -- # echo 0 00:04:39.737 03:36:58 -- setup/common.sh@33 -- # return 0 00:04:39.737 03:36:58 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:39.737 03:36:58 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:39.737 03:36:58 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:39.737 03:36:58 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:39.737 03:36:58 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:39.737 node0=512 expecting 512 00:04:39.737 03:36:58 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:39.738 03:36:58 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:39.738 03:36:58 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:39.738 03:36:58 -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:04:39.738 node1=512 expecting 512 00:04:39.738 03:36:58 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:39.738 00:04:39.738 real 0m1.451s 00:04:39.738 user 0m0.650s 00:04:39.738 sys 0m0.767s 00:04:39.738 03:36:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:39.738 03:36:58 -- common/autotest_common.sh@10 -- # set +x 00:04:39.738 ************************************ 00:04:39.738 END TEST per_node_1G_alloc 00:04:39.738 ************************************ 00:04:39.738 03:36:58 -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:04:39.738 
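The wall of "[[ <key> == HugePages_Surp ]] / continue" lines above is setup/common.sh's meminfo lookup running under xtrace: it reads /proc/meminfo (or a node's /sys/devices/system/node/nodeN/meminfo), walks every field, and echoes the value of the single requested key. A condensed, illustrative sketch of that lookup (hypothetical helper name and simplified flow, not the actual SPDK script) looks roughly like this:

    # Sketch only: print one meminfo field, optionally for a given NUMA node.
    # Field names, the node argument, and the behaviour are inferred from the
    # trace above, not taken from setup/common.sh itself.
    get_meminfo_sketch() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        local var val _
        # Per-node files prefix each line with "Node <n> "; strip that,
        # then split on ': ' and stop at the first matching key.
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < <(sed -E 's/^Node [0-9]+ //' "$mem_f")
        return 1
    }
    # e.g. get_meminfo_sketch HugePages_Surp 0    # prints 0 on this host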
03:36:58 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:39.738 03:36:58 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:39.738 03:36:58 -- common/autotest_common.sh@10 -- # set +x 00:04:39.738 ************************************ 00:04:39.738 START TEST even_2G_alloc 00:04:39.738 ************************************ 00:04:39.738 03:36:58 -- common/autotest_common.sh@1104 -- # even_2G_alloc 00:04:39.738 03:36:58 -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:04:39.738 03:36:58 -- setup/hugepages.sh@49 -- # local size=2097152 00:04:39.738 03:36:58 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:39.738 03:36:58 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:39.738 03:36:58 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:39.738 03:36:58 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:39.738 03:36:58 -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:39.738 03:36:58 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:39.738 03:36:58 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:39.738 03:36:58 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:39.738 03:36:58 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:39.738 03:36:58 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:39.738 03:36:58 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:39.738 03:36:58 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:39.738 03:36:58 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:39.738 03:36:58 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:39.738 03:36:58 -- setup/hugepages.sh@83 -- # : 512 00:04:39.738 03:36:58 -- setup/hugepages.sh@84 -- # : 1 00:04:39.738 03:36:58 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:39.738 03:36:58 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:39.738 03:36:58 -- setup/hugepages.sh@83 -- # : 0 00:04:39.738 03:36:58 -- setup/hugepages.sh@84 -- # : 0 00:04:39.738 03:36:58 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:39.738 03:36:58 -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:04:39.738 03:36:58 -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:04:39.738 03:36:58 -- setup/hugepages.sh@153 -- # setup output 00:04:39.738 03:36:58 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:39.738 03:36:58 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:41.121 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:41.121 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:41.121 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:41.121 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:41.121 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:41.121 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:41.121 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:41.121 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:41.121 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:41.121 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:41.121 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:41.121 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:41.121 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:41.121 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:41.121 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:41.121 0000:80:04.1 (8086 0e21): 
Already using the vfio-pci driver 00:04:41.121 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:41.121 03:36:59 -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:04:41.121 03:36:59 -- setup/hugepages.sh@89 -- # local node 00:04:41.121 03:36:59 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:41.121 03:36:59 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:41.121 03:36:59 -- setup/hugepages.sh@92 -- # local surp 00:04:41.121 03:36:59 -- setup/hugepages.sh@93 -- # local resv 00:04:41.121 03:36:59 -- setup/hugepages.sh@94 -- # local anon 00:04:41.121 03:36:59 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:41.121 03:36:59 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:41.121 03:36:59 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:41.121 03:36:59 -- setup/common.sh@18 -- # local node= 00:04:41.121 03:36:59 -- setup/common.sh@19 -- # local var val 00:04:41.121 03:36:59 -- setup/common.sh@20 -- # local mem_f mem 00:04:41.121 03:36:59 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:41.121 03:36:59 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:41.121 03:36:59 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:41.121 03:36:59 -- setup/common.sh@28 -- # mapfile -t mem 00:04:41.121 03:36:59 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:41.121 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.121 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.121 03:36:59 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43309828 kB' 'MemAvailable: 46819468 kB' 'Buffers: 2704 kB' 'Cached: 12754120 kB' 'SwapCached: 0 kB' 'Active: 9771808 kB' 'Inactive: 3506552 kB' 'Active(anon): 9377456 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 524768 kB' 'Mapped: 206556 kB' 'Shmem: 8855920 kB' 'KReclaimable: 204720 kB' 'Slab: 580856 kB' 'SReclaimable: 204720 kB' 'SUnreclaim: 376136 kB' 'KernelStack: 12800 kB' 'PageTables: 8236 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 10506776 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196740 kB' 'VmallocChunk: 0 kB' 'Percpu: 37632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1922652 kB' 'DirectMap2M: 15822848 kB' 'DirectMap1G: 51380224 kB' 00:04:41.121 03:36:59 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.121 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.121 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.121 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.121 03:36:59 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.121 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.121 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.121 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.121 03:36:59 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.121 03:36:59 -- setup/common.sh@32 -- # continue 
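The even_2G_alloc setup traced above requests 2097152 kB of hugepages; with the host's 2048 kB hugepage size and two NUMA nodes, that works out to 1024 pages split evenly, 512 per node, which is what the nodes_test[...]=512 assignments in the trace reflect. A tiny illustrative calculation (values copied from this log, not from the SPDK scripts):

    # Illustration only: the even split behind NRHUGE=1024 / HUGE_EVEN_ALLOC=yes.
    size_kb=2097152                               # requested total, from the log
    hugepage_kb=2048                              # Hugepagesize reported in meminfo
    no_nodes=2                                    # NUMA nodes on this host
    nr_hugepages=$(( size_kb / hugepage_kb ))     # 1024
    per_node=$(( nr_hugepages / no_nodes ))       # 512
    echo "nr_hugepages=$nr_hugepages (${per_node} per node)"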
00:04:41.121 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.121 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.121 03:36:59 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.121 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.121 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.122 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.122 03:36:59 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.122 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.122 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.122 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.122 03:36:59 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.122 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.122 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.122 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.122 03:36:59 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.122 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.122 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.122 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.122 03:36:59 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.122 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.122 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.122 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.122 03:36:59 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.122 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.122 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.122 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.122 03:36:59 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.122 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.122 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.122 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.122 03:36:59 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.122 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.122 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.122 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.122 03:36:59 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.122 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.122 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.122 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.122 03:36:59 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.122 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.122 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.122 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.122 03:36:59 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.122 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.122 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.122 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.122 03:36:59 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.122 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.122 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.122 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.122 03:36:59 -- setup/common.sh@32 -- # [[ SwapFree == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.122 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.122 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.122 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.122 03:36:59 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.122 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.122 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.122 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.122 03:36:59 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.122 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.122 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.122 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.122 03:36:59 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.122 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.122 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.122 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.122 03:36:59 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.122 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.122 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.122 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.122 03:36:59 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.122 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.122 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.122 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.122 03:36:59 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.122 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.122 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.122 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.122 03:36:59 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.122 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.122 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.122 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.122 03:36:59 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.122 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.122 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.122 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.122 03:36:59 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.122 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.122 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.122 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.122 03:36:59 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.122 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.122 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.122 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.122 03:36:59 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.122 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.122 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.122 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.122 03:36:59 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.122 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.122 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.122 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.122 
03:36:59 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.122 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.122 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.122 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.122 03:36:59 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.122 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.122 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.122 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.122 03:36:59 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.122 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.122 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.122 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.122 03:36:59 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.122 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.122 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.122 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.122 03:36:59 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.122 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.122 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.122 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.122 03:36:59 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.122 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.122 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.122 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.122 03:36:59 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.122 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.122 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.122 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.122 03:36:59 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.122 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.122 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.122 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.122 03:36:59 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.122 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.122 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.122 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.122 03:36:59 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.123 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.123 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.123 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.123 03:36:59 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.123 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.123 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.123 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.123 03:36:59 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.123 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.123 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.123 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.123 03:36:59 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.123 03:36:59 -- setup/common.sh@33 -- # echo 0 00:04:41.123 03:36:59 -- setup/common.sh@33 -- # 
return 0 00:04:41.123 03:36:59 -- setup/hugepages.sh@97 -- # anon=0 00:04:41.123 03:36:59 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:41.123 03:36:59 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:41.123 03:36:59 -- setup/common.sh@18 -- # local node= 00:04:41.123 03:36:59 -- setup/common.sh@19 -- # local var val 00:04:41.123 03:36:59 -- setup/common.sh@20 -- # local mem_f mem 00:04:41.123 03:36:59 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:41.123 03:36:59 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:41.123 03:36:59 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:41.123 03:36:59 -- setup/common.sh@28 -- # mapfile -t mem 00:04:41.123 03:36:59 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:41.123 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.123 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.123 03:36:59 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43317016 kB' 'MemAvailable: 46826656 kB' 'Buffers: 2704 kB' 'Cached: 12754120 kB' 'SwapCached: 0 kB' 'Active: 9772432 kB' 'Inactive: 3506552 kB' 'Active(anon): 9378080 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 525456 kB' 'Mapped: 206632 kB' 'Shmem: 8855920 kB' 'KReclaimable: 204720 kB' 'Slab: 580956 kB' 'SReclaimable: 204720 kB' 'SUnreclaim: 376236 kB' 'KernelStack: 12832 kB' 'PageTables: 8340 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 10506788 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196692 kB' 'VmallocChunk: 0 kB' 'Percpu: 37632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1922652 kB' 'DirectMap2M: 15822848 kB' 'DirectMap1G: 51380224 kB' 00:04:41.123 03:36:59 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.123 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.123 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.123 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.123 03:36:59 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.123 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.123 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.123 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.123 03:36:59 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.123 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.123 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.123 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.123 03:36:59 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.123 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.123 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.123 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.123 03:36:59 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.123 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.123 03:36:59 -- setup/common.sh@31 
-- # IFS=': ' 00:04:41.123 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.123 03:36:59 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.123 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.123 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.123 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.123 03:36:59 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.123 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.123 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.123 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.123 03:36:59 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.123 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.123 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.123 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.123 03:36:59 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.123 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.123 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.123 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.123 03:36:59 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.123 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.123 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.123 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.123 03:36:59 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.123 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.123 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.123 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.123 03:36:59 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.123 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.123 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.123 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.123 03:36:59 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.123 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.123 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.123 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.123 03:36:59 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.123 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.123 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.123 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.123 03:36:59 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.123 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.123 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.123 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.123 03:36:59 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.123 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.123 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.123 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.123 03:36:59 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.123 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.123 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.123 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.123 03:36:59 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:41.123 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.123 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.123 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.123 03:36:59 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.123 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.123 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.123 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.123 03:36:59 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.123 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.123 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.123 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.123 03:36:59 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.123 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.123 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.123 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.123 03:36:59 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.123 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.123 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.123 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.123 03:36:59 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.124 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.124 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.124 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.124 03:36:59 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.124 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.124 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.124 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.124 03:36:59 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.124 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.124 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.124 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.124 03:36:59 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.124 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.124 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.124 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.124 03:36:59 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.124 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.124 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.124 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.124 03:36:59 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.124 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.124 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.124 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.124 03:36:59 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.124 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.124 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.124 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.124 03:36:59 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.124 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.124 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.124 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.124 
03:36:59 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.124 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.124 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.124 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.124 03:36:59 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.124 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.124 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.124 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.124 03:36:59 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.124 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.124 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.124 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.124 03:36:59 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.124 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.124 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.124 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.124 03:36:59 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.124 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.124 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.124 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.124 03:36:59 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.124 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.124 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.124 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.124 03:36:59 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.124 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.124 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.124 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.124 03:36:59 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.124 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.124 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.124 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.124 03:36:59 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.124 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.124 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.124 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.124 03:36:59 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.124 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.124 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.124 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.124 03:36:59 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.124 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.124 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.124 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.124 03:36:59 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.124 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.124 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.124 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.124 03:36:59 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.124 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.124 
03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.124 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.124 03:36:59 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.124 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.124 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.124 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.124 03:36:59 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.124 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.124 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.124 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.124 03:36:59 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.124 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.124 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.124 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.124 03:36:59 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.124 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.124 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.124 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.124 03:36:59 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.124 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.124 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.124 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.124 03:36:59 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.124 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.124 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.124 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.124 03:36:59 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.124 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.124 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.124 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.124 03:36:59 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.124 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.124 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.124 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.124 03:36:59 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.124 03:36:59 -- setup/common.sh@33 -- # echo 0 00:04:41.125 03:36:59 -- setup/common.sh@33 -- # return 0 00:04:41.125 03:36:59 -- setup/hugepages.sh@99 -- # surp=0 00:04:41.125 03:36:59 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:41.125 03:36:59 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:41.125 03:36:59 -- setup/common.sh@18 -- # local node= 00:04:41.125 03:36:59 -- setup/common.sh@19 -- # local var val 00:04:41.125 03:36:59 -- setup/common.sh@20 -- # local mem_f mem 00:04:41.125 03:36:59 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:41.125 03:36:59 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:41.125 03:36:59 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:41.125 03:36:59 -- setup/common.sh@28 -- # mapfile -t mem 00:04:41.125 03:36:59 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:41.125 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.125 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.125 03:36:59 -- setup/common.sh@16 -- # printf '%s\n' 
'MemTotal: 60541708 kB' 'MemFree: 43317068 kB' 'MemAvailable: 46826708 kB' 'Buffers: 2704 kB' 'Cached: 12754120 kB' 'SwapCached: 0 kB' 'Active: 9771404 kB' 'Inactive: 3506552 kB' 'Active(anon): 9377052 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 524396 kB' 'Mapped: 206528 kB' 'Shmem: 8855920 kB' 'KReclaimable: 204720 kB' 'Slab: 580972 kB' 'SReclaimable: 204720 kB' 'SUnreclaim: 376252 kB' 'KernelStack: 12848 kB' 'PageTables: 8328 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 10506800 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196692 kB' 'VmallocChunk: 0 kB' 'Percpu: 37632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1922652 kB' 'DirectMap2M: 15822848 kB' 'DirectMap1G: 51380224 kB' 00:04:41.125 03:36:59 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.125 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.125 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.125 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.125 03:36:59 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.125 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.125 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.125 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.125 03:36:59 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.125 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.125 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.125 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.125 03:36:59 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.125 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.125 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.125 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.125 03:36:59 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.125 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.125 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.125 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.125 03:36:59 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.125 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.125 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.125 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.125 03:36:59 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.125 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.125 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.125 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.125 03:36:59 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.125 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.125 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.125 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.125 03:36:59 -- setup/common.sh@32 -- # [[ 
Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.125 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.125 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.125 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.125 03:36:59 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.125 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.125 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.125 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.125 03:36:59 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.125 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.125 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.125 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.125 03:36:59 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.125 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.125 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.125 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.125 03:36:59 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.125 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.125 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.125 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.125 03:36:59 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.125 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.125 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.125 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.125 03:36:59 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.125 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.125 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.125 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.125 03:36:59 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.125 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.125 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.125 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.125 03:36:59 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.125 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.125 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.125 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.125 03:36:59 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.125 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.125 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.125 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.125 03:36:59 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.125 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.125 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.125 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.125 03:36:59 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.125 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.125 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.125 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.125 03:36:59 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.125 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.125 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.125 03:36:59 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:41.125 03:36:59 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.125 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.125 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.125 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.125 03:36:59 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.125 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.125 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.126 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.126 03:36:59 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.126 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.126 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.126 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.126 03:36:59 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.126 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.126 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.126 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.126 03:36:59 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.126 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.126 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.126 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.126 03:36:59 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.126 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.126 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.126 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.126 03:36:59 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.126 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.126 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.126 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.126 03:36:59 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.126 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.126 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.126 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.126 03:36:59 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.126 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.126 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.126 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.126 03:36:59 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.126 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.126 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.126 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.126 03:36:59 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.126 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.126 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.126 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.126 03:36:59 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.126 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.126 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.126 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.126 03:36:59 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.126 03:36:59 -- setup/common.sh@32 -- # 
continue 00:04:41.126 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.126 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.126 03:36:59 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.126 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.126 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.126 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.126 03:36:59 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.126 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.126 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.126 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.126 03:36:59 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.126 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.126 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.126 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.126 03:36:59 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.126 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.126 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.126 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.126 03:36:59 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.126 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.126 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.126 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.126 03:36:59 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.126 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.126 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.126 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.126 03:36:59 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.126 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.126 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.126 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.126 03:36:59 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.126 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.126 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.126 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.126 03:36:59 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.126 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.126 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.126 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.126 03:36:59 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.126 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.126 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.126 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.126 03:36:59 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.126 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.126 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.126 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.126 03:36:59 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.126 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.126 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.126 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.126 03:36:59 
-- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.126 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.126 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.126 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.126 03:36:59 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.126 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.126 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.126 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.126 03:36:59 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.126 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.126 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.126 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.126 03:36:59 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.126 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.126 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.126 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.126 03:36:59 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.126 03:36:59 -- setup/common.sh@33 -- # echo 0 00:04:41.126 03:36:59 -- setup/common.sh@33 -- # return 0 00:04:41.126 03:36:59 -- setup/hugepages.sh@100 -- # resv=0 00:04:41.126 03:36:59 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:41.126 nr_hugepages=1024 00:04:41.126 03:36:59 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:41.126 resv_hugepages=0 00:04:41.126 03:36:59 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:41.126 surplus_hugepages=0 00:04:41.126 03:36:59 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:41.126 anon_hugepages=0 00:04:41.126 03:36:59 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:41.126 03:36:59 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:41.126 03:36:59 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:41.126 03:36:59 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:41.126 03:36:59 -- setup/common.sh@18 -- # local node= 00:04:41.126 03:36:59 -- setup/common.sh@19 -- # local var val 00:04:41.126 03:36:59 -- setup/common.sh@20 -- # local mem_f mem 00:04:41.127 03:36:59 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:41.127 03:36:59 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:41.127 03:36:59 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:41.127 03:36:59 -- setup/common.sh@28 -- # mapfile -t mem 00:04:41.127 03:36:59 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:41.127 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.127 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.127 03:36:59 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43316816 kB' 'MemAvailable: 46826456 kB' 'Buffers: 2704 kB' 'Cached: 12754148 kB' 'SwapCached: 0 kB' 'Active: 9771644 kB' 'Inactive: 3506552 kB' 'Active(anon): 9377292 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 524660 kB' 'Mapped: 206528 kB' 'Shmem: 8855948 kB' 'KReclaimable: 204720 kB' 'Slab: 580972 kB' 'SReclaimable: 204720 kB' 'SUnreclaim: 376252 kB' 'KernelStack: 12848 kB' 'PageTables: 8328 kB' 'SecPageTables: 0 kB' 
'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 10506816 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196708 kB' 'VmallocChunk: 0 kB' 'Percpu: 37632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1922652 kB' 'DirectMap2M: 15822848 kB' 'DirectMap1G: 51380224 kB' 00:04:41.127 03:36:59 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.127 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.127 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.127 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.127 03:36:59 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.127 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.127 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.127 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.127 03:36:59 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.127 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.127 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.127 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.127 03:36:59 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.127 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.127 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.127 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.127 03:36:59 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.127 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.127 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.127 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.127 03:36:59 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.127 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.127 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.127 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.127 03:36:59 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.127 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.127 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.127 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.127 03:36:59 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.127 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.127 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.127 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.127 03:36:59 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.127 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.127 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.127 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.127 03:36:59 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.127 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.127 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.127 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.127 03:36:59 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.127 03:36:59 
-- setup/common.sh@32 -- # continue 00:04:41.127 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.127 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.127 03:36:59 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.127 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.127 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.127 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.127 03:36:59 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.127 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.127 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.127 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.127 03:36:59 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.127 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.127 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.127 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.127 03:36:59 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.127 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.127 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.127 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.127 03:36:59 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.128 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.128 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.128 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.128 03:36:59 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.128 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.128 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.128 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.128 03:36:59 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.128 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.128 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.128 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.128 03:36:59 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.128 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.128 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.128 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.128 03:36:59 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.128 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.128 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.128 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.128 03:36:59 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.128 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.128 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.128 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.128 03:36:59 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.128 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.128 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.128 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.128 03:36:59 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.128 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.128 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.128 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.128 03:36:59 
-- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.128 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.128 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.128 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.128 03:36:59 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.128 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.128 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.128 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.128 03:36:59 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.128 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.128 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.128 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.128 03:36:59 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.128 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.128 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.128 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.128 03:36:59 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.128 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.128 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.128 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.128 03:36:59 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.128 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.128 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.128 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.128 03:36:59 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.128 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.128 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.128 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.128 03:36:59 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.128 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.128 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.128 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.128 03:36:59 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.128 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.128 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.128 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.128 03:36:59 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.128 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.128 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.128 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.128 03:36:59 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.128 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.128 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.128 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.128 03:36:59 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.128 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.128 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.128 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.128 03:36:59 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.128 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.128 
03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.128 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.128 03:36:59 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.128 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.128 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.128 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.128 03:36:59 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.128 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.128 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.128 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.128 03:36:59 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.128 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.128 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.128 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.128 03:36:59 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.128 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.128 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.128 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.128 03:36:59 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.128 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.128 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.128 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.128 03:36:59 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.128 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.128 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.128 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.128 03:36:59 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.128 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.128 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.128 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.128 03:36:59 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.128 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.128 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.128 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.128 03:36:59 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.128 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.128 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.128 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.128 03:36:59 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.128 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.128 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.129 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.129 03:36:59 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.129 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.129 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.129 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.129 03:36:59 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.129 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.129 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.129 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.129 03:36:59 -- 
setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.129 03:36:59 -- setup/common.sh@33 -- # echo 1024 00:04:41.129 03:36:59 -- setup/common.sh@33 -- # return 0 00:04:41.129 03:36:59 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:41.129 03:36:59 -- setup/hugepages.sh@112 -- # get_nodes 00:04:41.129 03:36:59 -- setup/hugepages.sh@27 -- # local node 00:04:41.129 03:36:59 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:41.129 03:36:59 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:41.129 03:36:59 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:41.129 03:36:59 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:41.129 03:36:59 -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:41.129 03:36:59 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:41.129 03:36:59 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:41.129 03:36:59 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:41.129 03:36:59 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:41.129 03:36:59 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:41.129 03:36:59 -- setup/common.sh@18 -- # local node=0 00:04:41.129 03:36:59 -- setup/common.sh@19 -- # local var val 00:04:41.129 03:36:59 -- setup/common.sh@20 -- # local mem_f mem 00:04:41.129 03:36:59 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:41.129 03:36:59 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:41.129 03:36:59 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:41.129 03:36:59 -- setup/common.sh@28 -- # mapfile -t mem 00:04:41.129 03:36:59 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:41.129 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.129 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.129 03:36:59 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 26339072 kB' 'MemUsed: 6490812 kB' 'SwapCached: 0 kB' 'Active: 4161944 kB' 'Inactive: 108416 kB' 'Active(anon): 4051056 kB' 'Inactive(anon): 0 kB' 'Active(file): 110888 kB' 'Inactive(file): 108416 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 4031008 kB' 'Mapped: 36372 kB' 'AnonPages: 242516 kB' 'Shmem: 3811704 kB' 'KernelStack: 7800 kB' 'PageTables: 4356 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 98916 kB' 'Slab: 322104 kB' 'SReclaimable: 98916 kB' 'SUnreclaim: 223188 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:41.129 03:36:59 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.129 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.129 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.129 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.129 03:36:59 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.129 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.129 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.129 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.129 03:36:59 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.129 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.129 
03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.129 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.129 03:36:59 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.129 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.129 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.129 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.129 03:36:59 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.129 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.129 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.129 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.129 03:36:59 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.129 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.129 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.129 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.129 03:36:59 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.129 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.129 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.129 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.129 03:36:59 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.129 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.129 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.129 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.129 03:36:59 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.129 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.129 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.129 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.129 03:36:59 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.129 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.129 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.129 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.129 03:36:59 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.129 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.129 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.129 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.129 03:36:59 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.129 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.129 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.129 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.129 03:36:59 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.129 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.129 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.129 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.129 03:36:59 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.129 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.129 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.129 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.129 03:36:59 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.129 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.129 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.129 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.129 03:36:59 -- setup/common.sh@32 -- # [[ Mapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.129 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.129 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.129 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.129 03:36:59 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.129 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.129 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.129 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.129 03:36:59 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.129 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.129 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.129 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.129 03:36:59 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.130 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.130 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.130 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.130 03:36:59 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.130 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.130 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.130 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.130 03:36:59 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.130 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.130 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.130 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.130 03:36:59 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.130 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.130 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.130 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.130 03:36:59 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.130 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.130 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.130 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.130 03:36:59 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.130 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.130 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.130 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.130 03:36:59 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.130 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.130 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.130 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.130 03:36:59 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.130 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.130 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.130 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.130 03:36:59 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.130 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.130 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.130 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.130 03:36:59 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.130 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.130 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.130 03:36:59 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:41.130 03:36:59 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.130 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.130 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.130 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.130 03:36:59 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.130 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.130 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.130 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.130 03:36:59 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.130 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.130 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.130 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.130 03:36:59 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.130 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.130 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.130 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.130 03:36:59 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.130 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.130 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.130 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.130 03:36:59 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.130 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.130 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.130 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.130 03:36:59 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.130 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.130 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.130 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.130 03:36:59 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.130 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.130 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.130 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.130 03:36:59 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.130 03:36:59 -- setup/common.sh@33 -- # echo 0 00:04:41.130 03:36:59 -- setup/common.sh@33 -- # return 0 00:04:41.130 03:36:59 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:41.130 03:36:59 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:41.130 03:36:59 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:41.130 03:36:59 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:41.130 03:36:59 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:41.130 03:36:59 -- setup/common.sh@18 -- # local node=1 00:04:41.130 03:36:59 -- setup/common.sh@19 -- # local var val 00:04:41.130 03:36:59 -- setup/common.sh@20 -- # local mem_f mem 00:04:41.130 03:36:59 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:41.130 03:36:59 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:41.130 03:36:59 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:41.130 03:36:59 -- setup/common.sh@28 -- # mapfile -t mem 00:04:41.130 03:36:59 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:41.130 
03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.130 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.130 03:36:59 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27711824 kB' 'MemFree: 16977492 kB' 'MemUsed: 10734332 kB' 'SwapCached: 0 kB' 'Active: 5610044 kB' 'Inactive: 3398136 kB' 'Active(anon): 5326580 kB' 'Inactive(anon): 0 kB' 'Active(file): 283464 kB' 'Inactive(file): 3398136 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8725872 kB' 'Mapped: 170156 kB' 'AnonPages: 282452 kB' 'Shmem: 5044272 kB' 'KernelStack: 5064 kB' 'PageTables: 4024 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 105804 kB' 'Slab: 258868 kB' 'SReclaimable: 105804 kB' 'SUnreclaim: 153064 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:41.130 03:36:59 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.130 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.130 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.130 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.130 03:36:59 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.130 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.130 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.130 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.130 03:36:59 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.130 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.130 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.130 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.130 03:36:59 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.130 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.130 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.130 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.130 03:36:59 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.130 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.130 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.130 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.130 03:36:59 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.130 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.130 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.130 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.130 03:36:59 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.130 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.130 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.130 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.130 03:36:59 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.130 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.130 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.130 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.130 03:36:59 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.130 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.130 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.130 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.130 03:36:59 -- setup/common.sh@32 -- # 
[[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.130 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.130 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.131 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.131 03:36:59 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.131 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.131 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.131 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.131 03:36:59 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.131 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.131 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.131 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.131 03:36:59 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.131 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.131 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.131 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.131 03:36:59 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.131 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.131 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.131 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.131 03:36:59 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.131 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.131 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.131 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.131 03:36:59 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.131 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.131 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.131 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.131 03:36:59 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.131 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.131 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.131 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.131 03:36:59 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.131 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.131 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.131 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.131 03:36:59 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.131 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.131 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.131 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.131 03:36:59 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.131 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.131 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.131 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.131 03:36:59 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.131 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.131 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.131 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.131 03:36:59 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.131 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.131 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.131 03:36:59 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:41.131 03:36:59 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.131 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.131 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.131 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.131 03:36:59 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.131 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.131 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.131 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.131 03:36:59 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.131 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.131 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.131 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.131 03:36:59 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.131 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.131 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.131 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.131 03:36:59 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.131 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.131 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.131 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.131 03:36:59 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.131 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.131 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.131 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.131 03:36:59 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.131 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.131 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.131 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.131 03:36:59 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.131 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.131 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.131 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.131 03:36:59 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.131 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.131 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.131 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.131 03:36:59 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.131 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.131 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.131 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.131 03:36:59 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.131 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.131 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.131 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.131 03:36:59 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.131 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.131 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.131 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.131 03:36:59 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.131 03:36:59 -- 
setup/common.sh@32 -- # continue 00:04:41.131 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.131 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.131 03:36:59 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.131 03:36:59 -- setup/common.sh@32 -- # continue 00:04:41.131 03:36:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.131 03:36:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.131 03:36:59 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.131 03:36:59 -- setup/common.sh@33 -- # echo 0 00:04:41.131 03:36:59 -- setup/common.sh@33 -- # return 0 00:04:41.131 03:36:59 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:41.131 03:36:59 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:41.131 03:36:59 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:41.131 03:36:59 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:41.131 03:36:59 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:41.131 node0=512 expecting 512 00:04:41.131 03:36:59 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:41.131 03:36:59 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:41.131 03:36:59 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:41.131 03:36:59 -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:04:41.131 node1=512 expecting 512 00:04:41.131 03:36:59 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:41.131 00:04:41.131 real 0m1.505s 00:04:41.131 user 0m0.656s 00:04:41.131 sys 0m0.815s 00:04:41.131 03:36:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:41.131 03:36:59 -- common/autotest_common.sh@10 -- # set +x 00:04:41.131 ************************************ 00:04:41.131 END TEST even_2G_alloc 00:04:41.131 ************************************ 00:04:41.131 03:36:59 -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:04:41.131 03:36:59 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:41.131 03:36:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:41.131 03:36:59 -- common/autotest_common.sh@10 -- # set +x 00:04:41.131 ************************************ 00:04:41.131 START TEST odd_alloc 00:04:41.131 ************************************ 00:04:41.131 03:36:59 -- common/autotest_common.sh@1104 -- # odd_alloc 00:04:41.132 03:36:59 -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:04:41.132 03:36:59 -- setup/hugepages.sh@49 -- # local size=2098176 00:04:41.132 03:36:59 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:41.132 03:36:59 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:41.132 03:36:59 -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:04:41.132 03:36:59 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:41.132 03:36:59 -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:41.132 03:36:59 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:41.132 03:36:59 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:04:41.132 03:36:59 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:41.132 03:36:59 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:41.132 03:36:59 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:41.132 03:36:59 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:41.132 03:36:59 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:41.132 03:36:59 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:41.132 03:36:59 -- setup/hugepages.sh@82 -- # 
nodes_test[_no_nodes - 1]=512 00:04:41.132 03:36:59 -- setup/hugepages.sh@83 -- # : 513 00:04:41.132 03:36:59 -- setup/hugepages.sh@84 -- # : 1 00:04:41.132 03:36:59 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:41.132 03:36:59 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:04:41.132 03:36:59 -- setup/hugepages.sh@83 -- # : 0 00:04:41.132 03:36:59 -- setup/hugepages.sh@84 -- # : 0 00:04:41.132 03:36:59 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:41.132 03:36:59 -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:04:41.132 03:36:59 -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:04:41.132 03:36:59 -- setup/hugepages.sh@160 -- # setup output 00:04:41.132 03:36:59 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:41.132 03:36:59 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:42.536 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:42.536 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:42.536 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:42.536 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:42.536 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:42.536 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:42.536 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:42.536 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:42.536 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:42.536 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:42.536 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:42.536 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:42.536 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:42.536 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:42.536 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:42.536 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:42.536 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:42.536 03:37:01 -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:04:42.536 03:37:01 -- setup/hugepages.sh@89 -- # local node 00:04:42.536 03:37:01 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:42.536 03:37:01 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:42.536 03:37:01 -- setup/hugepages.sh@92 -- # local surp 00:04:42.536 03:37:01 -- setup/hugepages.sh@93 -- # local resv 00:04:42.536 03:37:01 -- setup/hugepages.sh@94 -- # local anon 00:04:42.536 03:37:01 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:42.536 03:37:01 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:42.536 03:37:01 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:42.536 03:37:01 -- setup/common.sh@18 -- # local node= 00:04:42.536 03:37:01 -- setup/common.sh@19 -- # local var val 00:04:42.536 03:37:01 -- setup/common.sh@20 -- # local mem_f mem 00:04:42.537 03:37:01 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:42.537 03:37:01 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:42.537 03:37:01 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:42.537 03:37:01 -- setup/common.sh@28 -- # mapfile -t mem 00:04:42.537 03:37:01 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:42.537 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.537 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.537 03:37:01 -- 
setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43297752 kB' 'MemAvailable: 46807392 kB' 'Buffers: 2704 kB' 'Cached: 12754212 kB' 'SwapCached: 0 kB' 'Active: 9771332 kB' 'Inactive: 3506552 kB' 'Active(anon): 9376980 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 524348 kB' 'Mapped: 205688 kB' 'Shmem: 8856012 kB' 'KReclaimable: 204720 kB' 'Slab: 580908 kB' 'SReclaimable: 204720 kB' 'SUnreclaim: 376188 kB' 'KernelStack: 13136 kB' 'PageTables: 9324 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609856 kB' 'Committed_AS: 10492788 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196852 kB' 'VmallocChunk: 0 kB' 'Percpu: 37632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1922652 kB' 'DirectMap2M: 15822848 kB' 'DirectMap1G: 51380224 kB' 00:04:42.537 03:37:01 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.537 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.537 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.537 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.537 03:37:01 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.537 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.537 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.537 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.537 03:37:01 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.537 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.537 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.537 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.537 03:37:01 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.537 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.537 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.537 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.537 03:37:01 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.537 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.537 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.537 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.537 03:37:01 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.537 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.537 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.537 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.537 03:37:01 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.537 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.537 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.537 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.537 03:37:01 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.537 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.537 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.537 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.537 03:37:01 -- 
setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.537 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.537 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.537 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.537 03:37:01 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.537 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.537 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.537 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.537 03:37:01 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.537 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.537 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.537 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.537 03:37:01 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.537 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.537 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.537 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.537 03:37:01 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.537 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.537 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.537 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.537 03:37:01 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.537 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.537 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.537 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.537 03:37:01 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.537 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.537 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.537 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.537 03:37:01 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.537 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.537 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.537 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.537 03:37:01 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.537 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.537 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.537 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.537 03:37:01 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.537 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.537 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.537 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.537 03:37:01 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.537 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.537 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.537 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.537 03:37:01 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.537 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.537 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.537 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.537 03:37:01 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.537 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.537 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.537 03:37:01 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:42.537 03:37:01 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.537 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.537 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.537 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.537 03:37:01 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.537 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.537 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.537 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.537 03:37:01 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.537 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.537 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.537 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.537 03:37:01 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.537 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.537 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.537 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.537 03:37:01 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.537 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.537 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.537 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.537 03:37:01 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.537 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.537 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.537 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.537 03:37:01 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.537 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.537 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.537 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.537 03:37:01 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.537 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.537 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.537 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.537 03:37:01 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.537 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.537 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.537 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.537 03:37:01 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.537 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.537 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.537 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.537 03:37:01 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.537 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.537 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.537 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.537 03:37:01 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.537 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.537 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.537 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.537 03:37:01 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.537 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.537 
03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.537 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.537 03:37:01 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.537 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.537 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.537 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.537 03:37:01 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.537 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.537 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.537 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.537 03:37:01 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.537 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.538 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.538 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.538 03:37:01 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.538 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.538 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.538 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.538 03:37:01 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.538 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.538 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.538 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.538 03:37:01 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.538 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.538 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.538 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.538 03:37:01 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.538 03:37:01 -- setup/common.sh@33 -- # echo 0 00:04:42.538 03:37:01 -- setup/common.sh@33 -- # return 0 00:04:42.538 03:37:01 -- setup/hugepages.sh@97 -- # anon=0 00:04:42.538 03:37:01 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:42.538 03:37:01 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:42.538 03:37:01 -- setup/common.sh@18 -- # local node= 00:04:42.538 03:37:01 -- setup/common.sh@19 -- # local var val 00:04:42.538 03:37:01 -- setup/common.sh@20 -- # local mem_f mem 00:04:42.538 03:37:01 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:42.538 03:37:01 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:42.538 03:37:01 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:42.538 03:37:01 -- setup/common.sh@28 -- # mapfile -t mem 00:04:42.538 03:37:01 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:42.538 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.538 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.538 03:37:01 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43299496 kB' 'MemAvailable: 46809136 kB' 'Buffers: 2704 kB' 'Cached: 12754216 kB' 'SwapCached: 0 kB' 'Active: 9769500 kB' 'Inactive: 3506552 kB' 'Active(anon): 9375148 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 522464 kB' 'Mapped: 205740 kB' 'Shmem: 8856016 kB' 'KReclaimable: 204720 kB' 'Slab: 580976 kB' 'SReclaimable: 204720 kB' 'SUnreclaim: 
376256 kB' 'KernelStack: 12816 kB' 'PageTables: 8040 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609856 kB' 'Committed_AS: 10492800 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196708 kB' 'VmallocChunk: 0 kB' 'Percpu: 37632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1922652 kB' 'DirectMap2M: 15822848 kB' 'DirectMap1G: 51380224 kB' 00:04:42.538 03:37:01 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.538 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.538 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.538 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.538 03:37:01 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.538 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.538 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.538 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.538 03:37:01 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.538 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.538 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.538 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.538 03:37:01 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.538 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.538 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.538 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.538 03:37:01 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.538 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.538 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.538 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.538 03:37:01 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.538 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.538 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.538 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.538 03:37:01 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.538 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.538 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.538 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.538 03:37:01 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.538 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.538 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.538 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.538 03:37:01 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.538 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.538 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.538 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.538 03:37:01 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.538 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.538 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.538 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.538 03:37:01 -- setup/common.sh@32 -- # [[ Active(file) 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.538 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.538 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.538 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.538 03:37:01 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.538 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.538 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.538 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.538 03:37:01 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.538 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.538 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.538 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.538 03:37:01 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.538 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.538 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.538 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.538 03:37:01 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.538 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.538 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.538 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.538 03:37:01 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.538 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.538 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.538 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.538 03:37:01 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.538 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.538 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.538 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.538 03:37:01 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.538 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.538 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.538 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.538 03:37:01 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.538 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.538 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.538 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.538 03:37:01 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.538 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.538 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.538 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.538 03:37:01 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.538 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.538 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.538 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.538 03:37:01 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.538 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.538 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.538 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.538 03:37:01 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.538 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.538 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.538 03:37:01 -- setup/common.sh@31 -- # read 
-r var val _ 00:04:42.538 03:37:01 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.538 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.538 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.538 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.538 03:37:01 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.538 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.538 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.538 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.538 03:37:01 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.538 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.538 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.538 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.538 03:37:01 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.538 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.538 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.538 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.538 03:37:01 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.538 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.538 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.538 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.538 03:37:01 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.538 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.538 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.538 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.538 03:37:01 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.538 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.538 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.538 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.538 03:37:01 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.538 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.538 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.538 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.538 03:37:01 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.539 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.539 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.539 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.539 03:37:01 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.539 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.539 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.539 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.539 03:37:01 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.539 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.539 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.539 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.539 03:37:01 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.539 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.539 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.539 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.539 03:37:01 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.539 03:37:01 -- setup/common.sh@32 -- # continue 
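The trace above is the field lookup at work: each /proc/meminfo line is split on ': ' into a key and a value, non-matching keys are skipped with "continue", and the matched value is echoed back to the caller. A minimal sketch of that pattern, with an illustrative function name rather than the real setup/common.sh helper:

# Sketch only: look up a single /proc/meminfo field the way the trace above does.
get_meminfo_field() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        # Non-matching keys fall through to the next line, as the "continue" steps above show.
        if [[ $var == "$get" ]]; then
            # Hugepage counters (HugePages_Surp, HugePages_Rsvd, ...) are bare numbers;
            # for sized fields such as MemFree the trailing "kB" lands in the discarded column.
            echo "$val"
            return 0
        fi
    done < /proc/meminfo
    echo 0   # field not present at all
}

In this run the matched key is HugePages_Surp with a value of 0, which is why the pass ends with the "echo 0" / "return 0" pair and the script records surp=0 a few steps later.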
00:04:42.539 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.539 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.539 03:37:01 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.539 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.539 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.539 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.539 03:37:01 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.539 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.539 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.539 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.539 03:37:01 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.539 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.539 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.539 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.539 03:37:01 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.539 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.539 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.539 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.539 03:37:01 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.539 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.539 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.539 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.539 03:37:01 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.539 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.539 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.539 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.539 03:37:01 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.539 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.539 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.539 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.539 03:37:01 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.539 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.539 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.539 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.539 03:37:01 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.539 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.539 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.539 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.539 03:37:01 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.539 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.539 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.539 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.539 03:37:01 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.539 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.539 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.539 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.539 03:37:01 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.539 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.539 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.539 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.539 03:37:01 -- 
setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.539 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.539 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.539 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.539 03:37:01 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.539 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.539 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.539 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.539 03:37:01 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.539 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.539 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.539 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.539 03:37:01 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.539 03:37:01 -- setup/common.sh@33 -- # echo 0 00:04:42.539 03:37:01 -- setup/common.sh@33 -- # return 0 00:04:42.539 03:37:01 -- setup/hugepages.sh@99 -- # surp=0 00:04:42.539 03:37:01 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:42.539 03:37:01 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:42.539 03:37:01 -- setup/common.sh@18 -- # local node= 00:04:42.539 03:37:01 -- setup/common.sh@19 -- # local var val 00:04:42.539 03:37:01 -- setup/common.sh@20 -- # local mem_f mem 00:04:42.539 03:37:01 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:42.539 03:37:01 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:42.539 03:37:01 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:42.539 03:37:01 -- setup/common.sh@28 -- # mapfile -t mem 00:04:42.539 03:37:01 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:42.539 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.539 03:37:01 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43299896 kB' 'MemAvailable: 46809536 kB' 'Buffers: 2704 kB' 'Cached: 12754228 kB' 'SwapCached: 0 kB' 'Active: 9769476 kB' 'Inactive: 3506552 kB' 'Active(anon): 9375124 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 522348 kB' 'Mapped: 205664 kB' 'Shmem: 8856028 kB' 'KReclaimable: 204720 kB' 'Slab: 580960 kB' 'SReclaimable: 204720 kB' 'SUnreclaim: 376240 kB' 'KernelStack: 12800 kB' 'PageTables: 7988 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609856 kB' 'Committed_AS: 10492448 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196660 kB' 'VmallocChunk: 0 kB' 'Percpu: 37632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1922652 kB' 'DirectMap2M: 15822848 kB' 'DirectMap1G: 51380224 kB' 00:04:42.539 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.539 03:37:01 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.539 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.539 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.539 03:37:01 -- setup/common.sh@31 -- # read -r var val 
_ 00:04:42.539 03:37:01 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.539 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.539 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.539 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.539 03:37:01 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.539 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.539 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.539 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.539 03:37:01 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.539 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.539 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.539 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.539 03:37:01 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.539 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.539 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.539 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.539 03:37:01 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.539 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.539 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.539 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.539 03:37:01 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.539 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.539 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.539 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.539 03:37:01 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.539 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.539 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.539 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.539 03:37:01 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.539 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.539 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.539 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.539 03:37:01 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.539 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.539 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.539 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.539 03:37:01 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.539 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.539 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.539 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.539 03:37:01 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.539 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.539 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.539 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.539 03:37:01 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.539 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.539 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.539 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.539 03:37:01 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.539 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.539 03:37:01 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:42.539 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.539 03:37:01 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.539 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.539 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.539 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.539 03:37:01 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.539 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.540 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.540 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.540 03:37:01 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.540 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.540 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.540 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.540 03:37:01 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.540 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.540 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.540 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.540 03:37:01 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.540 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.540 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.540 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.540 03:37:01 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.540 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.540 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.540 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.540 03:37:01 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.540 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.540 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.540 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.540 03:37:01 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.540 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.540 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.540 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.540 03:37:01 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.540 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.540 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.540 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.540 03:37:01 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.540 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.540 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.540 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.540 03:37:01 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.540 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.540 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.540 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.540 03:37:01 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.540 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.540 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.540 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.540 03:37:01 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.540 
03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.540 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.540 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.540 03:37:01 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.540 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.540 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.540 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.540 03:37:01 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.540 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.540 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.540 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.540 03:37:01 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.540 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.540 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.540 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.540 03:37:01 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.540 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.540 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.540 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.540 03:37:01 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.540 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.540 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.540 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.540 03:37:01 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.540 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.540 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.540 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.540 03:37:01 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.540 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.540 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.540 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.540 03:37:01 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.540 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.540 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.540 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.540 03:37:01 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.540 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.540 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.540 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.540 03:37:01 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.540 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.540 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.540 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.540 03:37:01 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.540 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.540 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.540 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.540 03:37:01 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.540 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.540 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.540 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 
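The readings gathered this way feed a small amount of bookkeeping: surplus and reserved hugepages must be zero, and HugePages_Total has to equal the 1025 pages requested for this odd_alloc case (1025 pages x 2048 kB = 2099200 kB, matching the Hugetlb figure in the snapshots above). A self-contained sketch of that arithmetic, using awk instead of the script's own helper:

# Sketch only: the consistency checks behind the nr_hugepages/resv/surplus lines that follow.
nr_hugepages=1025                                              # requested for this odd_alloc run
surp=$(awk '/^HugePages_Surp:/  {print $2}' /proc/meminfo)     # 0 in the trace
resv=$(awk '/^HugePages_Rsvd:/  {print $2}' /proc/meminfo)     # 0 in the trace
total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)    # 1025 in the trace
echo "nr_hugepages=$nr_hugepages resv_hugepages=$resv surplus_hugepages=$surp"
# The pool only passes when every requested page is accounted for.
(( total == nr_hugepages + surp + resv )) && echo "hugepage pool matches the request"

Once the totals pass, the same bookkeeping is repeated per NUMA node, which is where the "node0=... expecting ..." and "node1=... expecting ..." lines seen earlier come from.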
00:04:42.540 03:37:01 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.540 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.540 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.540 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.540 03:37:01 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.540 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.540 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.540 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.540 03:37:01 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.540 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.540 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.540 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.540 03:37:01 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.540 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.540 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.540 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.540 03:37:01 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.540 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.540 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.540 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.540 03:37:01 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.540 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.540 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.540 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.540 03:37:01 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.540 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.540 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.540 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.540 03:37:01 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.540 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.540 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.540 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.540 03:37:01 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.540 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.540 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.540 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.540 03:37:01 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.540 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.540 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.540 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.540 03:37:01 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.540 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.540 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.540 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.540 03:37:01 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.540 03:37:01 -- setup/common.sh@33 -- # echo 0 00:04:42.540 03:37:01 -- setup/common.sh@33 -- # return 0 00:04:42.540 03:37:01 -- setup/hugepages.sh@100 -- # resv=0 00:04:42.540 03:37:01 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:04:42.540 nr_hugepages=1025 00:04:42.540 03:37:01 -- setup/hugepages.sh@103 -- # 
echo resv_hugepages=0 00:04:42.540 resv_hugepages=0 00:04:42.540 03:37:01 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:42.540 surplus_hugepages=0 00:04:42.540 03:37:01 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:42.540 anon_hugepages=0 00:04:42.540 03:37:01 -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:42.540 03:37:01 -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:04:42.540 03:37:01 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:42.540 03:37:01 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:42.540 03:37:01 -- setup/common.sh@18 -- # local node= 00:04:42.540 03:37:01 -- setup/common.sh@19 -- # local var val 00:04:42.540 03:37:01 -- setup/common.sh@20 -- # local mem_f mem 00:04:42.540 03:37:01 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:42.540 03:37:01 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:42.540 03:37:01 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:42.540 03:37:01 -- setup/common.sh@28 -- # mapfile -t mem 00:04:42.540 03:37:01 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:42.540 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.540 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.541 03:37:01 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43302644 kB' 'MemAvailable: 46812284 kB' 'Buffers: 2704 kB' 'Cached: 12754244 kB' 'SwapCached: 0 kB' 'Active: 9769020 kB' 'Inactive: 3506552 kB' 'Active(anon): 9374668 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 521912 kB' 'Mapped: 205668 kB' 'Shmem: 8856044 kB' 'KReclaimable: 204720 kB' 'Slab: 580956 kB' 'SReclaimable: 204720 kB' 'SUnreclaim: 376236 kB' 'KernelStack: 12768 kB' 'PageTables: 7888 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609856 kB' 'Committed_AS: 10492596 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196628 kB' 'VmallocChunk: 0 kB' 'Percpu: 37632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1922652 kB' 'DirectMap2M: 15822848 kB' 'DirectMap1G: 51380224 kB' 00:04:42.541 03:37:01 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.541 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.541 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.541 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.541 03:37:01 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.541 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.541 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.541 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.541 03:37:01 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.541 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.541 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.541 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.541 03:37:01 -- setup/common.sh@32 -- # [[ Buffers == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.541 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.541 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.541 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.541 03:37:01 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.541 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.541 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.541 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.541 03:37:01 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.541 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.541 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.541 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.541 03:37:01 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.541 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.541 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.541 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.541 03:37:01 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.541 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.541 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.541 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.541 03:37:01 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.541 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.541 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.541 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.541 03:37:01 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.541 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.541 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.541 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.541 03:37:01 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.541 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.541 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.541 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.541 03:37:01 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.541 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.541 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.541 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.541 03:37:01 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.541 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.541 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.541 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.541 03:37:01 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.541 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.541 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.541 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.541 03:37:01 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.541 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.541 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.541 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.541 03:37:01 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.541 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.541 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 
00:04:42.541 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.541 03:37:01 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.541 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.541 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.541 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.541 03:37:01 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.541 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.541 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.541 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.541 03:37:01 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.541 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.541 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.541 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.541 03:37:01 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.541 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.541 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.541 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.541 03:37:01 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.541 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.541 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.541 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.541 03:37:01 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.541 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.541 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.541 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.541 03:37:01 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.541 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.541 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.541 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.541 03:37:01 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.541 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.541 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.541 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.541 03:37:01 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.541 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.541 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.541 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.541 03:37:01 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.541 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.541 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.541 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.541 03:37:01 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.541 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.541 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.541 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.541 03:37:01 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.541 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.541 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.541 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.541 03:37:01 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.541 
03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.541 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.541 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.541 03:37:01 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.541 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.541 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.541 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.541 03:37:01 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.541 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.541 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.541 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.541 03:37:01 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.541 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.541 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.541 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.541 03:37:01 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.541 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.541 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.541 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.542 03:37:01 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.542 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.542 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.542 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.542 03:37:01 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.542 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.542 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.542 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.542 03:37:01 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.542 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.542 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.542 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.542 03:37:01 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.542 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.542 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.542 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.542 03:37:01 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.542 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.542 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.542 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.542 03:37:01 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.542 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.542 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.542 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.542 03:37:01 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.542 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.542 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.542 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.542 03:37:01 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.542 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.542 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.542 03:37:01 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:42.542 03:37:01 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.542 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.542 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.542 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.542 03:37:01 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.542 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.542 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.542 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.542 03:37:01 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.542 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.542 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.542 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.542 03:37:01 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.542 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.542 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.542 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.542 03:37:01 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.542 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.542 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.542 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.542 03:37:01 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.542 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.542 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.542 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.542 03:37:01 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.542 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.542 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.542 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.542 03:37:01 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.542 03:37:01 -- setup/common.sh@33 -- # echo 1025 00:04:42.542 03:37:01 -- setup/common.sh@33 -- # return 0 00:04:42.542 03:37:01 -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:42.542 03:37:01 -- setup/hugepages.sh@112 -- # get_nodes 00:04:42.542 03:37:01 -- setup/hugepages.sh@27 -- # local node 00:04:42.542 03:37:01 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:42.542 03:37:01 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:42.542 03:37:01 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:42.542 03:37:01 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:04:42.542 03:37:01 -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:42.542 03:37:01 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:42.542 03:37:01 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:42.542 03:37:01 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:42.542 03:37:01 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:42.542 03:37:01 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:42.542 03:37:01 -- setup/common.sh@18 -- # local node=0 00:04:42.542 03:37:01 -- setup/common.sh@19 -- # local var val 00:04:42.542 03:37:01 -- setup/common.sh@20 -- # local mem_f mem 00:04:42.542 03:37:01 -- setup/common.sh@22 -- # 
mem_f=/proc/meminfo 00:04:42.542 03:37:01 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:42.542 03:37:01 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:42.542 03:37:01 -- setup/common.sh@28 -- # mapfile -t mem 00:04:42.542 03:37:01 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:42.542 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.542 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.542 03:37:01 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 26331616 kB' 'MemUsed: 6498268 kB' 'SwapCached: 0 kB' 'Active: 4159492 kB' 'Inactive: 108416 kB' 'Active(anon): 4048604 kB' 'Inactive(anon): 0 kB' 'Active(file): 110888 kB' 'Inactive(file): 108416 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 4031016 kB' 'Mapped: 35652 kB' 'AnonPages: 240128 kB' 'Shmem: 3811712 kB' 'KernelStack: 7800 kB' 'PageTables: 4224 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 98916 kB' 'Slab: 322060 kB' 'SReclaimable: 98916 kB' 'SUnreclaim: 223144 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:42.542 03:37:01 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.542 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.542 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.542 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.542 03:37:01 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.542 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.542 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.542 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.542 03:37:01 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.542 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.542 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.542 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.542 03:37:01 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.542 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.542 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.542 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.542 03:37:01 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.542 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.542 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.542 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.542 03:37:01 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.542 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.542 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.542 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.542 03:37:01 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.542 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.542 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.542 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.542 03:37:01 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.542 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.542 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.542 03:37:01 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:42.542 03:37:01 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.542 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.542 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.542 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.542 03:37:01 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.542 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.542 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.542 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.542 03:37:01 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.542 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.542 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.542 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.542 03:37:01 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.542 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.542 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.542 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.542 03:37:01 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.542 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.542 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.542 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.542 03:37:01 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.542 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.542 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.542 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.542 03:37:01 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.542 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.542 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.542 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.542 03:37:01 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.542 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.542 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.542 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.542 03:37:01 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.542 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.543 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.543 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.543 03:37:01 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.543 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.543 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.543 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.543 03:37:01 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.543 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.543 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.543 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.543 03:37:01 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.543 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.543 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.543 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.543 03:37:01 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.543 03:37:01 -- setup/common.sh@32 -- # 
continue 00:04:42.543 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.543 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.543 03:37:01 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.543 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.543 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.543 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.543 03:37:01 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.543 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.543 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.543 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.543 03:37:01 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.543 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.543 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.543 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.543 03:37:01 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.543 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.543 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.543 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.543 03:37:01 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.543 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.543 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.543 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.543 03:37:01 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.543 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.543 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.543 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.543 03:37:01 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.543 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.543 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.543 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.543 03:37:01 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.543 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.543 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.543 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.543 03:37:01 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.543 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.543 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.543 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.543 03:37:01 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.543 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.543 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.543 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.543 03:37:01 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.543 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.543 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.543 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.543 03:37:01 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.543 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.543 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.543 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.543 03:37:01 -- 
setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.543 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.543 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.543 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.543 03:37:01 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.543 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.543 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.543 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.543 03:37:01 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.543 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.543 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.543 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.543 03:37:01 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.543 03:37:01 -- setup/common.sh@33 -- # echo 0 00:04:42.543 03:37:01 -- setup/common.sh@33 -- # return 0 00:04:42.543 03:37:01 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:42.543 03:37:01 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:42.543 03:37:01 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:42.543 03:37:01 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:42.543 03:37:01 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:42.543 03:37:01 -- setup/common.sh@18 -- # local node=1 00:04:42.543 03:37:01 -- setup/common.sh@19 -- # local var val 00:04:42.543 03:37:01 -- setup/common.sh@20 -- # local mem_f mem 00:04:42.543 03:37:01 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:42.543 03:37:01 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:42.543 03:37:01 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:42.543 03:37:01 -- setup/common.sh@28 -- # mapfile -t mem 00:04:42.543 03:37:01 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:42.543 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.543 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.543 03:37:01 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27711824 kB' 'MemFree: 16973128 kB' 'MemUsed: 10738696 kB' 'SwapCached: 0 kB' 'Active: 5609796 kB' 'Inactive: 3398136 kB' 'Active(anon): 5326332 kB' 'Inactive(anon): 0 kB' 'Active(file): 283464 kB' 'Inactive(file): 3398136 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8725960 kB' 'Mapped: 170016 kB' 'AnonPages: 282032 kB' 'Shmem: 5044360 kB' 'KernelStack: 4984 kB' 'PageTables: 3716 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 105804 kB' 'Slab: 258888 kB' 'SReclaimable: 105804 kB' 'SUnreclaim: 153084 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:04:42.543 03:37:01 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.543 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.543 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.543 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.543 03:37:01 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.543 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.543 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.543 03:37:01 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:42.543 03:37:01 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.543 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.543 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.543 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.543 03:37:01 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.543 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.543 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.543 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.543 03:37:01 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.543 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.543 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.543 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.543 03:37:01 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.543 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.543 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.543 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.543 03:37:01 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.543 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.543 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.543 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.543 03:37:01 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.543 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.543 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.543 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.543 03:37:01 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.543 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.543 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.543 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.543 03:37:01 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.543 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.543 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.543 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.543 03:37:01 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.543 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.543 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.543 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.543 03:37:01 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.543 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.543 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.543 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.543 03:37:01 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.543 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.543 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.543 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.543 03:37:01 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.543 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.543 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.543 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.543 03:37:01 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.543 03:37:01 -- setup/common.sh@32 -- # 
continue 00:04:42.543 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.543 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.543 03:37:01 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.543 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.543 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.543 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.543 03:37:01 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.544 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.544 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.544 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.544 03:37:01 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.544 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.544 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.544 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.544 03:37:01 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.544 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.544 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.544 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.544 03:37:01 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.544 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.544 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.544 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.544 03:37:01 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.544 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.544 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.544 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.544 03:37:01 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.544 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.544 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.544 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.544 03:37:01 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.544 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.544 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.544 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.544 03:37:01 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.544 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.544 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.544 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.544 03:37:01 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.544 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.544 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.544 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.544 03:37:01 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.544 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.544 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.544 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.544 03:37:01 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.544 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.544 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.544 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.544 03:37:01 -- setup/common.sh@32 -- # [[ 
SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.544 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.544 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.544 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.544 03:37:01 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.544 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.544 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.544 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.544 03:37:01 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.544 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.544 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.544 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.544 03:37:01 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.544 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.544 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.544 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.544 03:37:01 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.544 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.544 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.544 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.544 03:37:01 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.544 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.544 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.544 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.544 03:37:01 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.544 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.544 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.544 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.544 03:37:01 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.544 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.544 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.544 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.544 03:37:01 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.544 03:37:01 -- setup/common.sh@32 -- # continue 00:04:42.802 03:37:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.803 03:37:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.803 03:37:01 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.803 03:37:01 -- setup/common.sh@33 -- # echo 0 00:04:42.803 03:37:01 -- setup/common.sh@33 -- # return 0 00:04:42.803 03:37:01 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:42.803 03:37:01 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:42.803 03:37:01 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:42.803 03:37:01 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:42.803 03:37:01 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:04:42.803 node0=512 expecting 513 00:04:42.803 03:37:01 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:42.803 03:37:01 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:42.803 03:37:01 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:42.803 03:37:01 -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:04:42.803 node1=513 expecting 512 00:04:42.803 
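The trace above is the odd_alloc verification path: get_meminfo splits each meminfo line on ': ', skips non-matching keys with continue, and echoes the value once the requested key (HugePages_Total, HugePages_Surp, ...) is hit, reading /proc/meminfo globally or /sys/devices/system/node/nodeN/meminfo when a node index is passed; the per-node results are then checked against the expected 512/513 split. A minimal sketch of that per-node lookup, assuming the standard sysfs node meminfo layout; the helper name get_node_meminfo is illustrative and not part of the scripts traced here:

    # Sketch only: print the value of one field from a node's meminfo,
    # mirroring the key-match-then-echo loop shown in the trace above.
    get_node_meminfo() {
        local field=$1 node=$2
        local file=/sys/devices/system/node/node${node}/meminfo
        # Node meminfo lines look like: "Node 0 HugePages_Surp:     0"
        while read -r _ _ key val _; do
            if [[ ${key%:} == "$field" ]]; then
                echo "$val"
                return 0
            fi
        done < "$file"
        return 1
    }

    # e.g. get_node_meminfo HugePages_Surp 0   # prints 0 in the run logged above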
03:37:01 -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:04:42.803 00:04:42.803 real 0m1.490s 00:04:42.803 user 0m0.620s 00:04:42.803 sys 0m0.835s 00:04:42.803 03:37:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:42.803 03:37:01 -- common/autotest_common.sh@10 -- # set +x 00:04:42.803 ************************************ 00:04:42.803 END TEST odd_alloc 00:04:42.803 ************************************ 00:04:42.803 03:37:01 -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:04:42.803 03:37:01 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:42.803 03:37:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:42.803 03:37:01 -- common/autotest_common.sh@10 -- # set +x 00:04:42.803 ************************************ 00:04:42.803 START TEST custom_alloc 00:04:42.803 ************************************ 00:04:42.803 03:37:01 -- common/autotest_common.sh@1104 -- # custom_alloc 00:04:42.803 03:37:01 -- setup/hugepages.sh@167 -- # local IFS=, 00:04:42.803 03:37:01 -- setup/hugepages.sh@169 -- # local node 00:04:42.803 03:37:01 -- setup/hugepages.sh@170 -- # nodes_hp=() 00:04:42.803 03:37:01 -- setup/hugepages.sh@170 -- # local nodes_hp 00:04:42.803 03:37:01 -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:04:42.803 03:37:01 -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:04:42.803 03:37:01 -- setup/hugepages.sh@49 -- # local size=1048576 00:04:42.803 03:37:01 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:42.803 03:37:01 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:42.803 03:37:01 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:42.803 03:37:01 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:42.803 03:37:01 -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:42.803 03:37:01 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:42.803 03:37:01 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:42.803 03:37:01 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:42.803 03:37:01 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:42.803 03:37:01 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:42.803 03:37:01 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:42.803 03:37:01 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:42.803 03:37:01 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:42.803 03:37:01 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:04:42.803 03:37:01 -- setup/hugepages.sh@83 -- # : 256 00:04:42.803 03:37:01 -- setup/hugepages.sh@84 -- # : 1 00:04:42.803 03:37:01 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:42.803 03:37:01 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:04:42.803 03:37:01 -- setup/hugepages.sh@83 -- # : 0 00:04:42.803 03:37:01 -- setup/hugepages.sh@84 -- # : 0 00:04:42.803 03:37:01 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:42.803 03:37:01 -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:04:42.803 03:37:01 -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:04:42.803 03:37:01 -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:04:42.803 03:37:01 -- setup/hugepages.sh@49 -- # local size=2097152 00:04:42.803 03:37:01 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:42.803 03:37:01 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:42.803 03:37:01 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:42.803 03:37:01 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:42.803 03:37:01 -- setup/hugepages.sh@62 -- 
# user_nodes=() 00:04:42.803 03:37:01 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:42.803 03:37:01 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:42.803 03:37:01 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:42.803 03:37:01 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:42.803 03:37:01 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:42.803 03:37:01 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:42.803 03:37:01 -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:04:42.803 03:37:01 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:42.803 03:37:01 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:42.803 03:37:01 -- setup/hugepages.sh@78 -- # return 0 00:04:42.803 03:37:01 -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:04:42.803 03:37:01 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:04:42.803 03:37:01 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:42.803 03:37:01 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:42.803 03:37:01 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:04:42.803 03:37:01 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:42.803 03:37:01 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:42.803 03:37:01 -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:04:42.803 03:37:01 -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:42.803 03:37:01 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:42.803 03:37:01 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:42.803 03:37:01 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:42.803 03:37:01 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:42.803 03:37:01 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:42.803 03:37:01 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:42.803 03:37:01 -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:04:42.803 03:37:01 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:42.803 03:37:01 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:42.803 03:37:01 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:42.803 03:37:01 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:04:42.803 03:37:01 -- setup/hugepages.sh@78 -- # return 0 00:04:42.803 03:37:01 -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:04:42.803 03:37:01 -- setup/hugepages.sh@187 -- # setup output 00:04:42.803 03:37:01 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:42.803 03:37:01 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:43.738 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:43.738 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:43.738 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:43.738 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:43.738 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:43.738 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:43.738 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:43.738 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:43.738 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:43.738 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:43.738 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:43.738 0000:80:04.5 (8086 
0e25): Already using the vfio-pci driver 00:04:43.738 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:43.738 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:43.738 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:43.738 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:43.738 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:44.000 03:37:02 -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:04:44.000 03:37:02 -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:04:44.000 03:37:02 -- setup/hugepages.sh@89 -- # local node 00:04:44.000 03:37:02 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:44.000 03:37:02 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:44.000 03:37:02 -- setup/hugepages.sh@92 -- # local surp 00:04:44.000 03:37:02 -- setup/hugepages.sh@93 -- # local resv 00:04:44.000 03:37:02 -- setup/hugepages.sh@94 -- # local anon 00:04:44.000 03:37:02 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:44.000 03:37:02 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:44.000 03:37:02 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:44.000 03:37:02 -- setup/common.sh@18 -- # local node= 00:04:44.000 03:37:02 -- setup/common.sh@19 -- # local var val 00:04:44.000 03:37:02 -- setup/common.sh@20 -- # local mem_f mem 00:04:44.000 03:37:02 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:44.000 03:37:02 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:44.000 03:37:02 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:44.000 03:37:02 -- setup/common.sh@28 -- # mapfile -t mem 00:04:44.000 03:37:02 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:44.000 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.000 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.000 03:37:02 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 42232416 kB' 'MemAvailable: 45742056 kB' 'Buffers: 2704 kB' 'Cached: 12754308 kB' 'SwapCached: 0 kB' 'Active: 9769744 kB' 'Inactive: 3506552 kB' 'Active(anon): 9375392 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 522600 kB' 'Mapped: 205672 kB' 'Shmem: 8856108 kB' 'KReclaimable: 204720 kB' 'Slab: 581164 kB' 'SReclaimable: 204720 kB' 'SUnreclaim: 376444 kB' 'KernelStack: 12784 kB' 'PageTables: 7880 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086592 kB' 'Committed_AS: 10493144 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196628 kB' 'VmallocChunk: 0 kB' 'Percpu: 37632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1922652 kB' 'DirectMap2M: 15822848 kB' 'DirectMap1G: 51380224 kB' 00:04:44.000 03:37:02 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.000 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.000 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.000 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.000 03:37:02 -- setup/common.sh@32 -- # [[ MemFree == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.000 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.000 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.000 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.000 03:37:02 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.000 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.000 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.000 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.000 03:37:02 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.000 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.000 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.000 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.000 03:37:02 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.000 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.000 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.000 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.000 03:37:02 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.000 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.000 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.000 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.000 03:37:02 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.000 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.000 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.000 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.000 03:37:02 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.000 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.000 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.000 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.000 03:37:02 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.000 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.000 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.000 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.000 03:37:02 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.000 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.000 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.000 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.000 03:37:02 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.000 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.000 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.000 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.000 03:37:02 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.000 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.000 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.000 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.000 03:37:02 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.000 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.000 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.000 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.000 03:37:02 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.000 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.000 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.001 03:37:02 -- setup/common.sh@31 -- # read -r var 
val _ 00:04:44.001 03:37:02 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.001 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.001 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.001 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.001 03:37:02 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.001 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.001 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.001 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.001 03:37:02 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.001 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.001 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.001 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.001 03:37:02 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.001 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.001 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.001 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.001 03:37:02 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.001 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.001 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.001 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.001 03:37:02 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.001 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.001 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.001 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.001 03:37:02 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.001 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.001 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.001 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.001 03:37:02 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.001 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.001 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.001 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.001 03:37:02 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.001 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.001 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.001 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.001 03:37:02 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.001 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.001 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.001 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.001 03:37:02 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.001 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.001 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.001 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.001 03:37:02 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.001 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.001 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.001 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.001 03:37:02 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.001 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.001 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.001 
03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.001 03:37:02 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.001 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.001 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.001 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.001 03:37:02 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.001 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.001 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.001 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.001 03:37:02 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.001 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.001 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.001 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.001 03:37:02 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.001 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.001 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.001 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.001 03:37:02 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.001 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.001 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.001 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.001 03:37:02 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.001 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.001 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.001 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.001 03:37:02 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.001 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.001 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.001 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.001 03:37:02 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.001 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.001 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.001 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.001 03:37:02 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.001 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.001 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.001 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.001 03:37:02 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.001 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.001 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.001 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.001 03:37:02 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.001 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.001 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.001 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.001 03:37:02 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.001 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.001 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.001 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.001 03:37:02 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.001 03:37:02 -- 
setup/common.sh@32 -- # continue 00:04:44.001 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.001 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.001 03:37:02 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.001 03:37:02 -- setup/common.sh@33 -- # echo 0 00:04:44.001 03:37:02 -- setup/common.sh@33 -- # return 0 00:04:44.001 03:37:02 -- setup/hugepages.sh@97 -- # anon=0 00:04:44.001 03:37:02 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:44.001 03:37:02 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:44.001 03:37:02 -- setup/common.sh@18 -- # local node= 00:04:44.001 03:37:02 -- setup/common.sh@19 -- # local var val 00:04:44.001 03:37:02 -- setup/common.sh@20 -- # local mem_f mem 00:04:44.001 03:37:02 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:44.001 03:37:02 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:44.001 03:37:02 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:44.001 03:37:02 -- setup/common.sh@28 -- # mapfile -t mem 00:04:44.001 03:37:02 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:44.001 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.001 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.001 03:37:02 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 42237648 kB' 'MemAvailable: 45747288 kB' 'Buffers: 2704 kB' 'Cached: 12754308 kB' 'SwapCached: 0 kB' 'Active: 9769812 kB' 'Inactive: 3506552 kB' 'Active(anon): 9375460 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 522744 kB' 'Mapped: 205672 kB' 'Shmem: 8856108 kB' 'KReclaimable: 204720 kB' 'Slab: 581164 kB' 'SReclaimable: 204720 kB' 'SUnreclaim: 376444 kB' 'KernelStack: 12752 kB' 'PageTables: 7788 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086592 kB' 'Committed_AS: 10493156 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196596 kB' 'VmallocChunk: 0 kB' 'Percpu: 37632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1922652 kB' 'DirectMap2M: 15822848 kB' 'DirectMap1G: 51380224 kB' 00:04:44.001 03:37:02 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.001 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.001 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.001 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.001 03:37:02 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.001 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.001 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.001 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.001 03:37:02 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.001 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.001 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.001 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.001 03:37:02 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.001 
03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.001 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.001 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.001 03:37:02 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.001 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.001 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.001 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.001 03:37:02 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.001 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.001 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.001 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.001 03:37:02 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.001 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.001 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.001 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.001 03:37:02 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.001 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.001 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.001 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.001 03:37:02 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.001 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.002 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.002 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.002 03:37:02 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.002 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.002 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.002 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.002 03:37:02 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.002 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.002 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.002 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.002 03:37:02 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.002 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.002 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.002 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.002 03:37:02 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.002 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.002 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.002 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.002 03:37:02 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.002 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.002 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.002 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.002 03:37:02 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.002 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.002 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.002 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.002 03:37:02 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.002 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.002 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.002 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.002 
03:37:02 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.002 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.002 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.002 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.002 03:37:02 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.002 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.002 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.002 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.002 03:37:02 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.002 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.002 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.002 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.002 03:37:02 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.002 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.002 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.002 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.002 03:37:02 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.002 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.002 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.002 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.002 03:37:02 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.002 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.002 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.002 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.002 03:37:02 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.002 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.002 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.002 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.002 03:37:02 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.002 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.002 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.002 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.002 03:37:02 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.002 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.002 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.002 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.002 03:37:02 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.002 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.002 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.002 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.002 03:37:02 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.002 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.002 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.002 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.002 03:37:02 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.002 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.002 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.002 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.002 03:37:02 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.002 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.002 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 
00:04:44.002 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.002 03:37:02 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.002 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.002 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.002 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.002 03:37:02 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.002 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.002 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.002 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.002 03:37:02 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.002 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.002 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.002 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.002 03:37:02 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.002 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.002 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.002 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.002 03:37:02 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.002 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.002 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.002 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.002 03:37:02 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.002 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.002 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.002 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.002 03:37:02 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.002 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.002 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.002 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.002 03:37:02 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.002 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.002 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.002 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.002 03:37:02 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.002 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.002 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.002 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.002 03:37:02 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.002 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.002 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.002 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.002 03:37:02 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.002 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.002 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.002 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.002 03:37:02 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.002 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.002 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.002 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.002 03:37:02 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:04:44.002 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.002 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.002 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.002 03:37:02 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.002 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.002 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.002 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.002 03:37:02 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.002 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.002 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.002 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.002 03:37:02 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.002 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.002 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.002 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.002 03:37:02 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.002 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.002 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.002 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.002 03:37:02 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.002 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.002 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.002 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.002 03:37:02 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.002 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.002 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.002 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.002 03:37:02 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.002 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.002 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.002 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.002 03:37:02 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.002 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.002 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.002 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.002 03:37:02 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.002 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.002 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.002 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.002 03:37:02 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.002 03:37:02 -- setup/common.sh@33 -- # echo 0 00:04:44.003 03:37:02 -- setup/common.sh@33 -- # return 0 00:04:44.003 03:37:02 -- setup/hugepages.sh@99 -- # surp=0 00:04:44.003 03:37:02 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:44.003 03:37:02 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:44.003 03:37:02 -- setup/common.sh@18 -- # local node= 00:04:44.003 03:37:02 -- setup/common.sh@19 -- # local var val 00:04:44.003 03:37:02 -- setup/common.sh@20 -- # local mem_f mem 00:04:44.003 03:37:02 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:44.003 03:37:02 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:44.003 03:37:02 -- 
setup/common.sh@25 -- # [[ -n '' ]] 00:04:44.003 03:37:02 -- setup/common.sh@28 -- # mapfile -t mem 00:04:44.003 03:37:02 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:44.003 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.003 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.003 03:37:02 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 42237496 kB' 'MemAvailable: 45747136 kB' 'Buffers: 2704 kB' 'Cached: 12754320 kB' 'SwapCached: 0 kB' 'Active: 9769524 kB' 'Inactive: 3506552 kB' 'Active(anon): 9375172 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 522424 kB' 'Mapped: 205672 kB' 'Shmem: 8856120 kB' 'KReclaimable: 204720 kB' 'Slab: 581244 kB' 'SReclaimable: 204720 kB' 'SUnreclaim: 376524 kB' 'KernelStack: 12832 kB' 'PageTables: 7992 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086592 kB' 'Committed_AS: 10493172 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196596 kB' 'VmallocChunk: 0 kB' 'Percpu: 37632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1922652 kB' 'DirectMap2M: 15822848 kB' 'DirectMap1G: 51380224 kB' 00:04:44.003 03:37:02 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.003 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.003 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.003 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.003 03:37:02 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.003 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.003 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.003 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.003 03:37:02 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.003 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.003 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.003 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.003 03:37:02 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.003 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.003 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.003 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.003 03:37:02 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.003 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.003 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.003 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.003 03:37:02 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.003 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.003 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.003 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.003 03:37:02 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.003 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.003 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.003 03:37:02 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:44.003 03:37:02 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.003 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.003 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.003 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.003 03:37:02 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.003 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.003 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.003 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.003 03:37:02 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.003 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.003 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.003 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.003 03:37:02 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.003 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.003 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.003 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.003 03:37:02 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.003 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.003 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.003 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.003 03:37:02 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.003 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.003 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.003 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.003 03:37:02 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.003 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.003 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.003 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.003 03:37:02 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.003 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.003 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.003 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.003 03:37:02 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.003 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.003 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.003 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.003 03:37:02 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.003 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.003 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.003 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.003 03:37:02 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.003 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.003 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.003 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.003 03:37:02 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.003 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.003 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.003 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.003 03:37:02 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.003 03:37:02 -- setup/common.sh@32 -- # 
continue 00:04:44.003 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.003 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.003 03:37:02 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.003 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.003 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.003 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.003 03:37:02 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.003 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.003 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.003 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.003 03:37:02 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.003 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.003 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.003 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.003 03:37:02 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.003 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.003 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.003 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.003 03:37:02 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.003 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.003 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.003 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.003 03:37:02 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.003 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.003 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.003 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.003 03:37:02 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.003 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.003 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.003 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.003 03:37:02 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.003 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.003 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.003 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.003 03:37:02 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.003 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.003 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.003 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.003 03:37:02 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.003 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.003 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.003 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.003 03:37:02 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.003 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.003 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.003 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.003 03:37:02 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.003 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.003 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.003 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.003 03:37:02 -- setup/common.sh@32 -- # [[ 
WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.003 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.003 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.003 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.003 03:37:02 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.003 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.003 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.003 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.003 03:37:02 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.003 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.003 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.003 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.003 03:37:02 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.004 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.004 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.004 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.004 03:37:02 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.004 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.004 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.004 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.004 03:37:02 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.004 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.004 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.004 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.004 03:37:02 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.004 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.004 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.004 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.004 03:37:02 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.004 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.004 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.004 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.004 03:37:02 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.004 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.004 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.004 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.004 03:37:02 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.004 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.004 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.004 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.004 03:37:02 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.004 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.004 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.004 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.004 03:37:02 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.004 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.004 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.004 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.004 03:37:02 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.004 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.004 03:37:02 -- setup/common.sh@31 -- # 
IFS=': ' 00:04:44.004 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.004 03:37:02 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.004 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.004 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.004 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.004 03:37:02 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.004 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.004 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.004 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.004 03:37:02 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.004 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.004 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.004 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.004 03:37:02 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.004 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.004 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.004 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.004 03:37:02 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.004 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.004 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.004 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.004 03:37:02 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.004 03:37:02 -- setup/common.sh@33 -- # echo 0 00:04:44.004 03:37:02 -- setup/common.sh@33 -- # return 0 00:04:44.004 03:37:02 -- setup/hugepages.sh@100 -- # resv=0 00:04:44.004 03:37:02 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:04:44.004 nr_hugepages=1536 00:04:44.004 03:37:02 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:44.004 resv_hugepages=0 00:04:44.004 03:37:02 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:44.004 surplus_hugepages=0 00:04:44.004 03:37:02 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:44.004 anon_hugepages=0 00:04:44.004 03:37:02 -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:04:44.004 03:37:02 -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:04:44.004 03:37:02 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:44.004 03:37:02 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:44.004 03:37:02 -- setup/common.sh@18 -- # local node= 00:04:44.004 03:37:02 -- setup/common.sh@19 -- # local var val 00:04:44.004 03:37:02 -- setup/common.sh@20 -- # local mem_f mem 00:04:44.004 03:37:02 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:44.004 03:37:02 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:44.004 03:37:02 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:44.004 03:37:02 -- setup/common.sh@28 -- # mapfile -t mem 00:04:44.004 03:37:02 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:44.004 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.004 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.004 03:37:02 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 42237496 kB' 'MemAvailable: 45747136 kB' 'Buffers: 2704 kB' 'Cached: 12754332 kB' 'SwapCached: 0 kB' 'Active: 9769376 kB' 'Inactive: 3506552 kB' 'Active(anon): 9375024 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 
'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 522212 kB' 'Mapped: 205672 kB' 'Shmem: 8856132 kB' 'KReclaimable: 204720 kB' 'Slab: 581244 kB' 'SReclaimable: 204720 kB' 'SUnreclaim: 376524 kB' 'KernelStack: 12816 kB' 'PageTables: 7940 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086592 kB' 'Committed_AS: 10493184 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196596 kB' 'VmallocChunk: 0 kB' 'Percpu: 37632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1922652 kB' 'DirectMap2M: 15822848 kB' 'DirectMap1G: 51380224 kB' 00:04:44.004 03:37:02 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.004 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.004 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.004 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.004 03:37:02 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.004 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.004 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.004 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.004 03:37:02 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.004 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.004 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.004 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.004 03:37:02 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.004 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.004 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.004 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.004 03:37:02 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.004 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.004 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.004 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.004 03:37:02 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.004 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.004 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.004 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.004 03:37:02 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.004 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.004 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.004 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.004 03:37:02 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.004 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.004 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.004 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.004 03:37:02 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.004 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.004 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.004 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.004 03:37:02 -- 
setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.004 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.005 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.005 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.005 03:37:02 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.005 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.005 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.005 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.005 03:37:02 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.005 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.005 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.005 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.005 03:37:02 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.005 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.005 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.005 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.005 03:37:02 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.005 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.005 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.005 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.005 03:37:02 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.005 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.005 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.005 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.005 03:37:02 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.005 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.005 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.005 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.005 03:37:02 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.005 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.005 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.005 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.005 03:37:02 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.005 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.005 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.005 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.005 03:37:02 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.005 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.005 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.005 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.005 03:37:02 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.005 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.005 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.005 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.005 03:37:02 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.005 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.005 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.005 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.005 03:37:02 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.005 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.005 03:37:02 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:44.005 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.005 03:37:02 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.005 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.005 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.005 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.005 03:37:02 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.005 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.005 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.005 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.005 03:37:02 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.005 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.005 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.005 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.005 03:37:02 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.005 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.005 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.005 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.005 03:37:02 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.005 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.005 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.005 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.005 03:37:02 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.005 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.005 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.005 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.005 03:37:02 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.005 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.005 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.005 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.005 03:37:02 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.005 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.005 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.005 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.005 03:37:02 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.005 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.005 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.005 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.005 03:37:02 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.005 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.005 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.005 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.005 03:37:02 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.005 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.005 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.005 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.005 03:37:02 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.005 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.005 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.005 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.005 03:37:02 -- setup/common.sh@32 -- # [[ 
Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.005 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.005 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.005 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.005 03:37:02 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.005 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.005 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.005 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.005 03:37:02 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.005 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.005 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.005 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.005 03:37:02 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.005 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.005 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.005 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.005 03:37:02 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.005 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.005 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.005 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.005 03:37:02 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.005 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.005 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.005 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.005 03:37:02 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.005 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.005 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.005 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.005 03:37:02 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.005 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.005 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.005 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.005 03:37:02 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.005 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.005 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.005 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.005 03:37:02 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.005 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.005 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.005 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.005 03:37:02 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.005 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.005 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.005 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.005 03:37:02 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.005 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.005 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.005 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.005 03:37:02 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.005 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.005 03:37:02 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:44.005 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.005 03:37:02 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.005 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.005 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.005 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.005 03:37:02 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.005 03:37:02 -- setup/common.sh@33 -- # echo 1536 00:04:44.005 03:37:02 -- setup/common.sh@33 -- # return 0 00:04:44.005 03:37:02 -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:04:44.005 03:37:02 -- setup/hugepages.sh@112 -- # get_nodes 00:04:44.005 03:37:02 -- setup/hugepages.sh@27 -- # local node 00:04:44.005 03:37:02 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:44.005 03:37:02 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:44.005 03:37:02 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:44.005 03:37:02 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:44.005 03:37:02 -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:44.005 03:37:02 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:44.005 03:37:02 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:44.005 03:37:02 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:44.005 03:37:02 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:44.005 03:37:02 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:44.005 03:37:02 -- setup/common.sh@18 -- # local node=0 00:04:44.005 03:37:02 -- setup/common.sh@19 -- # local var val 00:04:44.006 03:37:02 -- setup/common.sh@20 -- # local mem_f mem 00:04:44.006 03:37:02 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:44.006 03:37:02 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:44.006 03:37:02 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:44.006 03:37:02 -- setup/common.sh@28 -- # mapfile -t mem 00:04:44.006 03:37:02 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:44.006 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.006 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.006 03:37:02 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 26325840 kB' 'MemUsed: 6504044 kB' 'SwapCached: 0 kB' 'Active: 4160512 kB' 'Inactive: 108416 kB' 'Active(anon): 4049624 kB' 'Inactive(anon): 0 kB' 'Active(file): 110888 kB' 'Inactive(file): 108416 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 4031072 kB' 'Mapped: 35656 kB' 'AnonPages: 241112 kB' 'Shmem: 3811768 kB' 'KernelStack: 7880 kB' 'PageTables: 4328 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 98916 kB' 'Slab: 322200 kB' 'SReclaimable: 98916 kB' 'SUnreclaim: 223284 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:44.006 03:37:02 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.006 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.006 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.006 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.006 03:37:02 -- 
setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.006 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.006 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.006 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.006 03:37:02 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.006 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.006 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.006 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.006 03:37:02 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.006 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.006 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.006 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.006 03:37:02 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.006 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.006 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.006 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.006 03:37:02 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.006 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.006 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.006 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.006 03:37:02 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.006 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.006 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.006 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.006 03:37:02 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.006 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.006 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.006 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.006 03:37:02 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.006 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.006 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.006 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.006 03:37:02 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.006 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.006 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.006 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.006 03:37:02 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.006 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.006 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.006 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.006 03:37:02 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.006 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.006 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.006 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.006 03:37:02 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.006 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.006 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.006 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.006 03:37:02 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.006 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.006 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 
00:04:44.006 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.006 03:37:02 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.006 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.006 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.006 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.006 03:37:02 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.006 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.006 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.006 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.006 03:37:02 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.006 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.006 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.006 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.006 03:37:02 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.006 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.006 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.006 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.006 03:37:02 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.006 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.006 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.006 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.006 03:37:02 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.006 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.006 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.006 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.006 03:37:02 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.006 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.006 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.006 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.006 03:37:02 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.006 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.006 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.006 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.006 03:37:02 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.006 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.006 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.006 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.006 03:37:02 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.006 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.006 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.006 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.006 03:37:02 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.006 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.006 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.006 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.006 03:37:02 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.006 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.006 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.006 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.006 03:37:02 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.006 03:37:02 -- 
setup/common.sh@32 -- # continue 00:04:44.006 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.006 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.006 03:37:02 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.006 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.006 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.006 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.006 03:37:02 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.006 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.006 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.006 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.006 03:37:02 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.006 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.006 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.006 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.006 03:37:02 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.006 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.266 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.266 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.266 03:37:02 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.266 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.266 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.266 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.266 03:37:02 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.266 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.266 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.266 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.266 03:37:02 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.266 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.266 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.266 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.266 03:37:02 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.266 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.266 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.266 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.266 03:37:02 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.266 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.266 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.266 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.266 03:37:02 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.266 03:37:02 -- setup/common.sh@33 -- # echo 0 00:04:44.266 03:37:02 -- setup/common.sh@33 -- # return 0 00:04:44.266 03:37:02 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:44.266 03:37:02 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:44.266 03:37:02 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:44.266 03:37:02 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:44.266 03:37:02 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:44.266 03:37:02 -- setup/common.sh@18 -- # local node=1 00:04:44.266 03:37:02 -- setup/common.sh@19 -- # local var val 00:04:44.266 03:37:02 -- setup/common.sh@20 -- # local mem_f mem 00:04:44.266 03:37:02 -- 
setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:44.266 03:37:02 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:44.266 03:37:02 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:44.266 03:37:02 -- setup/common.sh@28 -- # mapfile -t mem 00:04:44.266 03:37:02 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:44.266 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.266 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.266 03:37:02 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27711824 kB' 'MemFree: 15911656 kB' 'MemUsed: 11800168 kB' 'SwapCached: 0 kB' 'Active: 5609040 kB' 'Inactive: 3398136 kB' 'Active(anon): 5325576 kB' 'Inactive(anon): 0 kB' 'Active(file): 283464 kB' 'Inactive(file): 3398136 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8725984 kB' 'Mapped: 170016 kB' 'AnonPages: 281308 kB' 'Shmem: 5044384 kB' 'KernelStack: 4952 kB' 'PageTables: 3664 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 105804 kB' 'Slab: 259044 kB' 'SReclaimable: 105804 kB' 'SUnreclaim: 153240 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:44.266 03:37:02 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.266 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.266 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.266 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.266 03:37:02 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.266 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.266 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.266 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.266 03:37:02 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.266 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.266 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.266 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.266 03:37:02 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.266 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.266 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.266 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.266 03:37:02 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.266 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.266 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.266 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.266 03:37:02 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.266 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.266 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.266 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.266 03:37:02 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.266 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.267 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.267 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.267 03:37:02 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.267 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.267 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 
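For orientation, the two per-node dumps in this trace report HugePages_Total of 512 on node0 and 1024 on node1, which is exactly the 1536 total asserted earlier at hugepages.sh@110 (nr_hugepages + surp + resv). The traced script derives these counts by parsing the per-node meminfo files; as an aside, the same per-node counts are also exposed directly under the hugepages sysfs directory, assuming the default 2048 kB hugepage size reported later in this log:

    # not part of the test: quick cross-check of the per-node 2 MB hugepage counts
    for n in /sys/devices/system/node/node[0-9]*; do
        printf '%s: %s\n' "${n##*/}" \
            "$(cat "$n/hugepages/hugepages-2048kB/nr_hugepages")"
    done
    # on the machine traced above this prints: node0: 512, node1: 1024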
00:04:44.267 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.267 03:37:02 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.267 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.267 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.267 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.267 03:37:02 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.267 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.267 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.267 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.267 03:37:02 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.267 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.267 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.267 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.267 03:37:02 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.267 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.267 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.267 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.267 03:37:02 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.267 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.267 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.267 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.267 03:37:02 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.267 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.267 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.267 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.267 03:37:02 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.267 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.267 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.267 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.267 03:37:02 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.267 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.267 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.267 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.267 03:37:02 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.267 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.267 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.267 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.267 03:37:02 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.267 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.267 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.267 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.267 03:37:02 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.267 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.267 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.267 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.267 03:37:02 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.267 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.267 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.267 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.267 03:37:02 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.267 03:37:02 -- 
setup/common.sh@32 -- # continue 00:04:44.267 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.267 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.267 03:37:02 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.267 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.267 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.267 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.267 03:37:02 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.267 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.267 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.267 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.267 03:37:02 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.267 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.267 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.267 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.267 03:37:02 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.267 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.267 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.267 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.267 03:37:02 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.267 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.267 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.267 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.267 03:37:02 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.267 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.267 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.267 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.267 03:37:02 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.267 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.267 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.267 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.267 03:37:02 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.267 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.267 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.267 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.267 03:37:02 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.267 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.267 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.267 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.267 03:37:02 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.267 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.267 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.267 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.267 03:37:02 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.267 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.267 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.267 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.267 03:37:02 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.267 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.267 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.267 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 
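The custom_alloc test finishes just below this point (node0=512 expecting 512, node1=1024 expecting 1024), and the no_shrink_alloc setup that follows requests 1024 pages of 2048 kB (2097152 kB total) pinned to node 0 only (node_ids=('0'), nr_hugepages=1024) before handing off to scripts/setup.sh. Outside of the SPDK scripts, the equivalent manual step is roughly the following sketch; it illustrates the kernel interface involved and is not a claim about what setup.sh itself executes:

    # allocate 1024 x 2 MB hugepages on NUMA node 0 only, then show that node's counters
    echo 1024 | sudo tee /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
    grep -E 'HugePages_(Total|Free)' /sys/devices/system/node/node0/meminfo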
00:04:44.267 03:37:02 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.267 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.267 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.267 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.267 03:37:02 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.267 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.267 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.267 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.267 03:37:02 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.267 03:37:02 -- setup/common.sh@32 -- # continue 00:04:44.267 03:37:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.267 03:37:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.267 03:37:02 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.267 03:37:02 -- setup/common.sh@33 -- # echo 0 00:04:44.267 03:37:02 -- setup/common.sh@33 -- # return 0 00:04:44.267 03:37:02 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:44.267 03:37:02 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:44.267 03:37:02 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:44.267 03:37:02 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:44.267 03:37:02 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:44.267 node0=512 expecting 512 00:04:44.267 03:37:02 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:44.267 03:37:02 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:44.267 03:37:02 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:44.267 03:37:02 -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:04:44.267 node1=1024 expecting 1024 00:04:44.267 03:37:02 -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:04:44.267 00:04:44.267 real 0m1.462s 00:04:44.267 user 0m0.610s 00:04:44.267 sys 0m0.819s 00:04:44.267 03:37:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:44.267 03:37:02 -- common/autotest_common.sh@10 -- # set +x 00:04:44.267 ************************************ 00:04:44.267 END TEST custom_alloc 00:04:44.267 ************************************ 00:04:44.267 03:37:02 -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:04:44.267 03:37:02 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:44.267 03:37:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:44.267 03:37:02 -- common/autotest_common.sh@10 -- # set +x 00:04:44.267 ************************************ 00:04:44.267 START TEST no_shrink_alloc 00:04:44.267 ************************************ 00:04:44.267 03:37:02 -- common/autotest_common.sh@1104 -- # no_shrink_alloc 00:04:44.267 03:37:02 -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:04:44.267 03:37:02 -- setup/hugepages.sh@49 -- # local size=2097152 00:04:44.267 03:37:02 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:44.267 03:37:02 -- setup/hugepages.sh@51 -- # shift 00:04:44.267 03:37:02 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:44.267 03:37:02 -- setup/hugepages.sh@52 -- # local node_ids 00:04:44.267 03:37:02 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:44.267 03:37:02 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:44.267 03:37:02 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:44.267 03:37:02 -- 
setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:44.267 03:37:02 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:44.267 03:37:02 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:44.267 03:37:02 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:44.267 03:37:02 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:44.267 03:37:02 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:44.267 03:37:02 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:44.267 03:37:02 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:44.267 03:37:02 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:44.267 03:37:02 -- setup/hugepages.sh@73 -- # return 0 00:04:44.267 03:37:02 -- setup/hugepages.sh@198 -- # setup output 00:04:44.267 03:37:02 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:44.267 03:37:02 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:45.200 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:45.200 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:45.200 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:45.201 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:45.201 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:45.201 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:45.201 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:45.201 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:45.201 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:45.201 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:45.201 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:45.201 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:45.201 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:45.201 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:45.201 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:45.201 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:45.201 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:45.463 03:37:04 -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:04:45.463 03:37:04 -- setup/hugepages.sh@89 -- # local node 00:04:45.463 03:37:04 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:45.463 03:37:04 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:45.463 03:37:04 -- setup/hugepages.sh@92 -- # local surp 00:04:45.463 03:37:04 -- setup/hugepages.sh@93 -- # local resv 00:04:45.463 03:37:04 -- setup/hugepages.sh@94 -- # local anon 00:04:45.463 03:37:04 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:45.463 03:37:04 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:45.463 03:37:04 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:45.463 03:37:04 -- setup/common.sh@18 -- # local node= 00:04:45.463 03:37:04 -- setup/common.sh@19 -- # local var val 00:04:45.463 03:37:04 -- setup/common.sh@20 -- # local mem_f mem 00:04:45.463 03:37:04 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:45.463 03:37:04 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:45.463 03:37:04 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:45.463 03:37:04 -- setup/common.sh@28 -- # mapfile -t mem 00:04:45.463 03:37:04 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:45.463 03:37:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.463 03:37:04 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:45.463 03:37:04 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43276656 kB' 'MemAvailable: 46786296 kB' 'Buffers: 2704 kB' 'Cached: 12754400 kB' 'SwapCached: 0 kB' 'Active: 9769620 kB' 'Inactive: 3506552 kB' 'Active(anon): 9375268 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 522276 kB' 'Mapped: 205708 kB' 'Shmem: 8856200 kB' 'KReclaimable: 204720 kB' 'Slab: 580848 kB' 'SReclaimable: 204720 kB' 'SUnreclaim: 376128 kB' 'KernelStack: 12816 kB' 'PageTables: 7956 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 10493236 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196644 kB' 'VmallocChunk: 0 kB' 'Percpu: 37632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1922652 kB' 'DirectMap2M: 15822848 kB' 'DirectMap1G: 51380224 kB' 00:04:45.463 03:37:04 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.463 03:37:04 -- setup/common.sh@32 -- # continue 00:04:45.463 03:37:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.463 03:37:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.463 03:37:04 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.463 03:37:04 -- setup/common.sh@32 -- # continue 00:04:45.463 03:37:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.463 03:37:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.463 03:37:04 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.463 03:37:04 -- setup/common.sh@32 -- # continue 00:04:45.463 03:37:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.463 03:37:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.463 03:37:04 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.463 03:37:04 -- setup/common.sh@32 -- # continue 00:04:45.463 03:37:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.463 03:37:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.463 03:37:04 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.463 03:37:04 -- setup/common.sh@32 -- # continue 00:04:45.463 03:37:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.463 03:37:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.463 03:37:04 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.463 03:37:04 -- setup/common.sh@32 -- # continue 00:04:45.463 03:37:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.463 03:37:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.463 03:37:04 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.463 03:37:04 -- setup/common.sh@32 -- # continue 00:04:45.463 03:37:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.463 03:37:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.463 03:37:04 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.463 03:37:04 -- setup/common.sh@32 -- # continue 00:04:45.463 03:37:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.463 03:37:04 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:45.463 03:37:04 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.463 03:37:04 -- setup/common.sh@32 -- # continue 00:04:45.463 03:37:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.463 03:37:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.463 03:37:04 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.463 03:37:04 -- setup/common.sh@32 -- # continue 00:04:45.463 03:37:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.463 03:37:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.463 03:37:04 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.463 03:37:04 -- setup/common.sh@32 -- # continue 00:04:45.463 03:37:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.463 03:37:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.463 03:37:04 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.463 03:37:04 -- setup/common.sh@32 -- # continue 00:04:45.463 03:37:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.463 03:37:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.463 03:37:04 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.463 03:37:04 -- setup/common.sh@32 -- # continue 00:04:45.463 03:37:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.463 03:37:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.463 03:37:04 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.463 03:37:04 -- setup/common.sh@32 -- # continue 00:04:45.463 03:37:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.464 03:37:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.464 03:37:04 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.464 03:37:04 -- setup/common.sh@32 -- # continue 00:04:45.464 03:37:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.464 03:37:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.464 03:37:04 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.464 03:37:04 -- setup/common.sh@32 -- # continue 00:04:45.464 03:37:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.464 03:37:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.464 03:37:04 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.464 03:37:04 -- setup/common.sh@32 -- # continue 00:04:45.464 03:37:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.464 03:37:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.464 03:37:04 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.464 03:37:04 -- setup/common.sh@32 -- # continue 00:04:45.464 03:37:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.464 03:37:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.464 03:37:04 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.464 03:37:04 -- setup/common.sh@32 -- # continue 00:04:45.464 03:37:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.464 03:37:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.464 03:37:04 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.464 03:37:04 -- setup/common.sh@32 -- # continue 00:04:45.464 03:37:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.464 03:37:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.464 03:37:04 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.464 03:37:04 -- setup/common.sh@32 -- # continue 00:04:45.464 
03:37:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.464 03:37:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.464 03:37:04 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.464 03:37:04 -- setup/common.sh@32 -- # continue 00:04:45.464 03:37:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.464 03:37:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.464 03:37:04 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.464 03:37:04 -- setup/common.sh@32 -- # continue 00:04:45.464 03:37:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.464 03:37:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.464 03:37:04 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.464 03:37:04 -- setup/common.sh@32 -- # continue 00:04:45.464 03:37:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.464 03:37:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.464 03:37:04 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.464 03:37:04 -- setup/common.sh@32 -- # continue 00:04:45.464 03:37:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.464 03:37:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.464 03:37:04 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.464 03:37:04 -- setup/common.sh@32 -- # continue 00:04:45.464 03:37:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.464 03:37:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.464 03:37:04 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.464 03:37:04 -- setup/common.sh@32 -- # continue 00:04:45.464 03:37:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.464 03:37:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.464 03:37:04 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.464 03:37:04 -- setup/common.sh@32 -- # continue 00:04:45.464 03:37:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.464 03:37:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.464 03:37:04 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.464 03:37:04 -- setup/common.sh@32 -- # continue 00:04:45.464 03:37:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.464 03:37:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.464 03:37:04 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.464 03:37:04 -- setup/common.sh@32 -- # continue 00:04:45.464 03:37:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.464 03:37:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.464 03:37:04 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.464 03:37:04 -- setup/common.sh@32 -- # continue 00:04:45.464 03:37:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.464 03:37:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.464 03:37:04 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.464 03:37:04 -- setup/common.sh@32 -- # continue 00:04:45.464 03:37:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.464 03:37:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.464 03:37:04 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.464 03:37:04 -- setup/common.sh@32 -- # continue 00:04:45.464 03:37:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.464 03:37:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.464 03:37:04 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:04:45.464 03:37:04 -- setup/common.sh@32 -- # continue 00:04:45.464 03:37:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.464 03:37:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.464 03:37:04 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.464 03:37:04 -- setup/common.sh@32 -- # continue 00:04:45.464 03:37:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.464 03:37:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.464 03:37:04 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.464 03:37:04 -- setup/common.sh@32 -- # continue 00:04:45.464 03:37:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.464 03:37:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.464 03:37:04 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.464 03:37:04 -- setup/common.sh@32 -- # continue 00:04:45.464 03:37:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.464 03:37:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.464 03:37:04 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.464 03:37:04 -- setup/common.sh@32 -- # continue 00:04:45.464 03:37:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.464 03:37:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.464 03:37:04 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.464 03:37:04 -- setup/common.sh@32 -- # continue 00:04:45.464 03:37:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.464 03:37:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.464 03:37:04 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.464 03:37:04 -- setup/common.sh@32 -- # continue 00:04:45.464 03:37:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.464 03:37:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.464 03:37:04 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.464 03:37:04 -- setup/common.sh@33 -- # echo 0 00:04:45.464 03:37:04 -- setup/common.sh@33 -- # return 0 00:04:45.464 03:37:04 -- setup/hugepages.sh@97 -- # anon=0 00:04:45.464 03:37:04 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:45.464 03:37:04 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:45.464 03:37:04 -- setup/common.sh@18 -- # local node= 00:04:45.464 03:37:04 -- setup/common.sh@19 -- # local var val 00:04:45.464 03:37:04 -- setup/common.sh@20 -- # local mem_f mem 00:04:45.464 03:37:04 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:45.464 03:37:04 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:45.464 03:37:04 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:45.464 03:37:04 -- setup/common.sh@28 -- # mapfile -t mem 00:04:45.464 03:37:04 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:45.464 03:37:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.464 03:37:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.464 03:37:04 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43281088 kB' 'MemAvailable: 46790728 kB' 'Buffers: 2704 kB' 'Cached: 12754404 kB' 'SwapCached: 0 kB' 'Active: 9770224 kB' 'Inactive: 3506552 kB' 'Active(anon): 9375872 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 522840 kB' 'Mapped: 205784 kB' 'Shmem: 8856204 kB' 'KReclaimable: 
204720 kB' 'Slab: 580856 kB' 'SReclaimable: 204720 kB' 'SUnreclaim: 376136 kB' 'KernelStack: 12816 kB' 'PageTables: 7956 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 10493248 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196612 kB' 'VmallocChunk: 0 kB' 'Percpu: 37632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1922652 kB' 'DirectMap2M: 15822848 kB' 'DirectMap1G: 51380224 kB' 00:04:45.464 03:37:04 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.464 03:37:04 -- setup/common.sh@32 -- # continue 00:04:45.464 03:37:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.464 03:37:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.464 03:37:04 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.464 03:37:04 -- setup/common.sh@32 -- # continue 00:04:45.464 03:37:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.464 03:37:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.464 03:37:04 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.464 03:37:04 -- setup/common.sh@32 -- # continue 00:04:45.464 03:37:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.464 03:37:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.464 03:37:04 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.464 03:37:04 -- setup/common.sh@32 -- # continue 00:04:45.464 03:37:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.464 03:37:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.464 03:37:04 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.464 03:37:04 -- setup/common.sh@32 -- # continue 00:04:45.464 03:37:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.464 03:37:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.464 03:37:04 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.464 03:37:04 -- setup/common.sh@32 -- # continue 00:04:45.464 03:37:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.464 03:37:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.464 03:37:04 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.464 03:37:04 -- setup/common.sh@32 -- # continue 00:04:45.464 03:37:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.464 03:37:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.464 03:37:04 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.464 03:37:04 -- setup/common.sh@32 -- # continue 00:04:45.464 03:37:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.464 03:37:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.464 03:37:04 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.464 03:37:04 -- setup/common.sh@32 -- # continue 00:04:45.465 03:37:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.465 03:37:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.465 03:37:04 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.465 03:37:04 -- setup/common.sh@32 -- # continue 00:04:45.465 03:37:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.465 03:37:04 -- setup/common.sh@31 -- # read -r var 
val _ 00:04:45.465 03:37:04 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.465 03:37:04 -- setup/common.sh@32 -- # continue 00:04:45.465 03:37:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.465 03:37:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.465 03:37:04 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.465 03:37:04 -- setup/common.sh@32 -- # continue 00:04:45.465 03:37:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.465 03:37:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.465 03:37:04 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.465 03:37:04 -- setup/common.sh@32 -- # continue 00:04:45.465 03:37:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.465 03:37:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.465 03:37:04 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.465 03:37:04 -- setup/common.sh@32 -- # continue 00:04:45.465 03:37:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.465 03:37:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.465 03:37:04 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.465 03:37:04 -- setup/common.sh@32 -- # continue 00:04:45.465 03:37:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.465 03:37:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.465 03:37:04 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.465 03:37:04 -- setup/common.sh@32 -- # continue 00:04:45.465 03:37:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.465 03:37:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.465 03:37:04 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.465 03:37:04 -- setup/common.sh@32 -- # continue 00:04:45.465 03:37:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.465 03:37:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.465 03:37:04 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.465 03:37:04 -- setup/common.sh@32 -- # continue 00:04:45.465 03:37:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.465 03:37:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.465 03:37:04 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.465 03:37:04 -- setup/common.sh@32 -- # continue 00:04:45.465 03:37:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.465 03:37:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.465 03:37:04 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.465 03:37:04 -- setup/common.sh@32 -- # continue 00:04:45.465 03:37:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.465 03:37:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.465 03:37:04 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.465 03:37:04 -- setup/common.sh@32 -- # continue 00:04:45.465 03:37:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.465 03:37:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.465 03:37:04 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.465 03:37:04 -- setup/common.sh@32 -- # continue 00:04:45.465 03:37:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.465 03:37:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.465 03:37:04 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.465 03:37:04 -- setup/common.sh@32 -- # continue 00:04:45.465 03:37:04 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:45.465 03:37:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.465 03:37:04 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.465 03:37:04 -- setup/common.sh@32 -- # continue 00:04:45.465 03:37:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.465 03:37:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.465 03:37:04 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.465 03:37:04 -- setup/common.sh@32 -- # continue 00:04:45.465 03:37:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.465 03:37:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.465 03:37:04 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.465 03:37:04 -- setup/common.sh@32 -- # continue 00:04:45.465 03:37:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.465 03:37:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.465 03:37:04 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.465 03:37:04 -- setup/common.sh@32 -- # continue 00:04:45.465 03:37:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.465 03:37:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.465 03:37:04 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.465 03:37:04 -- setup/common.sh@32 -- # continue 00:04:45.465 03:37:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.465 03:37:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.465 03:37:04 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.465 03:37:04 -- setup/common.sh@32 -- # continue 00:04:45.465 03:37:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.465 03:37:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.465 03:37:04 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.465 03:37:04 -- setup/common.sh@32 -- # continue 00:04:45.465 03:37:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.465 03:37:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.465 03:37:04 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.465 03:37:04 -- setup/common.sh@32 -- # continue 00:04:45.465 03:37:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.465 03:37:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.465 03:37:04 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.465 03:37:04 -- setup/common.sh@32 -- # continue 00:04:45.465 03:37:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.465 03:37:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.465 03:37:04 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.465 03:37:04 -- setup/common.sh@32 -- # continue 00:04:45.465 03:37:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.465 03:37:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.465 03:37:04 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.465 03:37:04 -- setup/common.sh@32 -- # continue 00:04:45.465 03:37:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.465 03:37:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.465 03:37:04 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.465 03:37:04 -- setup/common.sh@32 -- # continue 00:04:45.465 03:37:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.465 03:37:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.465 03:37:04 -- setup/common.sh@32 -- # [[ VmallocTotal == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.465 03:37:04 -- setup/common.sh@32 -- # continue 00:04:45.465 03:37:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.465 03:37:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.465 03:37:04 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.465 03:37:04 -- setup/common.sh@32 -- # continue 00:04:45.465 03:37:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.465 03:37:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.465 03:37:04 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.465 03:37:04 -- setup/common.sh@32 -- # continue 00:04:45.465 03:37:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.465 03:37:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.465 03:37:04 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.465 03:37:04 -- setup/common.sh@32 -- # continue 00:04:45.465 03:37:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.465 03:37:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.465 03:37:04 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.465 03:37:04 -- setup/common.sh@32 -- # continue 00:04:45.465 03:37:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.465 03:37:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.465 03:37:04 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.465 03:37:04 -- setup/common.sh@32 -- # continue 00:04:45.465 03:37:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.465 03:37:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.465 03:37:04 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.465 03:37:04 -- setup/common.sh@32 -- # continue 00:04:45.465 03:37:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.465 03:37:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.465 03:37:04 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.465 03:37:04 -- setup/common.sh@32 -- # continue 00:04:45.465 03:37:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.465 03:37:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.465 03:37:04 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.465 03:37:04 -- setup/common.sh@32 -- # continue 00:04:45.465 03:37:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.465 03:37:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.465 03:37:04 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.465 03:37:04 -- setup/common.sh@32 -- # continue 00:04:45.465 03:37:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.465 03:37:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.465 03:37:04 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.465 03:37:04 -- setup/common.sh@32 -- # continue 00:04:45.465 03:37:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.465 03:37:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.465 03:37:04 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.465 03:37:04 -- setup/common.sh@32 -- # continue 00:04:45.465 03:37:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.465 03:37:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.465 03:37:04 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.465 03:37:04 -- setup/common.sh@32 -- # continue 00:04:45.465 03:37:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.465 
03:37:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.465 03:37:04 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.465 03:37:04 -- setup/common.sh@32 -- # continue 00:04:45.465 03:37:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.465 03:37:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.465 03:37:04 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.465 03:37:04 -- setup/common.sh@32 -- # continue 00:04:45.465 03:37:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.465 03:37:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.465 03:37:04 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.465 03:37:04 -- setup/common.sh@32 -- # continue 00:04:45.465 03:37:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.465 03:37:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.465 03:37:04 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.465 03:37:04 -- setup/common.sh@33 -- # echo 0 00:04:45.465 03:37:04 -- setup/common.sh@33 -- # return 0 00:04:45.465 03:37:04 -- setup/hugepages.sh@99 -- # surp=0 00:04:45.466 03:37:04 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:45.466 03:37:04 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:45.466 03:37:04 -- setup/common.sh@18 -- # local node= 00:04:45.466 03:37:04 -- setup/common.sh@19 -- # local var val 00:04:45.466 03:37:04 -- setup/common.sh@20 -- # local mem_f mem 00:04:45.466 03:37:04 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:45.466 03:37:04 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:45.466 03:37:04 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:45.466 03:37:04 -- setup/common.sh@28 -- # mapfile -t mem 00:04:45.466 03:37:04 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:45.466 03:37:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.466 03:37:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.466 03:37:04 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43280588 kB' 'MemAvailable: 46790228 kB' 'Buffers: 2704 kB' 'Cached: 12754416 kB' 'SwapCached: 0 kB' 'Active: 9769844 kB' 'Inactive: 3506552 kB' 'Active(anon): 9375492 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 522436 kB' 'Mapped: 205680 kB' 'Shmem: 8856216 kB' 'KReclaimable: 204720 kB' 'Slab: 580856 kB' 'SReclaimable: 204720 kB' 'SUnreclaim: 376136 kB' 'KernelStack: 12800 kB' 'PageTables: 7896 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 10493264 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196612 kB' 'VmallocChunk: 0 kB' 'Percpu: 37632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1922652 kB' 'DirectMap2M: 15822848 kB' 'DirectMap1G: 51380224 kB' 00:04:45.466 03:37:04 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.466 03:37:04 -- setup/common.sh@32 -- # continue 00:04:45.466 03:37:04 -- setup/common.sh@31 
-- # IFS=': ' 00:04:45.466 03:37:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.466 03:37:04 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.466 03:37:04 -- setup/common.sh@32 -- # continue 00:04:45.466 03:37:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.466 03:37:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.466 03:37:04 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.466 03:37:04 -- setup/common.sh@32 -- # continue 00:04:45.466 03:37:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.466 03:37:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.466 03:37:04 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.466 03:37:04 -- setup/common.sh@32 -- # continue 00:04:45.466 03:37:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.466 03:37:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.466 03:37:04 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.466 03:37:04 -- setup/common.sh@32 -- # continue 00:04:45.466 03:37:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.466 03:37:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.466 03:37:04 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.466 03:37:04 -- setup/common.sh@32 -- # continue 00:04:45.466 03:37:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.466 03:37:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.466 03:37:04 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.466 03:37:04 -- setup/common.sh@32 -- # continue 00:04:45.466 03:37:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.466 03:37:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.466 03:37:04 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.466 03:37:04 -- setup/common.sh@32 -- # continue 00:04:45.466 03:37:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.466 03:37:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.466 03:37:04 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.466 03:37:04 -- setup/common.sh@32 -- # continue 00:04:45.466 03:37:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.466 03:37:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.466 03:37:04 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.466 03:37:04 -- setup/common.sh@32 -- # continue 00:04:45.466 03:37:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.466 03:37:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.466 03:37:04 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.466 03:37:04 -- setup/common.sh@32 -- # continue 00:04:45.466 03:37:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.466 03:37:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.466 03:37:04 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.466 03:37:04 -- setup/common.sh@32 -- # continue 00:04:45.466 03:37:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.466 03:37:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.466 03:37:04 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.466 03:37:04 -- setup/common.sh@32 -- # continue 00:04:45.466 03:37:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.466 03:37:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.466 03:37:04 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:04:45.466 03:37:04 -- setup/common.sh@32 -- # continue 00:04:45.466 03:37:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.466 03:37:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.466 03:37:04 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.466 03:37:04 -- setup/common.sh@32 -- # continue 00:04:45.466 03:37:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.466 03:37:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.466 03:37:04 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.466 03:37:04 -- setup/common.sh@32 -- # continue 00:04:45.466 03:37:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.466 03:37:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.466 03:37:04 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.466 03:37:04 -- setup/common.sh@32 -- # continue 00:04:45.466 03:37:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.466 03:37:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.466 03:37:04 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.466 03:37:04 -- setup/common.sh@32 -- # continue 00:04:45.466 03:37:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.466 03:37:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.466 03:37:04 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.466 03:37:04 -- setup/common.sh@32 -- # continue 00:04:45.466 03:37:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.466 03:37:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.466 03:37:04 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.466 03:37:04 -- setup/common.sh@32 -- # continue 00:04:45.466 03:37:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.466 03:37:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.466 03:37:04 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.466 03:37:04 -- setup/common.sh@32 -- # continue 00:04:45.466 03:37:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.466 03:37:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.466 03:37:04 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.466 03:37:04 -- setup/common.sh@32 -- # continue 00:04:45.466 03:37:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.466 03:37:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.466 03:37:04 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.466 03:37:04 -- setup/common.sh@32 -- # continue 00:04:45.466 03:37:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.466 03:37:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.466 03:37:04 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.466 03:37:04 -- setup/common.sh@32 -- # continue 00:04:45.466 03:37:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.466 03:37:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.466 03:37:04 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.466 03:37:04 -- setup/common.sh@32 -- # continue 00:04:45.466 03:37:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.466 03:37:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.466 03:37:04 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.466 03:37:04 -- setup/common.sh@32 -- # continue 00:04:45.466 03:37:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.466 03:37:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.466 03:37:04 -- 
setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.466 03:37:04 -- setup/common.sh@32 -- # continue 00:04:45.466 03:37:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.466 03:37:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.466 03:37:04 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.466 03:37:04 -- setup/common.sh@32 -- # continue 00:04:45.466 03:37:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.466 03:37:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.466 03:37:04 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.466 03:37:04 -- setup/common.sh@32 -- # continue 00:04:45.466 03:37:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.466 03:37:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.466 03:37:04 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.466 03:37:04 -- setup/common.sh@32 -- # continue 00:04:45.466 03:37:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.466 03:37:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.466 03:37:04 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.466 03:37:04 -- setup/common.sh@32 -- # continue 00:04:45.466 03:37:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.466 03:37:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.466 03:37:04 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.466 03:37:04 -- setup/common.sh@32 -- # continue 00:04:45.466 03:37:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.466 03:37:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.466 03:37:04 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.466 03:37:04 -- setup/common.sh@32 -- # continue 00:04:45.466 03:37:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.466 03:37:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.466 03:37:04 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.466 03:37:04 -- setup/common.sh@32 -- # continue 00:04:45.466 03:37:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.466 03:37:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.466 03:37:04 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.466 03:37:04 -- setup/common.sh@32 -- # continue 00:04:45.466 03:37:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.466 03:37:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.466 03:37:04 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.466 03:37:04 -- setup/common.sh@32 -- # continue 00:04:45.466 03:37:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.466 03:37:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.466 03:37:04 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.466 03:37:04 -- setup/common.sh@32 -- # continue 00:04:45.467 03:37:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.467 03:37:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.467 03:37:04 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.467 03:37:04 -- setup/common.sh@32 -- # continue 00:04:45.467 03:37:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.467 03:37:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.467 03:37:04 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.467 03:37:04 -- setup/common.sh@32 -- # continue 00:04:45.467 03:37:04 -- setup/common.sh@31 
-- # IFS=': ' 00:04:45.467 03:37:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.467 03:37:04 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.467 03:37:04 -- setup/common.sh@32 -- # continue 00:04:45.467 03:37:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.467 03:37:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.467 03:37:04 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.467 03:37:04 -- setup/common.sh@32 -- # continue 00:04:45.467 03:37:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.467 03:37:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.467 03:37:04 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.467 03:37:04 -- setup/common.sh@32 -- # continue 00:04:45.467 03:37:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.467 03:37:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.467 03:37:04 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.467 03:37:04 -- setup/common.sh@32 -- # continue 00:04:45.467 03:37:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.467 03:37:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.467 03:37:04 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.467 03:37:04 -- setup/common.sh@32 -- # continue 00:04:45.467 03:37:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.467 03:37:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.467 03:37:04 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.467 03:37:04 -- setup/common.sh@32 -- # continue 00:04:45.467 03:37:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.467 03:37:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.467 03:37:04 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.467 03:37:04 -- setup/common.sh@32 -- # continue 00:04:45.467 03:37:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.467 03:37:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.467 03:37:04 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.467 03:37:04 -- setup/common.sh@32 -- # continue 00:04:45.467 03:37:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.467 03:37:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.467 03:37:04 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.467 03:37:04 -- setup/common.sh@32 -- # continue 00:04:45.467 03:37:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.467 03:37:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.467 03:37:04 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.467 03:37:04 -- setup/common.sh@32 -- # continue 00:04:45.467 03:37:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.467 03:37:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.467 03:37:04 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.467 03:37:04 -- setup/common.sh@32 -- # continue 00:04:45.467 03:37:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.467 03:37:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.467 03:37:04 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.467 03:37:04 -- setup/common.sh@33 -- # echo 0 00:04:45.467 03:37:04 -- setup/common.sh@33 -- # return 0 00:04:45.467 03:37:04 -- setup/hugepages.sh@100 -- # resv=0 00:04:45.467 03:37:04 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 
00:04:45.467 nr_hugepages=1024 00:04:45.467 03:37:04 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:45.467 resv_hugepages=0 00:04:45.467 03:37:04 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:45.467 surplus_hugepages=0 00:04:45.467 03:37:04 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:45.467 anon_hugepages=0 00:04:45.467 03:37:04 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:45.467 03:37:04 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:45.467 03:37:04 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:45.467 03:37:04 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:45.467 03:37:04 -- setup/common.sh@18 -- # local node= 00:04:45.467 03:37:04 -- setup/common.sh@19 -- # local var val 00:04:45.467 03:37:04 -- setup/common.sh@20 -- # local mem_f mem 00:04:45.467 03:37:04 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:45.467 03:37:04 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:45.467 03:37:04 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:45.467 03:37:04 -- setup/common.sh@28 -- # mapfile -t mem 00:04:45.467 03:37:04 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:45.467 03:37:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.467 03:37:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.467 03:37:04 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43279584 kB' 'MemAvailable: 46789224 kB' 'Buffers: 2704 kB' 'Cached: 12754428 kB' 'SwapCached: 0 kB' 'Active: 9769860 kB' 'Inactive: 3506552 kB' 'Active(anon): 9375508 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 522392 kB' 'Mapped: 205680 kB' 'Shmem: 8856228 kB' 'KReclaimable: 204720 kB' 'Slab: 580856 kB' 'SReclaimable: 204720 kB' 'SUnreclaim: 376136 kB' 'KernelStack: 12784 kB' 'PageTables: 7844 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 10493276 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196612 kB' 'VmallocChunk: 0 kB' 'Percpu: 37632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1922652 kB' 'DirectMap2M: 15822848 kB' 'DirectMap1G: 51380224 kB' 00:04:45.467 03:37:04 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.467 03:37:04 -- setup/common.sh@32 -- # continue 00:04:45.467 03:37:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.467 03:37:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.467 03:37:04 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.467 03:37:04 -- setup/common.sh@32 -- # continue 00:04:45.467 03:37:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.467 03:37:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.467 03:37:04 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.467 03:37:04 -- setup/common.sh@32 -- # continue 00:04:45.467 03:37:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.467 03:37:04 -- setup/common.sh@31 -- # read -r var val _ 
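The backslash-heavy comparisons throughout this trace, for example [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]], are not corrupted output: when the right-hand side of == inside [[ ]] is quoted, bash xtrace prints it with every character backslash-escaped to mark it as a literal string rather than a glob pattern. A minimal reproduction of that rendering (the variable name key is illustrative, not taken from setup/common.sh):

  set -x
  key='HugePages_Total'
  [[ SwapFree == "$key" ]]    # traced as: [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
  set +x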
00:04:45.467 03:37:04 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.467 03:37:04 -- setup/common.sh@32 -- # continue 00:04:45.467 03:37:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.467 03:37:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.467 03:37:04 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.467 03:37:04 -- setup/common.sh@32 -- # continue 00:04:45.467 03:37:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.467 03:37:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.467 03:37:04 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.467 03:37:04 -- setup/common.sh@32 -- # continue 00:04:45.467 03:37:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.467 03:37:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.467 03:37:04 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.467 03:37:04 -- setup/common.sh@32 -- # continue 00:04:45.467 03:37:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.467 03:37:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.467 03:37:04 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.467 03:37:04 -- setup/common.sh@32 -- # continue 00:04:45.467 03:37:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.467 03:37:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.467 03:37:04 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.467 03:37:04 -- setup/common.sh@32 -- # continue 00:04:45.467 03:37:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.467 03:37:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.467 03:37:04 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.467 03:37:04 -- setup/common.sh@32 -- # continue 00:04:45.467 03:37:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.467 03:37:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.467 03:37:04 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.467 03:37:04 -- setup/common.sh@32 -- # continue 00:04:45.467 03:37:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.467 03:37:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.467 03:37:04 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.467 03:37:04 -- setup/common.sh@32 -- # continue 00:04:45.467 03:37:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.468 03:37:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.468 03:37:04 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.468 03:37:04 -- setup/common.sh@32 -- # continue 00:04:45.468 03:37:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.468 03:37:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.468 03:37:04 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.468 03:37:04 -- setup/common.sh@32 -- # continue 00:04:45.468 03:37:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.468 03:37:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.468 03:37:04 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.468 03:37:04 -- setup/common.sh@32 -- # continue 00:04:45.468 03:37:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.468 03:37:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.468 03:37:04 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.468 03:37:04 -- setup/common.sh@32 -- # continue 
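Taken together, the repeating IFS=': ' / read -r var val _ / continue steps above are one helper walking /proc/meminfo a field at a time and echoing the value of the single field it was asked for, which the caller then stores (surp=0, resv=0, and so on). A condensed sketch of that pattern, reconstructed from the trace rather than copied from setup/common.sh (the function name and the layout of the final consistency check are assumptions):

  #!/usr/bin/env bash
  # Echo the value of one /proc/meminfo field, e.g. HugePages_Total -> 1024.
  get_meminfo_value() {
      local get=$1 var val _
      while IFS=': ' read -r var val _; do
          [[ $var == "$get" ]] || continue   # skip every field except the requested one
          echo "$val"
          return 0
      done < /proc/meminfo
      return 1
  }

  nr_hugepages=1024                            # requested pool size used by this test run
  surp=$(get_meminfo_value HugePages_Surp)
  resv=$(get_meminfo_value HugePages_Rsvd)
  total=$(get_meminfo_value HugePages_Total)
  # the check traced at hugepages.sh@107/@110: allocated == requested + surplus + reserved
  (( total == nr_hugepages + surp + resv )) && echo "hugepage totals consistent"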
00:04:45.468 03:37:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.468 03:37:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.468 03:37:04 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.468 03:37:04 -- setup/common.sh@32 -- # continue 00:04:45.468 03:37:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.468 03:37:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.468 03:37:04 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.468 03:37:04 -- setup/common.sh@32 -- # continue 00:04:45.468 03:37:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.468 03:37:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.468 03:37:04 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.468 03:37:04 -- setup/common.sh@32 -- # continue 00:04:45.468 03:37:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.468 03:37:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.468 03:37:04 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.468 03:37:04 -- setup/common.sh@32 -- # continue 00:04:45.468 03:37:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.468 03:37:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.468 03:37:04 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.468 03:37:04 -- setup/common.sh@32 -- # continue 00:04:45.468 03:37:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.468 03:37:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.468 03:37:04 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.468 03:37:04 -- setup/common.sh@32 -- # continue 00:04:45.468 03:37:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.468 03:37:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.468 03:37:04 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.468 03:37:04 -- setup/common.sh@32 -- # continue 00:04:45.468 03:37:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.468 03:37:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.468 03:37:04 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.468 03:37:04 -- setup/common.sh@32 -- # continue 00:04:45.468 03:37:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.468 03:37:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.468 03:37:04 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.468 03:37:04 -- setup/common.sh@32 -- # continue 00:04:45.468 03:37:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.468 03:37:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.468 03:37:04 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.468 03:37:04 -- setup/common.sh@32 -- # continue 00:04:45.468 03:37:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.468 03:37:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.468 03:37:04 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.468 03:37:04 -- setup/common.sh@32 -- # continue 00:04:45.468 03:37:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.468 03:37:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.468 03:37:04 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.468 03:37:04 -- setup/common.sh@32 -- # continue 00:04:45.468 03:37:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.468 03:37:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.468 03:37:04 -- setup/common.sh@32 -- # [[ 
PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.468 03:37:04 -- setup/common.sh@32 -- # continue 00:04:45.468 03:37:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.468 03:37:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.468 03:37:04 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.468 03:37:04 -- setup/common.sh@32 -- # continue 00:04:45.468 03:37:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.468 03:37:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.468 03:37:04 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.468 03:37:04 -- setup/common.sh@32 -- # continue 00:04:45.468 03:37:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.468 03:37:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.468 03:37:04 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.468 03:37:04 -- setup/common.sh@32 -- # continue 00:04:45.468 03:37:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.468 03:37:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.468 03:37:04 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.468 03:37:04 -- setup/common.sh@32 -- # continue 00:04:45.468 03:37:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.468 03:37:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.468 03:37:04 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.468 03:37:04 -- setup/common.sh@32 -- # continue 00:04:45.468 03:37:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.468 03:37:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.468 03:37:04 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.468 03:37:04 -- setup/common.sh@32 -- # continue 00:04:45.468 03:37:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.468 03:37:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.468 03:37:04 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.468 03:37:04 -- setup/common.sh@32 -- # continue 00:04:45.468 03:37:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.468 03:37:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.468 03:37:04 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.468 03:37:04 -- setup/common.sh@32 -- # continue 00:04:45.468 03:37:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.468 03:37:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.468 03:37:04 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.468 03:37:04 -- setup/common.sh@32 -- # continue 00:04:45.468 03:37:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.468 03:37:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.468 03:37:04 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.468 03:37:04 -- setup/common.sh@32 -- # continue 00:04:45.468 03:37:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.468 03:37:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.468 03:37:04 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.468 03:37:04 -- setup/common.sh@32 -- # continue 00:04:45.468 03:37:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.468 03:37:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.468 03:37:04 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.468 03:37:04 -- setup/common.sh@32 -- # continue 00:04:45.468 03:37:04 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:45.468 03:37:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.468 03:37:04 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.468 03:37:04 -- setup/common.sh@32 -- # continue 00:04:45.468 03:37:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.468 03:37:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.468 03:37:04 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.468 03:37:04 -- setup/common.sh@32 -- # continue 00:04:45.468 03:37:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.468 03:37:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.468 03:37:04 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.468 03:37:04 -- setup/common.sh@32 -- # continue 00:04:45.468 03:37:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.468 03:37:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.468 03:37:04 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.468 03:37:04 -- setup/common.sh@32 -- # continue 00:04:45.468 03:37:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.468 03:37:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.468 03:37:04 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.468 03:37:04 -- setup/common.sh@32 -- # continue 00:04:45.468 03:37:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.468 03:37:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.468 03:37:04 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.468 03:37:04 -- setup/common.sh@32 -- # continue 00:04:45.468 03:37:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.468 03:37:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.468 03:37:04 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.468 03:37:04 -- setup/common.sh@32 -- # continue 00:04:45.468 03:37:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.468 03:37:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.468 03:37:04 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.468 03:37:04 -- setup/common.sh@33 -- # echo 1024 00:04:45.468 03:37:04 -- setup/common.sh@33 -- # return 0 00:04:45.468 03:37:04 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:45.468 03:37:04 -- setup/hugepages.sh@112 -- # get_nodes 00:04:45.468 03:37:04 -- setup/hugepages.sh@27 -- # local node 00:04:45.468 03:37:04 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:45.468 03:37:04 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:45.468 03:37:04 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:45.468 03:37:04 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:45.468 03:37:04 -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:45.468 03:37:04 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:45.468 03:37:04 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:45.468 03:37:04 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:45.468 03:37:04 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:45.468 03:37:04 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:45.468 03:37:04 -- setup/common.sh@18 -- # local node=0 00:04:45.468 03:37:04 -- setup/common.sh@19 -- # local var val 00:04:45.468 03:37:04 -- setup/common.sh@20 -- # local mem_f mem 
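From this point the same lookup is repeated per NUMA node: get_nodes globs /sys/devices/system/node/node+([0-9]) to build the node list, and get_meminfo is then pointed at node0's own meminfo file (whose lines carry a "Node 0 " prefix that the script strips) instead of /proc/meminfo, producing the "node0=1024 expecting 1024" summary further on. A rough per-node sketch under the same assumptions (helper names are made up; the sysfs paths are the ones visible in the trace):

  #!/usr/bin/env bash
  # Per-node meminfo lines look like "Node 0 HugePages_Total:  1024".
  get_node_hugepages() {
      local node=$1 line
      while read -r line; do
          [[ $line == *HugePages_Total:* ]] || continue
          echo "${line##* }"                 # last field, the page count
          return 0
      done < "/sys/devices/system/node/node${node}/meminfo"
      return 1
  }

  # enumerate nodes; the traced script uses the extglob form node+([0-9]) for the same purpose
  for d in /sys/devices/system/node/node[0-9]*; do
      node=${d##*node}
      echo "node${node}=$(get_node_hugepages "$node")"
  done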
00:04:45.468 03:37:04 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:45.468 03:37:04 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:45.468 03:37:04 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:45.468 03:37:04 -- setup/common.sh@28 -- # mapfile -t mem 00:04:45.468 03:37:04 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:45.468 03:37:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.468 03:37:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.469 03:37:04 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 25265836 kB' 'MemUsed: 7564048 kB' 'SwapCached: 0 kB' 'Active: 4160608 kB' 'Inactive: 108416 kB' 'Active(anon): 4049720 kB' 'Inactive(anon): 0 kB' 'Active(file): 110888 kB' 'Inactive(file): 108416 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 4031156 kB' 'Mapped: 35664 kB' 'AnonPages: 241052 kB' 'Shmem: 3811852 kB' 'KernelStack: 7880 kB' 'PageTables: 4332 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 98916 kB' 'Slab: 322060 kB' 'SReclaimable: 98916 kB' 'SUnreclaim: 223144 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:45.469 03:37:04 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.469 03:37:04 -- setup/common.sh@32 -- # continue 00:04:45.469 03:37:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.469 03:37:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.469 03:37:04 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.469 03:37:04 -- setup/common.sh@32 -- # continue 00:04:45.469 03:37:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.469 03:37:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.469 03:37:04 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.469 03:37:04 -- setup/common.sh@32 -- # continue 00:04:45.469 03:37:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.469 03:37:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.469 03:37:04 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.469 03:37:04 -- setup/common.sh@32 -- # continue 00:04:45.469 03:37:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.469 03:37:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.469 03:37:04 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.469 03:37:04 -- setup/common.sh@32 -- # continue 00:04:45.469 03:37:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.469 03:37:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.469 03:37:04 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.469 03:37:04 -- setup/common.sh@32 -- # continue 00:04:45.469 03:37:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.469 03:37:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.469 03:37:04 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.469 03:37:04 -- setup/common.sh@32 -- # continue 00:04:45.469 03:37:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.469 03:37:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.469 03:37:04 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.469 03:37:04 -- setup/common.sh@32 -- # continue 00:04:45.469 03:37:04 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:45.469 03:37:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.469 03:37:04 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.469 03:37:04 -- setup/common.sh@32 -- # continue 00:04:45.469 03:37:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.469 03:37:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.469 03:37:04 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.469 03:37:04 -- setup/common.sh@32 -- # continue 00:04:45.469 03:37:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.469 03:37:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.469 03:37:04 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.469 03:37:04 -- setup/common.sh@32 -- # continue 00:04:45.469 03:37:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.469 03:37:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.469 03:37:04 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.469 03:37:04 -- setup/common.sh@32 -- # continue 00:04:45.469 03:37:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.469 03:37:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.469 03:37:04 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.469 03:37:04 -- setup/common.sh@32 -- # continue 00:04:45.469 03:37:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.469 03:37:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.469 03:37:04 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.469 03:37:04 -- setup/common.sh@32 -- # continue 00:04:45.469 03:37:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.469 03:37:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.469 03:37:04 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.469 03:37:04 -- setup/common.sh@32 -- # continue 00:04:45.469 03:37:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.469 03:37:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.469 03:37:04 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.469 03:37:04 -- setup/common.sh@32 -- # continue 00:04:45.469 03:37:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.469 03:37:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.469 03:37:04 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.469 03:37:04 -- setup/common.sh@32 -- # continue 00:04:45.469 03:37:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.469 03:37:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.469 03:37:04 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.469 03:37:04 -- setup/common.sh@32 -- # continue 00:04:45.469 03:37:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.469 03:37:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.469 03:37:04 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.469 03:37:04 -- setup/common.sh@32 -- # continue 00:04:45.469 03:37:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.469 03:37:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.469 03:37:04 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.469 03:37:04 -- setup/common.sh@32 -- # continue 00:04:45.469 03:37:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.469 03:37:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.469 03:37:04 -- setup/common.sh@32 -- # [[ SecPageTables == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.469 03:37:04 -- setup/common.sh@32 -- # continue 00:04:45.469 03:37:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.469 03:37:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.469 03:37:04 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.469 03:37:04 -- setup/common.sh@32 -- # continue 00:04:45.469 03:37:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.469 03:37:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.469 03:37:04 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.469 03:37:04 -- setup/common.sh@32 -- # continue 00:04:45.469 03:37:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.469 03:37:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.469 03:37:04 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.469 03:37:04 -- setup/common.sh@32 -- # continue 00:04:45.469 03:37:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.469 03:37:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.469 03:37:04 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.469 03:37:04 -- setup/common.sh@32 -- # continue 00:04:45.469 03:37:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.469 03:37:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.469 03:37:04 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.469 03:37:04 -- setup/common.sh@32 -- # continue 00:04:45.469 03:37:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.469 03:37:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.469 03:37:04 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.469 03:37:04 -- setup/common.sh@32 -- # continue 00:04:45.469 03:37:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.469 03:37:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.469 03:37:04 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.469 03:37:04 -- setup/common.sh@32 -- # continue 00:04:45.469 03:37:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.469 03:37:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.469 03:37:04 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.469 03:37:04 -- setup/common.sh@32 -- # continue 00:04:45.469 03:37:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.469 03:37:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.469 03:37:04 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.469 03:37:04 -- setup/common.sh@32 -- # continue 00:04:45.469 03:37:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.469 03:37:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.469 03:37:04 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.469 03:37:04 -- setup/common.sh@32 -- # continue 00:04:45.469 03:37:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.469 03:37:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.469 03:37:04 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.469 03:37:04 -- setup/common.sh@32 -- # continue 00:04:45.469 03:37:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.469 03:37:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.469 03:37:04 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.469 03:37:04 -- setup/common.sh@32 -- # continue 00:04:45.469 03:37:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.469 
03:37:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.469 03:37:04 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.469 03:37:04 -- setup/common.sh@32 -- # continue 00:04:45.469 03:37:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.469 03:37:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.469 03:37:04 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.469 03:37:04 -- setup/common.sh@32 -- # continue 00:04:45.469 03:37:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.469 03:37:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.469 03:37:04 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.469 03:37:04 -- setup/common.sh@32 -- # continue 00:04:45.469 03:37:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.469 03:37:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.469 03:37:04 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.469 03:37:04 -- setup/common.sh@33 -- # echo 0 00:04:45.469 03:37:04 -- setup/common.sh@33 -- # return 0 00:04:45.469 03:37:04 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:45.469 03:37:04 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:45.469 03:37:04 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:45.469 03:37:04 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:45.469 03:37:04 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:45.469 node0=1024 expecting 1024 00:04:45.469 03:37:04 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:45.469 03:37:04 -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:04:45.469 03:37:04 -- setup/hugepages.sh@202 -- # NRHUGE=512 00:04:45.469 03:37:04 -- setup/hugepages.sh@202 -- # setup output 00:04:45.469 03:37:04 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:45.469 03:37:04 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:46.848 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:46.848 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:46.848 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:46.848 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:46.848 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:46.848 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:46.848 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:46.848 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:46.848 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:46.848 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:46.848 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:46.848 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:46.848 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:46.848 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:46.848 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:46.848 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:46.848 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:46.848 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:04:46.848 03:37:05 -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:04:46.848 03:37:05 -- setup/hugepages.sh@89 -- # local node 00:04:46.848 03:37:05 -- setup/hugepages.sh@90 -- # local 
sorted_t 00:04:46.848 03:37:05 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:46.848 03:37:05 -- setup/hugepages.sh@92 -- # local surp 00:04:46.848 03:37:05 -- setup/hugepages.sh@93 -- # local resv 00:04:46.848 03:37:05 -- setup/hugepages.sh@94 -- # local anon 00:04:46.848 03:37:05 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:46.848 03:37:05 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:46.848 03:37:05 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:46.848 03:37:05 -- setup/common.sh@18 -- # local node= 00:04:46.848 03:37:05 -- setup/common.sh@19 -- # local var val 00:04:46.848 03:37:05 -- setup/common.sh@20 -- # local mem_f mem 00:04:46.848 03:37:05 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:46.848 03:37:05 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:46.848 03:37:05 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:46.848 03:37:05 -- setup/common.sh@28 -- # mapfile -t mem 00:04:46.848 03:37:05 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:46.848 03:37:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.848 03:37:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.848 03:37:05 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43284540 kB' 'MemAvailable: 46794180 kB' 'Buffers: 2704 kB' 'Cached: 12754480 kB' 'SwapCached: 0 kB' 'Active: 9770360 kB' 'Inactive: 3506552 kB' 'Active(anon): 9376008 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 522920 kB' 'Mapped: 205708 kB' 'Shmem: 8856280 kB' 'KReclaimable: 204720 kB' 'Slab: 580764 kB' 'SReclaimable: 204720 kB' 'SUnreclaim: 376044 kB' 'KernelStack: 12784 kB' 'PageTables: 7844 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 10493448 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196756 kB' 'VmallocChunk: 0 kB' 'Percpu: 37632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1922652 kB' 'DirectMap2M: 15822848 kB' 'DirectMap1G: 51380224 kB' 00:04:46.848 03:37:05 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.848 03:37:05 -- setup/common.sh@32 -- # continue 00:04:46.849 03:37:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.849 03:37:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.849 03:37:05 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.849 03:37:05 -- setup/common.sh@32 -- # continue 00:04:46.849 03:37:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.849 03:37:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.849 03:37:05 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.849 03:37:05 -- setup/common.sh@32 -- # continue 00:04:46.849 03:37:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.849 03:37:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.849 03:37:05 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.849 03:37:05 -- setup/common.sh@32 -- # continue 00:04:46.849 03:37:05 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:46.849 03:37:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.849 03:37:05 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.849 03:37:05 -- setup/common.sh@32 -- # continue 00:04:46.849 03:37:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.849 03:37:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.849 03:37:05 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.849 03:37:05 -- setup/common.sh@32 -- # continue 00:04:46.849 03:37:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.849 03:37:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.849 03:37:05 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.849 03:37:05 -- setup/common.sh@32 -- # continue 00:04:46.849 03:37:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.849 03:37:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.849 03:37:05 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.849 03:37:05 -- setup/common.sh@32 -- # continue 00:04:46.849 03:37:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.849 03:37:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.849 03:37:05 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.849 03:37:05 -- setup/common.sh@32 -- # continue 00:04:46.849 03:37:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.849 03:37:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.849 03:37:05 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.849 03:37:05 -- setup/common.sh@32 -- # continue 00:04:46.849 03:37:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.849 03:37:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.849 03:37:05 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.849 03:37:05 -- setup/common.sh@32 -- # continue 00:04:46.849 03:37:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.849 03:37:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.849 03:37:05 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.849 03:37:05 -- setup/common.sh@32 -- # continue 00:04:46.849 03:37:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.849 03:37:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.849 03:37:05 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.849 03:37:05 -- setup/common.sh@32 -- # continue 00:04:46.849 03:37:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.849 03:37:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.849 03:37:05 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.849 03:37:05 -- setup/common.sh@32 -- # continue 00:04:46.849 03:37:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.849 03:37:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.849 03:37:05 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.849 03:37:05 -- setup/common.sh@32 -- # continue 00:04:46.849 03:37:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.849 03:37:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.849 03:37:05 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.849 03:37:05 -- setup/common.sh@32 -- # continue 00:04:46.849 03:37:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.849 03:37:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.849 03:37:05 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.849 
03:37:05 -- setup/common.sh@32 -- # continue 00:04:46.849 03:37:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.849 03:37:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.849 03:37:05 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.849 03:37:05 -- setup/common.sh@32 -- # continue 00:04:46.849 03:37:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.849 03:37:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.849 03:37:05 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.849 03:37:05 -- setup/common.sh@32 -- # continue 00:04:46.849 03:37:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.849 03:37:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.849 03:37:05 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.849 03:37:05 -- setup/common.sh@32 -- # continue 00:04:46.849 03:37:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.849 03:37:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.849 03:37:05 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.849 03:37:05 -- setup/common.sh@32 -- # continue 00:04:46.849 03:37:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.849 03:37:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.849 03:37:05 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.849 03:37:05 -- setup/common.sh@32 -- # continue 00:04:46.849 03:37:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.849 03:37:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.849 03:37:05 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.849 03:37:05 -- setup/common.sh@32 -- # continue 00:04:46.849 03:37:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.849 03:37:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.849 03:37:05 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.849 03:37:05 -- setup/common.sh@32 -- # continue 00:04:46.849 03:37:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.849 03:37:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.849 03:37:05 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.849 03:37:05 -- setup/common.sh@32 -- # continue 00:04:46.849 03:37:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.849 03:37:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.849 03:37:05 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.849 03:37:05 -- setup/common.sh@32 -- # continue 00:04:46.849 03:37:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.849 03:37:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.849 03:37:05 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.849 03:37:05 -- setup/common.sh@32 -- # continue 00:04:46.849 03:37:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.849 03:37:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.849 03:37:05 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.849 03:37:05 -- setup/common.sh@32 -- # continue 00:04:46.849 03:37:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.849 03:37:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.849 03:37:05 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.849 03:37:05 -- setup/common.sh@32 -- # continue 00:04:46.849 03:37:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.849 03:37:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.849 03:37:05 -- setup/common.sh@32 -- # [[ 
SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.849 03:37:05 -- setup/common.sh@32 -- # continue 00:04:46.849 03:37:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.849 03:37:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.849 03:37:05 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.849 03:37:05 -- setup/common.sh@32 -- # continue 00:04:46.849 03:37:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.849 03:37:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.849 03:37:05 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.849 03:37:05 -- setup/common.sh@32 -- # continue 00:04:46.849 03:37:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.849 03:37:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.849 03:37:05 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.849 03:37:05 -- setup/common.sh@32 -- # continue 00:04:46.849 03:37:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.849 03:37:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.849 03:37:05 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.849 03:37:05 -- setup/common.sh@32 -- # continue 00:04:46.849 03:37:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.849 03:37:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.849 03:37:05 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.849 03:37:05 -- setup/common.sh@32 -- # continue 00:04:46.849 03:37:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.849 03:37:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.849 03:37:05 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.849 03:37:05 -- setup/common.sh@32 -- # continue 00:04:46.849 03:37:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.849 03:37:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.849 03:37:05 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.849 03:37:05 -- setup/common.sh@32 -- # continue 00:04:46.849 03:37:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.849 03:37:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.849 03:37:05 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.849 03:37:05 -- setup/common.sh@32 -- # continue 00:04:46.849 03:37:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.849 03:37:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.849 03:37:05 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.849 03:37:05 -- setup/common.sh@32 -- # continue 00:04:46.849 03:37:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.849 03:37:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.849 03:37:05 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.849 03:37:05 -- setup/common.sh@32 -- # continue 00:04:46.849 03:37:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.849 03:37:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.849 03:37:05 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.849 03:37:05 -- setup/common.sh@33 -- # echo 0 00:04:46.849 03:37:05 -- setup/common.sh@33 -- # return 0 00:04:46.849 03:37:05 -- setup/hugepages.sh@97 -- # anon=0 00:04:46.849 03:37:05 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:46.849 03:37:05 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:46.849 03:37:05 -- setup/common.sh@18 -- # local node= 00:04:46.849 03:37:05 -- 
setup/common.sh@19 -- # local var val 00:04:46.849 03:37:05 -- setup/common.sh@20 -- # local mem_f mem 00:04:46.849 03:37:05 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:46.849 03:37:05 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:46.849 03:37:05 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:46.849 03:37:05 -- setup/common.sh@28 -- # mapfile -t mem 00:04:46.849 03:37:05 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:46.849 03:37:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.849 03:37:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.849 03:37:05 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43296748 kB' 'MemAvailable: 46806388 kB' 'Buffers: 2704 kB' 'Cached: 12754484 kB' 'SwapCached: 0 kB' 'Active: 9770244 kB' 'Inactive: 3506552 kB' 'Active(anon): 9375892 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 522764 kB' 'Mapped: 205708 kB' 'Shmem: 8856284 kB' 'KReclaimable: 204720 kB' 'Slab: 580764 kB' 'SReclaimable: 204720 kB' 'SUnreclaim: 376044 kB' 'KernelStack: 12752 kB' 'PageTables: 7740 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 10493460 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196724 kB' 'VmallocChunk: 0 kB' 'Percpu: 37632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1922652 kB' 'DirectMap2M: 15822848 kB' 'DirectMap1G: 51380224 kB' 00:04:46.849 03:37:05 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.849 03:37:05 -- setup/common.sh@32 -- # continue 00:04:46.849 03:37:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.849 03:37:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.850 03:37:05 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.850 03:37:05 -- setup/common.sh@32 -- # continue 00:04:46.850 03:37:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.850 03:37:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.850 03:37:05 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.850 03:37:05 -- setup/common.sh@32 -- # continue 00:04:46.850 03:37:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.850 03:37:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.850 03:37:05 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.850 03:37:05 -- setup/common.sh@32 -- # continue 00:04:46.850 03:37:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.850 03:37:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.850 03:37:05 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.850 03:37:05 -- setup/common.sh@32 -- # continue 00:04:46.850 03:37:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.850 03:37:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.850 03:37:05 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.850 03:37:05 -- setup/common.sh@32 -- # continue 00:04:46.850 03:37:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.850 
03:37:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.850 03:37:05 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.850 03:37:05 -- setup/common.sh@32 -- # continue 00:04:46.850 03:37:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.850 03:37:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.850 03:37:05 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.850 03:37:05 -- setup/common.sh@32 -- # continue 00:04:46.850 03:37:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.850 03:37:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.850 03:37:05 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.850 03:37:05 -- setup/common.sh@32 -- # continue 00:04:46.850 03:37:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.850 03:37:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.850 03:37:05 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.850 03:37:05 -- setup/common.sh@32 -- # continue 00:04:46.850 03:37:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.850 03:37:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.850 03:37:05 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.850 03:37:05 -- setup/common.sh@32 -- # continue 00:04:46.850 03:37:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.850 03:37:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.850 03:37:05 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.850 03:37:05 -- setup/common.sh@32 -- # continue 00:04:46.850 03:37:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.850 03:37:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.850 03:37:05 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.850 03:37:05 -- setup/common.sh@32 -- # continue 00:04:46.850 03:37:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.850 03:37:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.850 03:37:05 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.850 03:37:05 -- setup/common.sh@32 -- # continue 00:04:46.850 03:37:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.850 03:37:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.850 03:37:05 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.850 03:37:05 -- setup/common.sh@32 -- # continue 00:04:46.850 03:37:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.850 03:37:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.850 03:37:05 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.850 03:37:05 -- setup/common.sh@32 -- # continue 00:04:46.850 03:37:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.850 03:37:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.850 03:37:05 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.850 03:37:05 -- setup/common.sh@32 -- # continue 00:04:46.850 03:37:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.850 03:37:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.850 03:37:05 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.850 03:37:05 -- setup/common.sh@32 -- # continue 00:04:46.850 03:37:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.850 03:37:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.850 03:37:05 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.850 03:37:05 -- 
setup/common.sh@32 -- # continue 00:04:46.850 03:37:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.850 03:37:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.850 03:37:05 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.850 03:37:05 -- setup/common.sh@32 -- # continue 00:04:46.850 03:37:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.850 03:37:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.850 03:37:05 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.850 03:37:05 -- setup/common.sh@32 -- # continue 00:04:46.850 03:37:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.850 03:37:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.850 03:37:05 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.850 03:37:05 -- setup/common.sh@32 -- # continue 00:04:46.850 03:37:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.850 03:37:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.850 03:37:05 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.850 03:37:05 -- setup/common.sh@32 -- # continue 00:04:46.850 03:37:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.850 03:37:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.850 03:37:05 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.850 03:37:05 -- setup/common.sh@32 -- # continue 00:04:46.850 03:37:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.850 03:37:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.850 03:37:05 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.850 03:37:05 -- setup/common.sh@32 -- # continue 00:04:46.850 03:37:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.850 03:37:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.850 03:37:05 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.850 03:37:05 -- setup/common.sh@32 -- # continue 00:04:46.850 03:37:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.850 03:37:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.850 03:37:05 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.850 03:37:05 -- setup/common.sh@32 -- # continue 00:04:46.850 03:37:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.850 03:37:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.850 03:37:05 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.850 03:37:05 -- setup/common.sh@32 -- # continue 00:04:46.850 03:37:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.850 03:37:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.850 03:37:05 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.850 03:37:05 -- setup/common.sh@32 -- # continue 00:04:46.850 03:37:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.850 03:37:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.850 03:37:05 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.850 03:37:05 -- setup/common.sh@32 -- # continue 00:04:46.850 03:37:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.850 03:37:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.850 03:37:05 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.850 03:37:05 -- setup/common.sh@32 -- # continue 00:04:46.850 03:37:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.850 03:37:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.850 03:37:05 -- 
setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.850 03:37:05 -- setup/common.sh@32 -- # continue 00:04:46.850 03:37:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.850 03:37:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.850 03:37:05 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.850 03:37:05 -- setup/common.sh@32 -- # continue 00:04:46.850 03:37:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.850 03:37:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.850 03:37:05 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.850 03:37:05 -- setup/common.sh@32 -- # continue 00:04:46.850 03:37:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.850 03:37:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.850 03:37:05 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.850 03:37:05 -- setup/common.sh@32 -- # continue 00:04:46.850 03:37:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.850 03:37:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.850 03:37:05 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.850 03:37:05 -- setup/common.sh@32 -- # continue 00:04:46.850 03:37:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.850 03:37:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.850 03:37:05 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.850 03:37:05 -- setup/common.sh@32 -- # continue 00:04:46.850 03:37:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.850 03:37:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.850 03:37:05 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.850 03:37:05 -- setup/common.sh@32 -- # continue 00:04:46.850 03:37:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.850 03:37:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.850 03:37:05 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.850 03:37:05 -- setup/common.sh@32 -- # continue 00:04:46.850 03:37:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.850 03:37:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.850 03:37:05 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.850 03:37:05 -- setup/common.sh@32 -- # continue 00:04:46.850 03:37:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.850 03:37:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.850 03:37:05 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.850 03:37:05 -- setup/common.sh@32 -- # continue 00:04:46.850 03:37:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.850 03:37:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.850 03:37:05 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.850 03:37:05 -- setup/common.sh@32 -- # continue 00:04:46.850 03:37:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.850 03:37:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.850 03:37:05 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.850 03:37:05 -- setup/common.sh@32 -- # continue 00:04:46.850 03:37:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.850 03:37:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.850 03:37:05 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.850 03:37:05 -- setup/common.sh@32 -- # continue 00:04:46.850 03:37:05 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:46.850 03:37:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.850 03:37:05 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.850 03:37:05 -- setup/common.sh@32 -- # continue 00:04:46.850 03:37:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.850 03:37:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.850 03:37:05 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.850 03:37:05 -- setup/common.sh@32 -- # continue 00:04:46.850 03:37:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.850 03:37:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.850 03:37:05 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.850 03:37:05 -- setup/common.sh@32 -- # continue 00:04:46.850 03:37:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.850 03:37:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.850 03:37:05 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.850 03:37:05 -- setup/common.sh@32 -- # continue 00:04:46.850 03:37:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.850 03:37:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.850 03:37:05 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.850 03:37:05 -- setup/common.sh@32 -- # continue 00:04:46.850 03:37:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.850 03:37:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.850 03:37:05 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.850 03:37:05 -- setup/common.sh@32 -- # continue 00:04:46.850 03:37:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.850 03:37:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.850 03:37:05 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.850 03:37:05 -- setup/common.sh@32 -- # continue 00:04:46.850 03:37:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.850 03:37:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.850 03:37:05 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.850 03:37:05 -- setup/common.sh@33 -- # echo 0 00:04:46.851 03:37:05 -- setup/common.sh@33 -- # return 0 00:04:46.851 03:37:05 -- setup/hugepages.sh@99 -- # surp=0 00:04:46.851 03:37:05 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:46.851 03:37:05 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:46.851 03:37:05 -- setup/common.sh@18 -- # local node= 00:04:46.851 03:37:05 -- setup/common.sh@19 -- # local var val 00:04:46.851 03:37:05 -- setup/common.sh@20 -- # local mem_f mem 00:04:46.851 03:37:05 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:46.851 03:37:05 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:46.851 03:37:05 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:46.851 03:37:05 -- setup/common.sh@28 -- # mapfile -t mem 00:04:46.851 03:37:05 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:46.851 03:37:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.851 03:37:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.851 03:37:05 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43300628 kB' 'MemAvailable: 46810268 kB' 'Buffers: 2704 kB' 'Cached: 12754496 kB' 'SwapCached: 0 kB' 'Active: 9770376 kB' 'Inactive: 3506552 kB' 'Active(anon): 9376024 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 
3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 523092 kB' 'Mapped: 205608 kB' 'Shmem: 8856296 kB' 'KReclaimable: 204720 kB' 'Slab: 580712 kB' 'SReclaimable: 204720 kB' 'SUnreclaim: 375992 kB' 'KernelStack: 12928 kB' 'PageTables: 7952 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 10496136 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196708 kB' 'VmallocChunk: 0 kB' 'Percpu: 37632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1922652 kB' 'DirectMap2M: 15822848 kB' 'DirectMap1G: 51380224 kB' 00:04:46.851 03:37:05 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.851 03:37:05 -- setup/common.sh@32 -- # continue 00:04:46.851 03:37:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.851 03:37:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.851 03:37:05 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.851 03:37:05 -- setup/common.sh@32 -- # continue 00:04:46.851 03:37:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.851 03:37:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.851 03:37:05 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.851 03:37:05 -- setup/common.sh@32 -- # continue 00:04:46.851 03:37:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.851 03:37:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.851 03:37:05 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.851 03:37:05 -- setup/common.sh@32 -- # continue 00:04:46.851 03:37:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.851 03:37:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.851 03:37:05 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.851 03:37:05 -- setup/common.sh@32 -- # continue 00:04:46.851 03:37:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.851 03:37:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.851 03:37:05 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.851 03:37:05 -- setup/common.sh@32 -- # continue 00:04:46.851 03:37:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.851 03:37:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.851 03:37:05 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.851 03:37:05 -- setup/common.sh@32 -- # continue 00:04:46.851 03:37:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.851 03:37:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.851 03:37:05 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.851 03:37:05 -- setup/common.sh@32 -- # continue 00:04:46.851 03:37:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.851 03:37:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.851 03:37:05 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.851 03:37:05 -- setup/common.sh@32 -- # continue 00:04:46.851 03:37:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.851 03:37:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.851 03:37:05 -- setup/common.sh@32 -- # [[ Inactive(anon) 
== \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.851 03:37:05 -- setup/common.sh@32 -- # continue 00:04:46.851 03:37:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.851 03:37:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.851 03:37:05 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.851 03:37:05 -- setup/common.sh@32 -- # continue 00:04:46.851 03:37:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.851 03:37:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.851 03:37:05 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.851 03:37:05 -- setup/common.sh@32 -- # continue 00:04:46.851 03:37:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.851 03:37:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.851 03:37:05 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.851 03:37:05 -- setup/common.sh@32 -- # continue 00:04:46.851 03:37:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.851 03:37:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.851 03:37:05 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.851 03:37:05 -- setup/common.sh@32 -- # continue 00:04:46.851 03:37:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.851 03:37:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.851 03:37:05 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.851 03:37:05 -- setup/common.sh@32 -- # continue 00:04:46.851 03:37:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.851 03:37:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.851 03:37:05 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.851 03:37:05 -- setup/common.sh@32 -- # continue 00:04:46.851 03:37:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.851 03:37:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.851 03:37:05 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.851 03:37:05 -- setup/common.sh@32 -- # continue 00:04:46.851 03:37:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.851 03:37:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.851 03:37:05 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.851 03:37:05 -- setup/common.sh@32 -- # continue 00:04:46.851 03:37:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.851 03:37:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.851 03:37:05 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.851 03:37:05 -- setup/common.sh@32 -- # continue 00:04:46.851 03:37:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.851 03:37:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.851 03:37:05 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.851 03:37:05 -- setup/common.sh@32 -- # continue 00:04:46.851 03:37:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.851 03:37:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.851 03:37:05 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.851 03:37:05 -- setup/common.sh@32 -- # continue 00:04:46.851 03:37:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.851 03:37:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.851 03:37:05 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.851 03:37:05 -- setup/common.sh@32 -- # continue 00:04:46.851 03:37:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.851 03:37:05 -- setup/common.sh@31 -- # 
read -r var val _ 00:04:46.851 03:37:05 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.851 03:37:05 -- setup/common.sh@32 -- # continue 00:04:46.851 03:37:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.851 03:37:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.851 03:37:05 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.851 03:37:05 -- setup/common.sh@32 -- # continue 00:04:46.851 03:37:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.851 03:37:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.851 03:37:05 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.851 03:37:05 -- setup/common.sh@32 -- # continue 00:04:46.851 03:37:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.851 03:37:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.851 03:37:05 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.851 03:37:05 -- setup/common.sh@32 -- # continue 00:04:46.851 03:37:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.851 03:37:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.851 03:37:05 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.851 03:37:05 -- setup/common.sh@32 -- # continue 00:04:46.851 03:37:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.851 03:37:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.851 03:37:05 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.851 03:37:05 -- setup/common.sh@32 -- # continue 00:04:46.851 03:37:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.851 03:37:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.851 03:37:05 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.851 03:37:05 -- setup/common.sh@32 -- # continue 00:04:46.851 03:37:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.851 03:37:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.851 03:37:05 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.851 03:37:05 -- setup/common.sh@32 -- # continue 00:04:46.851 03:37:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.851 03:37:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.851 03:37:05 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.851 03:37:05 -- setup/common.sh@32 -- # continue 00:04:46.851 03:37:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.851 03:37:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.851 03:37:05 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.851 03:37:05 -- setup/common.sh@32 -- # continue 00:04:46.851 03:37:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.851 03:37:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.851 03:37:05 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.851 03:37:05 -- setup/common.sh@32 -- # continue 00:04:46.851 03:37:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.851 03:37:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.851 03:37:05 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.851 03:37:05 -- setup/common.sh@32 -- # continue 00:04:46.851 03:37:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.851 03:37:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.851 03:37:05 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.851 03:37:05 -- setup/common.sh@32 -- # continue 
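The trace above is the expanded body of the get_meminfo helper from setup/common.sh: it snapshots a meminfo file into an array, strips any "Node N " prefix, and scans field by field for the requested key. The following is a rough bash reconstruction of that logic, not the verbatim script — the paths, the prefix stripping, and the field matching come straight from the xtrace, while the function wrapper and the herestring read loop are assumptions made for illustration.

    #!/usr/bin/env bash
    # Minimal sketch of the traced get_meminfo helper (setup/common.sh).
    shopt -s extglob

    get_meminfo() {
        local get=$1 node=$2
        local var val _ line
        local mem_f=/proc/meminfo
        local -a mem

        # When a node index is given and per-node meminfo exists, read that file instead.
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi

        mapfile -t mem < "$mem_f"
        # Per-node files prefix every line with "Node N "; strip that prefix.
        mem=("${mem[@]#Node +([0-9]) }")

        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue
            echo "$val"   # kB values, or a bare page count for HugePages_* fields
            return 0
        done
        return 1
    }

    get_meminfo HugePages_Surp     # -> 0 in the run above
    get_meminfo HugePages_Rsvd 0   # node 0: reads /sys/devices/system/node/node0/meminfo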
00:04:46.851 03:37:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.851 03:37:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.851 03:37:05 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.851 03:37:05 -- setup/common.sh@32 -- # continue 00:04:46.851 03:37:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.851 03:37:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.851 03:37:05 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.851 03:37:05 -- setup/common.sh@32 -- # continue 00:04:46.851 03:37:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.851 03:37:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.851 03:37:05 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.851 03:37:05 -- setup/common.sh@32 -- # continue 00:04:46.851 03:37:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.851 03:37:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.851 03:37:05 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.851 03:37:05 -- setup/common.sh@32 -- # continue 00:04:46.851 03:37:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.851 03:37:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.851 03:37:05 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.851 03:37:05 -- setup/common.sh@32 -- # continue 00:04:46.851 03:37:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.851 03:37:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.851 03:37:05 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.851 03:37:05 -- setup/common.sh@32 -- # continue 00:04:46.851 03:37:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.851 03:37:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.851 03:37:05 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.851 03:37:05 -- setup/common.sh@32 -- # continue 00:04:46.851 03:37:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.851 03:37:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.851 03:37:05 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.852 03:37:05 -- setup/common.sh@32 -- # continue 00:04:46.852 03:37:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.852 03:37:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.852 03:37:05 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.852 03:37:05 -- setup/common.sh@32 -- # continue 00:04:46.852 03:37:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.852 03:37:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.852 03:37:05 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.852 03:37:05 -- setup/common.sh@32 -- # continue 00:04:46.852 03:37:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.852 03:37:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.852 03:37:05 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.852 03:37:05 -- setup/common.sh@32 -- # continue 00:04:46.852 03:37:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.852 03:37:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.852 03:37:05 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.852 03:37:05 -- setup/common.sh@32 -- # continue 00:04:46.852 03:37:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.852 03:37:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.852 03:37:05 -- 
setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.852 03:37:05 -- setup/common.sh@32 -- # continue 00:04:46.852 03:37:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.852 03:37:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.852 03:37:05 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.852 03:37:05 -- setup/common.sh@32 -- # continue 00:04:46.852 03:37:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.852 03:37:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.852 03:37:05 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.852 03:37:05 -- setup/common.sh@32 -- # continue 00:04:46.852 03:37:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.852 03:37:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.852 03:37:05 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.852 03:37:05 -- setup/common.sh@33 -- # echo 0 00:04:46.852 03:37:05 -- setup/common.sh@33 -- # return 0 00:04:46.852 03:37:05 -- setup/hugepages.sh@100 -- # resv=0 00:04:46.852 03:37:05 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:46.852 nr_hugepages=1024 00:04:46.852 03:37:05 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:46.852 resv_hugepages=0 00:04:46.852 03:37:05 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:46.852 surplus_hugepages=0 00:04:46.852 03:37:05 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:46.852 anon_hugepages=0 00:04:46.852 03:37:05 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:46.852 03:37:05 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:46.852 03:37:05 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:46.852 03:37:05 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:46.852 03:37:05 -- setup/common.sh@18 -- # local node= 00:04:46.852 03:37:05 -- setup/common.sh@19 -- # local var val 00:04:46.852 03:37:05 -- setup/common.sh@20 -- # local mem_f mem 00:04:46.852 03:37:05 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:46.852 03:37:05 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:46.852 03:37:05 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:46.852 03:37:05 -- setup/common.sh@28 -- # mapfile -t mem 00:04:46.852 03:37:05 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:46.852 03:37:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.852 03:37:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.852 03:37:05 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43301572 kB' 'MemAvailable: 46811212 kB' 'Buffers: 2704 kB' 'Cached: 12754508 kB' 'SwapCached: 0 kB' 'Active: 9771036 kB' 'Inactive: 3506552 kB' 'Active(anon): 9376684 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 523672 kB' 'Mapped: 205616 kB' 'Shmem: 8856308 kB' 'KReclaimable: 204720 kB' 'Slab: 580712 kB' 'SReclaimable: 204720 kB' 'SUnreclaim: 375992 kB' 'KernelStack: 13264 kB' 'PageTables: 9280 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 10497172 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196932 kB' 'VmallocChunk: 0 kB' 'Percpu: 37632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 
'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1922652 kB' 'DirectMap2M: 15822848 kB' 'DirectMap1G: 51380224 kB' 00:04:46.852 03:37:05 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.852 03:37:05 -- setup/common.sh@32 -- # continue 00:04:46.852 03:37:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.852 03:37:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.852 03:37:05 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.852 03:37:05 -- setup/common.sh@32 -- # continue 00:04:46.852 03:37:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.852 03:37:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.852 03:37:05 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.852 03:37:05 -- setup/common.sh@32 -- # continue 00:04:46.852 03:37:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.852 03:37:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.852 03:37:05 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.852 03:37:05 -- setup/common.sh@32 -- # continue 00:04:46.852 03:37:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.852 03:37:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.852 03:37:05 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.852 03:37:05 -- setup/common.sh@32 -- # continue 00:04:46.852 03:37:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.852 03:37:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.852 03:37:05 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.852 03:37:05 -- setup/common.sh@32 -- # continue 00:04:46.852 03:37:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.852 03:37:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.852 03:37:05 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.852 03:37:05 -- setup/common.sh@32 -- # continue 00:04:46.852 03:37:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.852 03:37:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.852 03:37:05 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.852 03:37:05 -- setup/common.sh@32 -- # continue 00:04:46.852 03:37:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.852 03:37:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.852 03:37:05 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.852 03:37:05 -- setup/common.sh@32 -- # continue 00:04:46.852 03:37:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.852 03:37:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.852 03:37:05 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.852 03:37:05 -- setup/common.sh@32 -- # continue 00:04:46.852 03:37:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.852 03:37:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.852 03:37:05 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.852 03:37:05 -- setup/common.sh@32 -- # continue 00:04:46.852 03:37:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.852 03:37:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.852 03:37:05 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.852 
03:37:05 -- setup/common.sh@32 -- # continue 00:04:46.852 03:37:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.852 03:37:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.852 03:37:05 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.852 03:37:05 -- setup/common.sh@32 -- # continue 00:04:46.852 03:37:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.852 03:37:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.852 03:37:05 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.852 03:37:05 -- setup/common.sh@32 -- # continue 00:04:46.852 03:37:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.852 03:37:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.852 03:37:05 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.852 03:37:05 -- setup/common.sh@32 -- # continue 00:04:46.852 03:37:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.852 03:37:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.852 03:37:05 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.852 03:37:05 -- setup/common.sh@32 -- # continue 00:04:46.852 03:37:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.852 03:37:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.852 03:37:05 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.852 03:37:05 -- setup/common.sh@32 -- # continue 00:04:46.852 03:37:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.852 03:37:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.852 03:37:05 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.852 03:37:05 -- setup/common.sh@32 -- # continue 00:04:46.852 03:37:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.852 03:37:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.852 03:37:05 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.852 03:37:05 -- setup/common.sh@32 -- # continue 00:04:46.852 03:37:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.852 03:37:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.852 03:37:05 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.852 03:37:05 -- setup/common.sh@32 -- # continue 00:04:46.852 03:37:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.852 03:37:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.852 03:37:05 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.852 03:37:05 -- setup/common.sh@32 -- # continue 00:04:46.853 03:37:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.853 03:37:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.853 03:37:05 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.853 03:37:05 -- setup/common.sh@32 -- # continue 00:04:46.853 03:37:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.853 03:37:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.853 03:37:05 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.853 03:37:05 -- setup/common.sh@32 -- # continue 00:04:46.853 03:37:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.853 03:37:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.853 03:37:05 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.853 03:37:05 -- setup/common.sh@32 -- # continue 00:04:46.853 03:37:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.853 03:37:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.853 
03:37:05 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.853 03:37:05 -- setup/common.sh@32 -- # continue 00:04:46.853 03:37:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.853 03:37:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.853 03:37:05 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.853 03:37:05 -- setup/common.sh@32 -- # continue 00:04:46.853 03:37:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.853 03:37:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.853 03:37:05 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.853 03:37:05 -- setup/common.sh@32 -- # continue 00:04:46.853 03:37:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.853 03:37:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.853 03:37:05 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.853 03:37:05 -- setup/common.sh@32 -- # continue 00:04:46.853 03:37:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.853 03:37:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.853 03:37:05 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.853 03:37:05 -- setup/common.sh@32 -- # continue 00:04:46.853 03:37:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.853 03:37:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.853 03:37:05 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.853 03:37:05 -- setup/common.sh@32 -- # continue 00:04:46.853 03:37:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.853 03:37:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.853 03:37:05 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.853 03:37:05 -- setup/common.sh@32 -- # continue 00:04:47.112 03:37:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.112 03:37:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.112 03:37:05 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.112 03:37:05 -- setup/common.sh@32 -- # continue 00:04:47.112 03:37:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.112 03:37:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.112 03:37:05 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.112 03:37:05 -- setup/common.sh@32 -- # continue 00:04:47.112 03:37:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.112 03:37:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.112 03:37:05 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.112 03:37:05 -- setup/common.sh@32 -- # continue 00:04:47.112 03:37:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.112 03:37:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.112 03:37:05 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.112 03:37:05 -- setup/common.sh@32 -- # continue 00:04:47.112 03:37:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.112 03:37:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.112 03:37:05 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.112 03:37:05 -- setup/common.sh@32 -- # continue 00:04:47.112 03:37:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.112 03:37:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.112 03:37:05 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.112 03:37:05 -- setup/common.sh@32 -- # continue 
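By this point the test has collected surp=0, resv=0 and is reading HugePages_Total; the hugepages.sh lines @107/@109 in the trace then verify the accounting. A hedged sketch of that check, with the values observed in this run shown in comments (get_meminfo is the helper sketched earlier; the if/else framing is an assumption, the arithmetic tests mirror the trace):

    nr_hugepages=1024
    surp=$(get_meminfo HugePages_Surp)     # 0 in this run
    resv=$(get_meminfo HugePages_Rsvd)     # 0 in this run
    total=$(get_meminfo HugePages_Total)   # 1024 in this run

    # The pool the kernel reports must account for exactly what the test
    # requested plus any surplus and reserved pages.
    if (( total == nr_hugepages + surp + resv )) && (( total == nr_hugepages )); then
        echo "nr_hugepages=$nr_hugepages resv_hugepages=$resv surplus_hugepages=$surp"
    else
        echo "hugepage accounting mismatch" >&2
    fi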
00:04:47.112 03:37:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.112 03:37:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.112 03:37:05 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.112 03:37:05 -- setup/common.sh@32 -- # continue 00:04:47.112 03:37:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.112 03:37:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.112 03:37:05 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.112 03:37:05 -- setup/common.sh@32 -- # continue 00:04:47.112 03:37:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.112 03:37:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.112 03:37:05 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.112 03:37:05 -- setup/common.sh@32 -- # continue 00:04:47.112 03:37:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.112 03:37:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.112 03:37:05 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.112 03:37:05 -- setup/common.sh@32 -- # continue 00:04:47.112 03:37:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.112 03:37:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.112 03:37:05 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.112 03:37:05 -- setup/common.sh@32 -- # continue 00:04:47.112 03:37:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.112 03:37:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.112 03:37:05 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.112 03:37:05 -- setup/common.sh@32 -- # continue 00:04:47.112 03:37:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.112 03:37:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.112 03:37:05 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.112 03:37:05 -- setup/common.sh@32 -- # continue 00:04:47.112 03:37:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.112 03:37:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.112 03:37:05 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.112 03:37:05 -- setup/common.sh@32 -- # continue 00:04:47.112 03:37:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.112 03:37:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.112 03:37:05 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.112 03:37:05 -- setup/common.sh@32 -- # continue 00:04:47.112 03:37:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.112 03:37:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.112 03:37:05 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.112 03:37:05 -- setup/common.sh@32 -- # continue 00:04:47.112 03:37:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.112 03:37:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.112 03:37:05 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.112 03:37:05 -- setup/common.sh@32 -- # continue 00:04:47.112 03:37:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.112 03:37:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.112 03:37:05 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.112 03:37:05 -- setup/common.sh@33 -- # echo 1024 00:04:47.112 03:37:05 -- setup/common.sh@33 -- # return 0 00:04:47.112 03:37:05 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages 
+ surp + resv )) 00:04:47.112 03:37:05 -- setup/hugepages.sh@112 -- # get_nodes 00:04:47.112 03:37:05 -- setup/hugepages.sh@27 -- # local node 00:04:47.112 03:37:05 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:47.112 03:37:05 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:47.112 03:37:05 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:47.112 03:37:05 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:47.112 03:37:05 -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:47.112 03:37:05 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:47.113 03:37:05 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:47.113 03:37:05 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:47.113 03:37:05 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:47.113 03:37:05 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:47.113 03:37:05 -- setup/common.sh@18 -- # local node=0 00:04:47.113 03:37:05 -- setup/common.sh@19 -- # local var val 00:04:47.113 03:37:05 -- setup/common.sh@20 -- # local mem_f mem 00:04:47.113 03:37:05 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:47.113 03:37:05 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:47.113 03:37:05 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:47.113 03:37:05 -- setup/common.sh@28 -- # mapfile -t mem 00:04:47.113 03:37:05 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:47.113 03:37:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.113 03:37:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.113 03:37:05 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 25284724 kB' 'MemUsed: 7545160 kB' 'SwapCached: 0 kB' 'Active: 4161288 kB' 'Inactive: 108416 kB' 'Active(anon): 4050400 kB' 'Inactive(anon): 0 kB' 'Active(file): 110888 kB' 'Inactive(file): 108416 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 4031216 kB' 'Mapped: 35660 kB' 'AnonPages: 241248 kB' 'Shmem: 3811912 kB' 'KernelStack: 7960 kB' 'PageTables: 4332 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 98916 kB' 'Slab: 321988 kB' 'SReclaimable: 98916 kB' 'SUnreclaim: 223072 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:47.113 03:37:05 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.113 03:37:05 -- setup/common.sh@32 -- # continue 00:04:47.113 03:37:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.113 03:37:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.113 03:37:05 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.113 03:37:05 -- setup/common.sh@32 -- # continue 00:04:47.113 03:37:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.113 03:37:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.113 03:37:05 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.113 03:37:05 -- setup/common.sh@32 -- # continue 00:04:47.113 03:37:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.113 03:37:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.113 03:37:05 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.113 03:37:05 -- setup/common.sh@32 -- # continue 
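The get_nodes section traced just above walks /sys/devices/system/node/node* and records how many hugepages each NUMA node holds (1024 on node0, 0 on node1 here) before re-running get_meminfo against node0's own meminfo file. A plausible reconstruction follows; the glob and the per-node meminfo path match the trace, while the exact sysfs file the counts are read from (hugepages-2048kB/nr_hugepages, consistent with the 2048 kB Hugepagesize above) is an assumption:

    shopt -s extglob nullglob
    nodes_sys=()

    for node in /sys/devices/system/node/node+([0-9]); do
        idx=${node##*node}   # /sys/.../node0 -> 0
        # 2048 kB hugepages currently allocated on this node
        # (1024 on node0 and 0 on node1 in the run above)
        nodes_sys[idx]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
    done

    no_nodes=${#nodes_sys[@]}    # 2 on this machine
    (( no_nodes > 0 )) || exit 1

    # Per-node meminfo can then be queried with the same helper:
    get_meminfo HugePages_Surp 0   # reads /sys/devices/system/node/node0/meminfo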
00:04:47.113 03:37:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.113 03:37:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.113 03:37:05 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.113 03:37:05 -- setup/common.sh@32 -- # continue 00:04:47.113 03:37:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.113 03:37:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.113 03:37:05 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.113 03:37:05 -- setup/common.sh@32 -- # continue 00:04:47.113 03:37:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.113 03:37:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.113 03:37:05 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.113 03:37:05 -- setup/common.sh@32 -- # continue 00:04:47.113 03:37:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.113 03:37:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.113 03:37:05 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.113 03:37:05 -- setup/common.sh@32 -- # continue 00:04:47.113 03:37:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.113 03:37:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.113 03:37:05 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.113 03:37:05 -- setup/common.sh@32 -- # continue 00:04:47.113 03:37:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.113 03:37:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.113 03:37:05 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.113 03:37:05 -- setup/common.sh@32 -- # continue 00:04:47.113 03:37:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.113 03:37:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.113 03:37:05 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.113 03:37:05 -- setup/common.sh@32 -- # continue 00:04:47.113 03:37:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.113 03:37:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.113 03:37:05 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.113 03:37:05 -- setup/common.sh@32 -- # continue 00:04:47.113 03:37:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.113 03:37:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.113 03:37:05 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.113 03:37:05 -- setup/common.sh@32 -- # continue 00:04:47.113 03:37:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.113 03:37:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.113 03:37:05 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.113 03:37:05 -- setup/common.sh@32 -- # continue 00:04:47.113 03:37:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.113 03:37:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.113 03:37:05 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.113 03:37:05 -- setup/common.sh@32 -- # continue 00:04:47.113 03:37:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.113 03:37:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.113 03:37:05 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.113 03:37:05 -- setup/common.sh@32 -- # continue 00:04:47.113 03:37:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.113 03:37:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.113 03:37:05 -- setup/common.sh@32 -- # [[ AnonPages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.113 03:37:05 -- setup/common.sh@32 -- # continue 00:04:47.113 03:37:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.113 03:37:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.113 03:37:05 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.113 03:37:05 -- setup/common.sh@32 -- # continue 00:04:47.113 03:37:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.113 03:37:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.113 03:37:05 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.113 03:37:05 -- setup/common.sh@32 -- # continue 00:04:47.113 03:37:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.113 03:37:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.113 03:37:05 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.113 03:37:05 -- setup/common.sh@32 -- # continue 00:04:47.113 03:37:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.113 03:37:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.113 03:37:05 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.113 03:37:05 -- setup/common.sh@32 -- # continue 00:04:47.113 03:37:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.113 03:37:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.113 03:37:05 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.113 03:37:05 -- setup/common.sh@32 -- # continue 00:04:47.113 03:37:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.113 03:37:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.113 03:37:05 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.113 03:37:05 -- setup/common.sh@32 -- # continue 00:04:47.113 03:37:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.113 03:37:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.113 03:37:05 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.113 03:37:05 -- setup/common.sh@32 -- # continue 00:04:47.113 03:37:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.113 03:37:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.113 03:37:05 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.113 03:37:05 -- setup/common.sh@32 -- # continue 00:04:47.113 03:37:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.113 03:37:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.113 03:37:05 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.113 03:37:05 -- setup/common.sh@32 -- # continue 00:04:47.113 03:37:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.113 03:37:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.113 03:37:05 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.113 03:37:05 -- setup/common.sh@32 -- # continue 00:04:47.113 03:37:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.113 03:37:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.113 03:37:05 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.113 03:37:05 -- setup/common.sh@32 -- # continue 00:04:47.113 03:37:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.113 03:37:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.113 03:37:05 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.113 03:37:05 -- setup/common.sh@32 -- # continue 00:04:47.113 03:37:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.113 03:37:05 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:47.113 03:37:05 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.113 03:37:05 -- setup/common.sh@32 -- # continue 00:04:47.113 03:37:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.113 03:37:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.113 03:37:05 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.113 03:37:05 -- setup/common.sh@32 -- # continue 00:04:47.113 03:37:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.113 03:37:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.113 03:37:05 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.113 03:37:05 -- setup/common.sh@32 -- # continue 00:04:47.113 03:37:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.113 03:37:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.113 03:37:05 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.113 03:37:05 -- setup/common.sh@32 -- # continue 00:04:47.114 03:37:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.114 03:37:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.114 03:37:05 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.114 03:37:05 -- setup/common.sh@32 -- # continue 00:04:47.114 03:37:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.114 03:37:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.114 03:37:05 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.114 03:37:05 -- setup/common.sh@32 -- # continue 00:04:47.114 03:37:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.114 03:37:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.114 03:37:05 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.114 03:37:05 -- setup/common.sh@32 -- # continue 00:04:47.114 03:37:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.114 03:37:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.114 03:37:05 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.114 03:37:05 -- setup/common.sh@33 -- # echo 0 00:04:47.114 03:37:05 -- setup/common.sh@33 -- # return 0 00:04:47.114 03:37:05 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:47.114 03:37:05 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:47.114 03:37:05 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:47.114 03:37:05 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:47.114 03:37:05 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:47.114 node0=1024 expecting 1024 00:04:47.114 03:37:05 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:47.114 00:04:47.114 real 0m2.830s 00:04:47.114 user 0m1.165s 00:04:47.114 sys 0m1.593s 00:04:47.114 03:37:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:47.114 03:37:05 -- common/autotest_common.sh@10 -- # set +x 00:04:47.114 ************************************ 00:04:47.114 END TEST no_shrink_alloc 00:04:47.114 ************************************ 00:04:47.114 03:37:05 -- setup/hugepages.sh@217 -- # clear_hp 00:04:47.114 03:37:05 -- setup/hugepages.sh@37 -- # local node hp 00:04:47.114 03:37:05 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:47.114 03:37:05 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:47.114 03:37:05 -- setup/hugepages.sh@41 -- # echo 0 00:04:47.114 
03:37:05 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:47.114 03:37:05 -- setup/hugepages.sh@41 -- # echo 0 00:04:47.114 03:37:05 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:47.114 03:37:05 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:47.114 03:37:05 -- setup/hugepages.sh@41 -- # echo 0 00:04:47.114 03:37:05 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:47.114 03:37:05 -- setup/hugepages.sh@41 -- # echo 0 00:04:47.114 03:37:05 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:47.114 03:37:05 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:47.114 00:04:47.114 real 0m11.427s 00:04:47.114 user 0m4.472s 00:04:47.114 sys 0m5.913s 00:04:47.114 03:37:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:47.114 03:37:05 -- common/autotest_common.sh@10 -- # set +x 00:04:47.114 ************************************ 00:04:47.114 END TEST hugepages 00:04:47.114 ************************************ 00:04:47.114 03:37:05 -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:04:47.114 03:37:05 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:47.114 03:37:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:47.114 03:37:05 -- common/autotest_common.sh@10 -- # set +x 00:04:47.114 ************************************ 00:04:47.114 START TEST driver 00:04:47.114 ************************************ 00:04:47.114 03:37:05 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:04:47.114 * Looking for test storage... 
00:04:47.114 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:47.114 03:37:05 -- setup/driver.sh@68 -- # setup reset 00:04:47.114 03:37:05 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:47.114 03:37:05 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:49.643 03:37:08 -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:04:49.643 03:37:08 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:49.643 03:37:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:49.643 03:37:08 -- common/autotest_common.sh@10 -- # set +x 00:04:49.643 ************************************ 00:04:49.643 START TEST guess_driver 00:04:49.643 ************************************ 00:04:49.643 03:37:08 -- common/autotest_common.sh@1104 -- # guess_driver 00:04:49.643 03:37:08 -- setup/driver.sh@46 -- # local driver setup_driver marker 00:04:49.643 03:37:08 -- setup/driver.sh@47 -- # local fail=0 00:04:49.643 03:37:08 -- setup/driver.sh@49 -- # pick_driver 00:04:49.643 03:37:08 -- setup/driver.sh@36 -- # vfio 00:04:49.643 03:37:08 -- setup/driver.sh@21 -- # local iommu_grups 00:04:49.643 03:37:08 -- setup/driver.sh@22 -- # local unsafe_vfio 00:04:49.643 03:37:08 -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:04:49.643 03:37:08 -- setup/driver.sh@25 -- # unsafe_vfio=N 00:04:49.643 03:37:08 -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:04:49.643 03:37:08 -- setup/driver.sh@29 -- # (( 141 > 0 )) 00:04:49.643 03:37:08 -- setup/driver.sh@30 -- # is_driver vfio_pci 00:04:49.643 03:37:08 -- setup/driver.sh@14 -- # mod vfio_pci 00:04:49.643 03:37:08 -- setup/driver.sh@12 -- # dep vfio_pci 00:04:49.643 03:37:08 -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:04:49.643 03:37:08 -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:04:49.643 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:49.643 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:49.643 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:49.643 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:49.643 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:04:49.643 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:04:49.643 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:04:49.643 03:37:08 -- setup/driver.sh@30 -- # return 0 00:04:49.643 03:37:08 -- setup/driver.sh@37 -- # echo vfio-pci 00:04:49.643 03:37:08 -- setup/driver.sh@49 -- # driver=vfio-pci 00:04:49.643 03:37:08 -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:04:49.643 03:37:08 -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:04:49.643 Looking for driver=vfio-pci 00:04:49.643 03:37:08 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:49.643 03:37:08 -- setup/driver.sh@45 -- # setup output config 00:04:49.643 03:37:08 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:49.643 03:37:08 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:51.016 03:37:09 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:51.016 03:37:09 -- setup/driver.sh@61 -- # [[ vfio-pci == 
vfio-pci ]] 00:04:51.016 03:37:09 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:51.016 03:37:09 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:51.016 03:37:09 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:51.016 03:37:09 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:51.016 03:37:09 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:51.016 03:37:09 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:51.016 03:37:09 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:51.016 03:37:09 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:51.016 03:37:09 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:51.016 03:37:09 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:51.016 03:37:09 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:51.016 03:37:09 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:51.017 03:37:09 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:51.017 03:37:09 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:51.017 03:37:09 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:51.017 03:37:09 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:51.017 03:37:09 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:51.017 03:37:09 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:51.017 03:37:09 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:51.017 03:37:09 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:51.017 03:37:09 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:51.017 03:37:09 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:51.017 03:37:09 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:51.017 03:37:09 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:51.017 03:37:09 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:51.017 03:37:09 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:51.017 03:37:09 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:51.017 03:37:09 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:51.017 03:37:09 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:51.017 03:37:09 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:51.017 03:37:09 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:51.017 03:37:09 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:51.017 03:37:09 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:51.017 03:37:09 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:51.017 03:37:09 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:51.017 03:37:09 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:51.017 03:37:09 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:51.017 03:37:09 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:51.017 03:37:09 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:51.017 03:37:09 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:51.017 03:37:09 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:51.017 03:37:09 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:51.017 03:37:09 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:51.017 03:37:09 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:51.017 03:37:09 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:51.017 03:37:09 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:51.949 03:37:10 -- setup/driver.sh@58 -- # [[ 
-> == \-\> ]] 00:04:51.949 03:37:10 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:51.949 03:37:10 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:51.949 03:37:10 -- setup/driver.sh@64 -- # (( fail == 0 )) 00:04:51.949 03:37:10 -- setup/driver.sh@65 -- # setup reset 00:04:51.949 03:37:10 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:51.949 03:37:10 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:54.486 00:04:54.486 real 0m4.822s 00:04:54.486 user 0m1.089s 00:04:54.486 sys 0m1.891s 00:04:54.486 03:37:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:54.486 03:37:13 -- common/autotest_common.sh@10 -- # set +x 00:04:54.486 ************************************ 00:04:54.486 END TEST guess_driver 00:04:54.486 ************************************ 00:04:54.486 00:04:54.486 real 0m7.357s 00:04:54.486 user 0m1.662s 00:04:54.486 sys 0m2.885s 00:04:54.486 03:37:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:54.486 03:37:13 -- common/autotest_common.sh@10 -- # set +x 00:04:54.486 ************************************ 00:04:54.486 END TEST driver 00:04:54.486 ************************************ 00:04:54.486 03:37:13 -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:04:54.486 03:37:13 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:54.486 03:37:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:54.486 03:37:13 -- common/autotest_common.sh@10 -- # set +x 00:04:54.486 ************************************ 00:04:54.486 START TEST devices 00:04:54.486 ************************************ 00:04:54.486 03:37:13 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:04:54.486 * Looking for test storage... 
00:04:54.486 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:54.486 03:37:13 -- setup/devices.sh@190 -- # trap cleanup EXIT 00:04:54.486 03:37:13 -- setup/devices.sh@192 -- # setup reset 00:04:54.487 03:37:13 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:54.487 03:37:13 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:55.868 03:37:14 -- setup/devices.sh@194 -- # get_zoned_devs 00:04:55.868 03:37:14 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:04:55.868 03:37:14 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:04:55.868 03:37:14 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:04:55.868 03:37:14 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:04:55.868 03:37:14 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:04:55.868 03:37:14 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:04:55.868 03:37:14 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:55.868 03:37:14 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:04:55.868 03:37:14 -- setup/devices.sh@196 -- # blocks=() 00:04:55.868 03:37:14 -- setup/devices.sh@196 -- # declare -a blocks 00:04:55.868 03:37:14 -- setup/devices.sh@197 -- # blocks_to_pci=() 00:04:55.868 03:37:14 -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:04:55.868 03:37:14 -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:04:55.868 03:37:14 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:55.868 03:37:14 -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:04:55.868 03:37:14 -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:55.868 03:37:14 -- setup/devices.sh@202 -- # pci=0000:88:00.0 00:04:55.868 03:37:14 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\8\8\:\0\0\.\0* ]] 00:04:55.868 03:37:14 -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:04:55.868 03:37:14 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:04:55.868 03:37:14 -- scripts/common.sh@389 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:04:55.868 No valid GPT data, bailing 00:04:55.868 03:37:14 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:55.868 03:37:14 -- scripts/common.sh@393 -- # pt= 00:04:55.868 03:37:14 -- scripts/common.sh@394 -- # return 1 00:04:55.868 03:37:14 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:04:55.868 03:37:14 -- setup/common.sh@76 -- # local dev=nvme0n1 00:04:55.868 03:37:14 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:04:55.868 03:37:14 -- setup/common.sh@80 -- # echo 1000204886016 00:04:55.868 03:37:14 -- setup/devices.sh@204 -- # (( 1000204886016 >= min_disk_size )) 00:04:55.868 03:37:14 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:55.868 03:37:14 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:88:00.0 00:04:55.868 03:37:14 -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:04:55.868 03:37:14 -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:04:55.868 03:37:14 -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:04:55.868 03:37:14 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:55.868 03:37:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:55.868 03:37:14 -- common/autotest_common.sh@10 -- # set +x 00:04:55.868 ************************************ 00:04:55.868 START TEST nvme_mount 00:04:55.868 ************************************ 00:04:55.868 03:37:14 -- 
common/autotest_common.sh@1104 -- # nvme_mount 00:04:55.868 03:37:14 -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:04:55.868 03:37:14 -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:04:55.868 03:37:14 -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:55.868 03:37:14 -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:55.868 03:37:14 -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:04:55.868 03:37:14 -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:55.868 03:37:14 -- setup/common.sh@40 -- # local part_no=1 00:04:55.868 03:37:14 -- setup/common.sh@41 -- # local size=1073741824 00:04:55.868 03:37:14 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:55.868 03:37:14 -- setup/common.sh@44 -- # parts=() 00:04:55.868 03:37:14 -- setup/common.sh@44 -- # local parts 00:04:55.868 03:37:14 -- setup/common.sh@46 -- # (( part = 1 )) 00:04:55.868 03:37:14 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:55.868 03:37:14 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:55.868 03:37:14 -- setup/common.sh@46 -- # (( part++ )) 00:04:55.868 03:37:14 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:55.868 03:37:14 -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:55.868 03:37:14 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:55.868 03:37:14 -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:04:57.252 Creating new GPT entries in memory. 00:04:57.252 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:57.252 other utilities. 00:04:57.252 03:37:15 -- setup/common.sh@57 -- # (( part = 1 )) 00:04:57.252 03:37:15 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:57.252 03:37:15 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:57.252 03:37:15 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:57.252 03:37:15 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:58.191 Creating new GPT entries in memory. 00:04:58.191 The operation has completed successfully. 
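The nvme_mount test being traced here partitions the test disk, waits for the new partition node to appear, then formats and mounts it (the mkfs and mount steps follow in the trace below). A minimal stand-alone sketch of that sequence, assuming /dev/nvme0n1 is the test disk as in this run; the mount point and the use of udevadm settle in place of the harness's sync_dev_uevents.sh are placeholders, not the exact arguments of setup/common.sh:

# wipe any old GPT/MBR metadata, then create a single 1 GiB partition
sgdisk /dev/nvme0n1 --zap-all
flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199
# wait for the kernel/udev to publish the new partition node
udevadm settle
# format the partition and mount it where the test file will be written
mkfs.ext4 -qF /dev/nvme0n1p1
mkdir -p /mnt/nvme_mount
mount /dev/nvme0n1p1 /mnt/nvme_mount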
00:04:58.191 03:37:16 -- setup/common.sh@57 -- # (( part++ )) 00:04:58.191 03:37:16 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:58.191 03:37:16 -- setup/common.sh@62 -- # wait 2247813 00:04:58.191 03:37:16 -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:58.191 03:37:16 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:04:58.191 03:37:16 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:58.191 03:37:16 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:04:58.191 03:37:16 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:04:58.191 03:37:16 -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:58.191 03:37:16 -- setup/devices.sh@105 -- # verify 0000:88:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:58.191 03:37:16 -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:04:58.191 03:37:16 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:04:58.191 03:37:16 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:58.191 03:37:16 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:58.191 03:37:16 -- setup/devices.sh@53 -- # local found=0 00:04:58.191 03:37:16 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:58.191 03:37:16 -- setup/devices.sh@56 -- # : 00:04:58.191 03:37:16 -- setup/devices.sh@59 -- # local pci status 00:04:58.191 03:37:16 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.191 03:37:16 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:04:58.191 03:37:16 -- setup/devices.sh@47 -- # setup output config 00:04:58.191 03:37:16 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:58.191 03:37:16 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:59.129 03:37:17 -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:59.129 03:37:17 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:04:59.129 03:37:17 -- setup/devices.sh@63 -- # found=1 00:04:59.129 03:37:17 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.129 03:37:17 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:59.129 03:37:17 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.129 03:37:17 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:59.129 03:37:17 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.129 03:37:17 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:59.129 03:37:17 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.129 03:37:17 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:59.129 03:37:17 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.129 03:37:17 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:59.129 
03:37:17 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.129 03:37:17 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:59.129 03:37:17 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.129 03:37:17 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:59.129 03:37:17 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.129 03:37:17 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:59.129 03:37:17 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.129 03:37:17 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:59.129 03:37:17 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.129 03:37:17 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:59.129 03:37:17 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.129 03:37:17 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:59.129 03:37:17 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.129 03:37:17 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:59.129 03:37:17 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.129 03:37:17 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:59.129 03:37:17 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.129 03:37:17 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:59.129 03:37:17 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.129 03:37:17 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:59.129 03:37:17 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.129 03:37:17 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:59.129 03:37:17 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.389 03:37:18 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:59.389 03:37:18 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:59.389 03:37:18 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:59.389 03:37:18 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:59.389 03:37:18 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:59.389 03:37:18 -- setup/devices.sh@110 -- # cleanup_nvme 00:04:59.389 03:37:18 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:59.389 03:37:18 -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:59.389 03:37:18 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:59.389 03:37:18 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:59.389 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:59.389 03:37:18 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:59.389 03:37:18 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:59.648 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:59.648 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:04:59.648 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:59.648 
/dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:59.648 03:37:18 -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:04:59.648 03:37:18 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:04:59.648 03:37:18 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:59.648 03:37:18 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:04:59.648 03:37:18 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:04:59.648 03:37:18 -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:59.648 03:37:18 -- setup/devices.sh@116 -- # verify 0000:88:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:59.648 03:37:18 -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:04:59.648 03:37:18 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:04:59.648 03:37:18 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:59.648 03:37:18 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:59.648 03:37:18 -- setup/devices.sh@53 -- # local found=0 00:04:59.648 03:37:18 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:59.648 03:37:18 -- setup/devices.sh@56 -- # : 00:04:59.648 03:37:18 -- setup/devices.sh@59 -- # local pci status 00:04:59.648 03:37:18 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.648 03:37:18 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:04:59.648 03:37:18 -- setup/devices.sh@47 -- # setup output config 00:04:59.648 03:37:18 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:59.648 03:37:18 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:01.025 03:37:19 -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:01.025 03:37:19 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:05:01.026 03:37:19 -- setup/devices.sh@63 -- # found=1 00:05:01.026 03:37:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.026 03:37:19 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:01.026 03:37:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.026 03:37:19 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:01.026 03:37:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.026 03:37:19 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:01.026 03:37:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.026 03:37:19 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:01.026 03:37:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.026 03:37:19 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:01.026 03:37:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.026 03:37:19 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == 
\0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:01.026 03:37:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.026 03:37:19 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:01.026 03:37:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.026 03:37:19 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:01.026 03:37:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.026 03:37:19 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:01.026 03:37:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.026 03:37:19 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:01.026 03:37:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.026 03:37:19 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:01.026 03:37:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.026 03:37:19 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:01.026 03:37:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.026 03:37:19 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:01.026 03:37:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.026 03:37:19 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:01.026 03:37:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.026 03:37:19 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:01.026 03:37:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.026 03:37:19 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:01.026 03:37:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.026 03:37:19 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:01.026 03:37:19 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:05:01.026 03:37:19 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:01.026 03:37:19 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:01.026 03:37:19 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:01.026 03:37:19 -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:01.026 03:37:19 -- setup/devices.sh@125 -- # verify 0000:88:00.0 data@nvme0n1 '' '' 00:05:01.026 03:37:19 -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:05:01.026 03:37:19 -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:05:01.026 03:37:19 -- setup/devices.sh@50 -- # local mount_point= 00:05:01.026 03:37:19 -- setup/devices.sh@51 -- # local test_file= 00:05:01.026 03:37:19 -- setup/devices.sh@53 -- # local found=0 00:05:01.026 03:37:19 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:01.026 03:37:19 -- setup/devices.sh@59 -- # local pci status 00:05:01.026 03:37:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.026 03:37:19 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:05:01.026 03:37:19 -- setup/devices.sh@47 -- # setup output config 00:05:01.026 03:37:19 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:01.026 03:37:19 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:01.962 03:37:20 -- 
setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:01.962 03:37:20 -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:05:01.962 03:37:20 -- setup/devices.sh@63 -- # found=1 00:05:01.962 03:37:20 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.962 03:37:20 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:01.962 03:37:20 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.962 03:37:20 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:01.962 03:37:20 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.962 03:37:20 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:01.962 03:37:20 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.220 03:37:20 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:02.220 03:37:20 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.220 03:37:20 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:02.220 03:37:20 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.220 03:37:20 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:02.220 03:37:20 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.220 03:37:20 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:02.220 03:37:20 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.220 03:37:20 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:02.220 03:37:20 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.220 03:37:20 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:02.220 03:37:20 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.220 03:37:20 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:02.220 03:37:20 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.220 03:37:20 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:02.220 03:37:20 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.220 03:37:20 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:02.220 03:37:20 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.220 03:37:20 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:02.220 03:37:20 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.220 03:37:20 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:02.220 03:37:20 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.220 03:37:20 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:02.220 03:37:20 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.220 03:37:20 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:02.220 03:37:20 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.220 03:37:21 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:02.220 03:37:21 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:02.220 03:37:21 -- setup/devices.sh@68 -- # return 0 00:05:02.220 03:37:21 -- setup/devices.sh@128 -- # cleanup_nvme 00:05:02.220 03:37:21 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:02.220 03:37:21 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 
]] 00:05:02.220 03:37:21 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:02.220 03:37:21 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:02.220 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:02.220 00:05:02.220 real 0m6.347s 00:05:02.220 user 0m1.513s 00:05:02.220 sys 0m2.430s 00:05:02.220 03:37:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:02.220 03:37:21 -- common/autotest_common.sh@10 -- # set +x 00:05:02.220 ************************************ 00:05:02.220 END TEST nvme_mount 00:05:02.220 ************************************ 00:05:02.220 03:37:21 -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:05:02.220 03:37:21 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:02.220 03:37:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:02.220 03:37:21 -- common/autotest_common.sh@10 -- # set +x 00:05:02.220 ************************************ 00:05:02.220 START TEST dm_mount 00:05:02.220 ************************************ 00:05:02.220 03:37:21 -- common/autotest_common.sh@1104 -- # dm_mount 00:05:02.220 03:37:21 -- setup/devices.sh@144 -- # pv=nvme0n1 00:05:02.220 03:37:21 -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:05:02.220 03:37:21 -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:05:02.220 03:37:21 -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:05:02.220 03:37:21 -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:02.220 03:37:21 -- setup/common.sh@40 -- # local part_no=2 00:05:02.220 03:37:21 -- setup/common.sh@41 -- # local size=1073741824 00:05:02.220 03:37:21 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:02.220 03:37:21 -- setup/common.sh@44 -- # parts=() 00:05:02.220 03:37:21 -- setup/common.sh@44 -- # local parts 00:05:02.220 03:37:21 -- setup/common.sh@46 -- # (( part = 1 )) 00:05:02.220 03:37:21 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:02.220 03:37:21 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:02.220 03:37:21 -- setup/common.sh@46 -- # (( part++ )) 00:05:02.220 03:37:21 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:02.220 03:37:21 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:02.220 03:37:21 -- setup/common.sh@46 -- # (( part++ )) 00:05:02.220 03:37:21 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:02.220 03:37:21 -- setup/common.sh@51 -- # (( size /= 512 )) 00:05:02.220 03:37:21 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:02.220 03:37:21 -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:05:03.600 Creating new GPT entries in memory. 00:05:03.600 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:03.600 other utilities. 00:05:03.600 03:37:22 -- setup/common.sh@57 -- # (( part = 1 )) 00:05:03.600 03:37:22 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:03.600 03:37:22 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:03.600 03:37:22 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:03.600 03:37:22 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:05:04.537 Creating new GPT entries in memory. 00:05:04.537 The operation has completed successfully. 
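dm_mount repeats the partitioning, but with two 1 GiB partitions that are combined into a device-mapper device named nvme_dm_test (the dmsetup create appears in the trace that follows). The exact table devices.sh feeds to dmsetup is not echoed in the log; a two-segment linear map is consistent with dm-0 later holding both nvme0n1p1 and nvme0n1p2, so the sketch below assumes that layout:

# two 1 GiB partitions on the test disk
sgdisk /dev/nvme0n1 --zap-all
flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199
flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351
# concatenate the two partitions into one linear device-mapper target
s1=$(blockdev --getsz /dev/nvme0n1p1)
s2=$(blockdev --getsz /dev/nvme0n1p2)
dmsetup create nvme_dm_test <<TABLE
0 $s1 linear /dev/nvme0n1p1 0
$s1 $s2 linear /dev/nvme0n1p2 0
TABLE
# the result shows up as /dev/mapper/nvme_dm_test (dm-0) and is
# formatted and mounted exactly like the nvme_mount case
mkfs.ext4 -qF /dev/mapper/nvme_dm_test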
00:05:04.537 03:37:23 -- setup/common.sh@57 -- # (( part++ )) 00:05:04.537 03:37:23 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:04.537 03:37:23 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:04.537 03:37:23 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:04.537 03:37:23 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:05:05.474 The operation has completed successfully. 00:05:05.474 03:37:24 -- setup/common.sh@57 -- # (( part++ )) 00:05:05.474 03:37:24 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:05.474 03:37:24 -- setup/common.sh@62 -- # wait 2250273 00:05:05.474 03:37:24 -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:05:05.474 03:37:24 -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:05.474 03:37:24 -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:05.474 03:37:24 -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:05:05.474 03:37:24 -- setup/devices.sh@160 -- # for t in {1..5} 00:05:05.474 03:37:24 -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:05.474 03:37:24 -- setup/devices.sh@161 -- # break 00:05:05.474 03:37:24 -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:05.474 03:37:24 -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:05:05.474 03:37:24 -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:05:05.474 03:37:24 -- setup/devices.sh@166 -- # dm=dm-0 00:05:05.474 03:37:24 -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:05:05.474 03:37:24 -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:05:05.474 03:37:24 -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:05.474 03:37:24 -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:05:05.474 03:37:24 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:05.474 03:37:24 -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:05.474 03:37:24 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:05:05.474 03:37:24 -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:05.474 03:37:24 -- setup/devices.sh@174 -- # verify 0000:88:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:05.474 03:37:24 -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:05:05.474 03:37:24 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:05:05.474 03:37:24 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:05.474 03:37:24 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:05.474 03:37:24 -- setup/devices.sh@53 -- # local found=0 00:05:05.474 03:37:24 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:05:05.474 03:37:24 -- setup/devices.sh@56 -- # : 00:05:05.474 03:37:24 -- 
setup/devices.sh@59 -- # local pci status 00:05:05.474 03:37:24 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:05.474 03:37:24 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:05:05.474 03:37:24 -- setup/devices.sh@47 -- # setup output config 00:05:05.474 03:37:24 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:05.474 03:37:24 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:06.410 03:37:25 -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:06.410 03:37:25 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:05:06.410 03:37:25 -- setup/devices.sh@63 -- # found=1 00:05:06.410 03:37:25 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.410 03:37:25 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:06.410 03:37:25 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.410 03:37:25 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:06.410 03:37:25 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.410 03:37:25 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:06.410 03:37:25 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.410 03:37:25 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:06.410 03:37:25 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.410 03:37:25 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:06.410 03:37:25 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.410 03:37:25 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:06.410 03:37:25 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.410 03:37:25 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:06.410 03:37:25 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.410 03:37:25 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:06.410 03:37:25 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.410 03:37:25 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:06.410 03:37:25 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.410 03:37:25 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:06.410 03:37:25 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.410 03:37:25 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:06.410 03:37:25 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.410 03:37:25 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:06.410 03:37:25 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.410 03:37:25 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:06.410 03:37:25 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.410 03:37:25 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:06.410 03:37:25 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.410 03:37:25 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:06.410 03:37:25 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.410 03:37:25 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == 
\0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:06.410 03:37:25 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.670 03:37:25 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:06.670 03:37:25 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:05:06.670 03:37:25 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:06.670 03:37:25 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:05:06.670 03:37:25 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:06.670 03:37:25 -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:06.670 03:37:25 -- setup/devices.sh@184 -- # verify 0000:88:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:05:06.670 03:37:25 -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:05:06.670 03:37:25 -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:05:06.670 03:37:25 -- setup/devices.sh@50 -- # local mount_point= 00:05:06.670 03:37:25 -- setup/devices.sh@51 -- # local test_file= 00:05:06.670 03:37:25 -- setup/devices.sh@53 -- # local found=0 00:05:06.670 03:37:25 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:06.670 03:37:25 -- setup/devices.sh@59 -- # local pci status 00:05:06.670 03:37:25 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.670 03:37:25 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:05:06.670 03:37:25 -- setup/devices.sh@47 -- # setup output config 00:05:06.670 03:37:25 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:06.670 03:37:25 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:08.048 03:37:26 -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:08.048 03:37:26 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:05:08.048 03:37:26 -- setup/devices.sh@63 -- # found=1 00:05:08.048 03:37:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:08.048 03:37:26 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:08.048 03:37:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:08.048 03:37:26 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:08.048 03:37:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:08.048 03:37:26 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:08.048 03:37:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:08.048 03:37:26 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:08.048 03:37:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:08.048 03:37:26 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:08.048 03:37:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:08.048 03:37:26 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:08.048 03:37:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:08.048 03:37:26 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:08.048 03:37:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 
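The long run of [[ ... ]] checks through this part of the trace is the verify step: setup.sh config is re-run with PCI_ALLOWED restricted to the test controller, and each reported device line is matched against the expected "Active devices" status so the mounted disk is not handed back to vfio-pci. The sketch below is not the literal devices.sh code, only the shape of the check, with the BDF and holder list taken from this run:

expected='holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0'
found=0
while read -r pci _ _ status; do
    # only the allowed controller matters; every other device is skipped
    [[ $pci == 0000:88:00.0 ]] || continue
    [[ $status == *"Active devices: "*"$expected"* ]] && found=1
done < <(PCI_ALLOWED=0000:88:00.0 ./scripts/setup.sh config)
(( found == 1 )) && echo 'verify: expected holders found'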
00:05:08.048 03:37:26 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:08.048 03:37:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:08.048 03:37:26 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:08.048 03:37:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:08.048 03:37:26 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:08.048 03:37:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:08.048 03:37:26 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:08.048 03:37:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:08.048 03:37:26 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:08.048 03:37:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:08.048 03:37:26 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:08.048 03:37:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:08.048 03:37:26 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:08.048 03:37:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:08.048 03:37:26 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:08.048 03:37:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:08.048 03:37:26 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:08.048 03:37:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:08.048 03:37:26 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:08.048 03:37:26 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:08.048 03:37:26 -- setup/devices.sh@68 -- # return 0 00:05:08.048 03:37:26 -- setup/devices.sh@187 -- # cleanup_dm 00:05:08.048 03:37:26 -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:08.048 03:37:26 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:08.048 03:37:26 -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:05:08.048 03:37:26 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:08.048 03:37:26 -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:05:08.048 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:08.048 03:37:26 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:08.048 03:37:26 -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:05:08.048 00:05:08.048 real 0m5.722s 00:05:08.048 user 0m1.020s 00:05:08.048 sys 0m1.575s 00:05:08.048 03:37:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:08.048 03:37:26 -- common/autotest_common.sh@10 -- # set +x 00:05:08.048 ************************************ 00:05:08.048 END TEST dm_mount 00:05:08.048 ************************************ 00:05:08.048 03:37:26 -- setup/devices.sh@1 -- # cleanup 00:05:08.048 03:37:26 -- setup/devices.sh@11 -- # cleanup_nvme 00:05:08.048 03:37:26 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:08.048 03:37:26 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:08.048 03:37:26 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:08.048 03:37:26 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:08.048 03:37:26 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:08.306 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:05:08.306 /dev/nvme0n1: 8 bytes were erased at offset 
0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:05:08.306 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:08.306 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:08.306 03:37:27 -- setup/devices.sh@12 -- # cleanup_dm 00:05:08.306 03:37:27 -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:08.306 03:37:27 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:08.306 03:37:27 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:08.306 03:37:27 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:08.306 03:37:27 -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:05:08.306 03:37:27 -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:05:08.306 00:05:08.306 real 0m13.908s 00:05:08.306 user 0m3.133s 00:05:08.306 sys 0m5.004s 00:05:08.306 03:37:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:08.306 03:37:27 -- common/autotest_common.sh@10 -- # set +x 00:05:08.306 ************************************ 00:05:08.306 END TEST devices 00:05:08.306 ************************************ 00:05:08.306 00:05:08.306 real 0m43.194s 00:05:08.306 user 0m12.477s 00:05:08.306 sys 0m19.062s 00:05:08.306 03:37:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:08.306 03:37:27 -- common/autotest_common.sh@10 -- # set +x 00:05:08.306 ************************************ 00:05:08.306 END TEST setup.sh 00:05:08.306 ************************************ 00:05:08.306 03:37:27 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:05:09.681 Hugepages 00:05:09.681 node hugesize free / total 00:05:09.681 node0 1048576kB 0 / 0 00:05:09.681 node0 2048kB 2048 / 2048 00:05:09.681 node1 1048576kB 0 / 0 00:05:09.681 node1 2048kB 0 / 0 00:05:09.681 00:05:09.681 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:09.681 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:05:09.681 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:05:09.681 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:05:09.681 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:05:09.681 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:05:09.681 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:05:09.681 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:05:09.681 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:05:09.681 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:05:09.681 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:05:09.681 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:05:09.681 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:05:09.681 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:05:09.681 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:05:09.681 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:05:09.681 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:05:09.681 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:05:09.681 03:37:28 -- spdk/autotest.sh@141 -- # uname -s 00:05:09.681 03:37:28 -- spdk/autotest.sh@141 -- # [[ Linux == Linux ]] 00:05:09.681 03:37:28 -- spdk/autotest.sh@143 -- # nvme_namespace_revert 00:05:09.681 03:37:28 -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:10.616 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:10.616 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:10.616 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:10.616 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:10.616 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:10.616 0000:00:04.2 (8086 0e22): 
ioatdma -> vfio-pci 00:05:10.876 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:10.876 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:10.876 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:10.876 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:10.876 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:10.876 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:10.876 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:10.876 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:10.876 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:10.876 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:11.815 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:05:11.815 03:37:30 -- common/autotest_common.sh@1517 -- # sleep 1 00:05:13.231 03:37:31 -- common/autotest_common.sh@1518 -- # bdfs=() 00:05:13.231 03:37:31 -- common/autotest_common.sh@1518 -- # local bdfs 00:05:13.231 03:37:31 -- common/autotest_common.sh@1519 -- # bdfs=($(get_nvme_bdfs)) 00:05:13.231 03:37:31 -- common/autotest_common.sh@1519 -- # get_nvme_bdfs 00:05:13.231 03:37:31 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:13.231 03:37:31 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:13.231 03:37:31 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:13.231 03:37:31 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:13.231 03:37:31 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:13.231 03:37:31 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:05:13.231 03:37:31 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:88:00.0 00:05:13.231 03:37:31 -- common/autotest_common.sh@1521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:14.169 Waiting for block devices as requested 00:05:14.169 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:05:14.169 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:05:14.169 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:05:14.429 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:05:14.429 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:05:14.429 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:05:14.429 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:05:14.693 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:05:14.693 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:05:14.693 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:05:14.693 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:05:14.953 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:05:14.953 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:05:14.953 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:05:14.953 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:05:15.213 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:05:15.213 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:05:15.213 03:37:34 -- common/autotest_common.sh@1523 -- # for bdf in "${bdfs[@]}" 00:05:15.213 03:37:34 -- common/autotest_common.sh@1524 -- # get_nvme_ctrlr_from_bdf 0000:88:00.0 00:05:15.213 03:37:34 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 00:05:15.213 03:37:34 -- common/autotest_common.sh@1487 -- # grep 0000:88:00.0/nvme/nvme 00:05:15.213 03:37:34 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:05:15.213 03:37:34 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 ]] 00:05:15.213 03:37:34 -- 
common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:05:15.213 03:37:34 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:05:15.213 03:37:34 -- common/autotest_common.sh@1524 -- # nvme_ctrlr=/dev/nvme0 00:05:15.213 03:37:34 -- common/autotest_common.sh@1525 -- # [[ -z /dev/nvme0 ]] 00:05:15.213 03:37:34 -- common/autotest_common.sh@1530 -- # nvme id-ctrl /dev/nvme0 00:05:15.213 03:37:34 -- common/autotest_common.sh@1530 -- # grep oacs 00:05:15.213 03:37:34 -- common/autotest_common.sh@1530 -- # cut -d: -f2 00:05:15.213 03:37:34 -- common/autotest_common.sh@1530 -- # oacs=' 0xf' 00:05:15.213 03:37:34 -- common/autotest_common.sh@1531 -- # oacs_ns_manage=8 00:05:15.213 03:37:34 -- common/autotest_common.sh@1533 -- # [[ 8 -ne 0 ]] 00:05:15.213 03:37:34 -- common/autotest_common.sh@1539 -- # nvme id-ctrl /dev/nvme0 00:05:15.213 03:37:34 -- common/autotest_common.sh@1539 -- # grep unvmcap 00:05:15.213 03:37:34 -- common/autotest_common.sh@1539 -- # cut -d: -f2 00:05:15.213 03:37:34 -- common/autotest_common.sh@1539 -- # unvmcap=' 0' 00:05:15.213 03:37:34 -- common/autotest_common.sh@1540 -- # [[ 0 -eq 0 ]] 00:05:15.213 03:37:34 -- common/autotest_common.sh@1542 -- # continue 00:05:15.213 03:37:34 -- spdk/autotest.sh@146 -- # timing_exit pre_cleanup 00:05:15.213 03:37:34 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:15.213 03:37:34 -- common/autotest_common.sh@10 -- # set +x 00:05:15.473 03:37:34 -- spdk/autotest.sh@149 -- # timing_enter afterboot 00:05:15.473 03:37:34 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:15.473 03:37:34 -- common/autotest_common.sh@10 -- # set +x 00:05:15.473 03:37:34 -- spdk/autotest.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:16.412 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:16.412 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:16.412 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:16.412 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:16.412 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:16.412 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:16.412 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:16.412 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:16.412 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:16.412 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:16.412 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:16.671 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:16.671 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:16.671 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:16.671 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:16.671 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:17.611 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:05:17.611 03:37:36 -- spdk/autotest.sh@151 -- # timing_exit afterboot 00:05:17.611 03:37:36 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:17.611 03:37:36 -- common/autotest_common.sh@10 -- # set +x 00:05:17.611 03:37:36 -- spdk/autotest.sh@155 -- # opal_revert_cleanup 00:05:17.611 03:37:36 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:05:17.611 03:37:36 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:05:17.611 03:37:36 -- common/autotest_common.sh@1562 -- # bdfs=() 00:05:17.611 03:37:36 -- common/autotest_common.sh@1562 -- # local bdfs 00:05:17.611 03:37:36 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:05:17.611 03:37:36 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:17.611 
03:37:36 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:17.611 03:37:36 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:17.611 03:37:36 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:17.611 03:37:36 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:17.611 03:37:36 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:05:17.611 03:37:36 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:88:00.0 00:05:17.611 03:37:36 -- common/autotest_common.sh@1564 -- # for bdf in $(get_nvme_bdfs) 00:05:17.611 03:37:36 -- common/autotest_common.sh@1565 -- # cat /sys/bus/pci/devices/0000:88:00.0/device 00:05:17.611 03:37:36 -- common/autotest_common.sh@1565 -- # device=0x0a54 00:05:17.611 03:37:36 -- common/autotest_common.sh@1566 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:05:17.611 03:37:36 -- common/autotest_common.sh@1567 -- # bdfs+=($bdf) 00:05:17.611 03:37:36 -- common/autotest_common.sh@1571 -- # printf '%s\n' 0000:88:00.0 00:05:17.611 03:37:36 -- common/autotest_common.sh@1577 -- # [[ -z 0000:88:00.0 ]] 00:05:17.611 03:37:36 -- common/autotest_common.sh@1582 -- # spdk_tgt_pid=2255571 00:05:17.611 03:37:36 -- common/autotest_common.sh@1581 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:17.611 03:37:36 -- common/autotest_common.sh@1583 -- # waitforlisten 2255571 00:05:17.611 03:37:36 -- common/autotest_common.sh@819 -- # '[' -z 2255571 ']' 00:05:17.611 03:37:36 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:17.611 03:37:36 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:17.611 03:37:36 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:17.611 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:17.611 03:37:36 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:17.611 03:37:36 -- common/autotest_common.sh@10 -- # set +x 00:05:17.871 [2024-07-14 03:37:36.566972] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
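The opal revert step above locates its target controller by asking gen_nvme.sh for attach parameters and then matching the PCI device ID under sysfs. A minimal standalone sketch of that same lookup, reusing the workspace path and the 0x0a54 device ID seen in this run (this mirrors the get_nvme_bdfs_by_id trace, it is not the helper itself):

    #!/usr/bin/env bash
    # Sketch of the BDF lookup traced above.
    rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # path from this run
    want_id=0x0a54                                              # PCI device ID to match

    # gen_nvme.sh emits bdev_nvme_attach_controller config; traddr carries the PCI BDF.
    mapfile -t bdfs < <("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')

    for bdf in "${bdfs[@]}"; do
        dev_id=$(cat "/sys/bus/pci/devices/$bdf/device")        # e.g. 0x0a54
        [[ $dev_id == "$want_id" ]] && echo "$bdf"
    done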
00:05:17.871 [2024-07-14 03:37:36.567072] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2255571 ] 00:05:17.871 EAL: No free 2048 kB hugepages reported on node 1 00:05:17.871 [2024-07-14 03:37:36.625554] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:17.871 [2024-07-14 03:37:36.711751] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:17.871 [2024-07-14 03:37:36.711946] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:18.807 03:37:37 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:18.807 03:37:37 -- common/autotest_common.sh@852 -- # return 0 00:05:18.807 03:37:37 -- common/autotest_common.sh@1585 -- # bdf_id=0 00:05:18.807 03:37:37 -- common/autotest_common.sh@1586 -- # for bdf in "${bdfs[@]}" 00:05:18.807 03:37:37 -- common/autotest_common.sh@1587 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:88:00.0 00:05:22.100 nvme0n1 00:05:22.100 03:37:40 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:05:22.100 [2024-07-14 03:37:40.822447] nvme_opal.c:2059:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18 00:05:22.100 [2024-07-14 03:37:40.822493] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18 00:05:22.100 request: 00:05:22.100 { 00:05:22.100 "nvme_ctrlr_name": "nvme0", 00:05:22.100 "password": "test", 00:05:22.100 "method": "bdev_nvme_opal_revert", 00:05:22.100 "req_id": 1 00:05:22.100 } 00:05:22.100 Got JSON-RPC error response 00:05:22.100 response: 00:05:22.100 { 00:05:22.100 "code": -32603, 00:05:22.100 "message": "Internal error" 00:05:22.100 } 00:05:22.100 03:37:40 -- common/autotest_common.sh@1589 -- # true 00:05:22.100 03:37:40 -- common/autotest_common.sh@1590 -- # (( ++bdf_id )) 00:05:22.100 03:37:40 -- common/autotest_common.sh@1593 -- # killprocess 2255571 00:05:22.100 03:37:40 -- common/autotest_common.sh@926 -- # '[' -z 2255571 ']' 00:05:22.100 03:37:40 -- common/autotest_common.sh@930 -- # kill -0 2255571 00:05:22.100 03:37:40 -- common/autotest_common.sh@931 -- # uname 00:05:22.100 03:37:40 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:22.100 03:37:40 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2255571 00:05:22.100 03:37:40 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:22.100 03:37:40 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:22.100 03:37:40 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2255571' 00:05:22.100 killing process with pid 2255571 00:05:22.100 03:37:40 -- common/autotest_common.sh@945 -- # kill 2255571 00:05:22.100 03:37:40 -- common/autotest_common.sh@950 -- # wait 2255571 00:05:24.035 03:37:42 -- spdk/autotest.sh@161 -- # '[' 0 -eq 1 ']' 00:05:24.035 03:37:42 -- spdk/autotest.sh@165 -- # '[' 1 -eq 1 ']' 00:05:24.035 03:37:42 -- spdk/autotest.sh@166 -- # [[ 0 -eq 1 ]] 00:05:24.035 03:37:42 -- spdk/autotest.sh@166 -- # [[ 0 -eq 1 ]] 00:05:24.035 03:37:42 -- spdk/autotest.sh@173 -- # timing_enter lib 00:05:24.035 03:37:42 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:24.035 03:37:42 -- common/autotest_common.sh@10 -- # set +x 00:05:24.035 
03:37:42 -- spdk/autotest.sh@175 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:24.035 03:37:42 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:24.035 03:37:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:24.035 03:37:42 -- common/autotest_common.sh@10 -- # set +x 00:05:24.035 ************************************ 00:05:24.035 START TEST env 00:05:24.035 ************************************ 00:05:24.035 03:37:42 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:24.035 * Looking for test storage... 00:05:24.035 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:05:24.035 03:37:42 -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:24.035 03:37:42 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:24.035 03:37:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:24.035 03:37:42 -- common/autotest_common.sh@10 -- # set +x 00:05:24.035 ************************************ 00:05:24.035 START TEST env_memory 00:05:24.035 ************************************ 00:05:24.035 03:37:42 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:24.035 00:05:24.035 00:05:24.035 CUnit - A unit testing framework for C - Version 2.1-3 00:05:24.035 http://cunit.sourceforge.net/ 00:05:24.035 00:05:24.035 00:05:24.035 Suite: memory 00:05:24.036 Test: alloc and free memory map ...[2024-07-14 03:37:42.711982] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:24.036 passed 00:05:24.036 Test: mem map translation ...[2024-07-14 03:37:42.731910] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:24.036 [2024-07-14 03:37:42.731936] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:24.036 [2024-07-14 03:37:42.731986] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:24.036 [2024-07-14 03:37:42.731997] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:24.036 passed 00:05:24.036 Test: mem map registration ...[2024-07-14 03:37:42.772456] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:05:24.036 [2024-07-14 03:37:42.772476] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:05:24.036 passed 00:05:24.036 Test: mem map adjacent registrations ...passed 00:05:24.036 00:05:24.036 Run Summary: Type Total Ran Passed Failed Inactive 00:05:24.036 suites 1 1 n/a 0 0 00:05:24.036 tests 4 4 4 0 0 00:05:24.036 asserts 152 152 152 0 n/a 00:05:24.036 00:05:24.036 Elapsed time = 0.142 seconds 00:05:24.036 00:05:24.036 real 0m0.151s 00:05:24.036 user 0m0.144s 00:05:24.036 sys 0m0.007s 
00:05:24.036 03:37:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:24.036 03:37:42 -- common/autotest_common.sh@10 -- # set +x 00:05:24.036 ************************************ 00:05:24.036 END TEST env_memory 00:05:24.036 ************************************ 00:05:24.036 03:37:42 -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:24.036 03:37:42 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:24.036 03:37:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:24.036 03:37:42 -- common/autotest_common.sh@10 -- # set +x 00:05:24.036 ************************************ 00:05:24.036 START TEST env_vtophys 00:05:24.036 ************************************ 00:05:24.036 03:37:42 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:24.036 EAL: lib.eal log level changed from notice to debug 00:05:24.036 EAL: Detected lcore 0 as core 0 on socket 0 00:05:24.036 EAL: Detected lcore 1 as core 1 on socket 0 00:05:24.036 EAL: Detected lcore 2 as core 2 on socket 0 00:05:24.036 EAL: Detected lcore 3 as core 3 on socket 0 00:05:24.036 EAL: Detected lcore 4 as core 4 on socket 0 00:05:24.036 EAL: Detected lcore 5 as core 5 on socket 0 00:05:24.036 EAL: Detected lcore 6 as core 8 on socket 0 00:05:24.036 EAL: Detected lcore 7 as core 9 on socket 0 00:05:24.036 EAL: Detected lcore 8 as core 10 on socket 0 00:05:24.036 EAL: Detected lcore 9 as core 11 on socket 0 00:05:24.036 EAL: Detected lcore 10 as core 12 on socket 0 00:05:24.036 EAL: Detected lcore 11 as core 13 on socket 0 00:05:24.036 EAL: Detected lcore 12 as core 0 on socket 1 00:05:24.036 EAL: Detected lcore 13 as core 1 on socket 1 00:05:24.036 EAL: Detected lcore 14 as core 2 on socket 1 00:05:24.036 EAL: Detected lcore 15 as core 3 on socket 1 00:05:24.036 EAL: Detected lcore 16 as core 4 on socket 1 00:05:24.036 EAL: Detected lcore 17 as core 5 on socket 1 00:05:24.036 EAL: Detected lcore 18 as core 8 on socket 1 00:05:24.036 EAL: Detected lcore 19 as core 9 on socket 1 00:05:24.036 EAL: Detected lcore 20 as core 10 on socket 1 00:05:24.036 EAL: Detected lcore 21 as core 11 on socket 1 00:05:24.036 EAL: Detected lcore 22 as core 12 on socket 1 00:05:24.036 EAL: Detected lcore 23 as core 13 on socket 1 00:05:24.036 EAL: Detected lcore 24 as core 0 on socket 0 00:05:24.036 EAL: Detected lcore 25 as core 1 on socket 0 00:05:24.036 EAL: Detected lcore 26 as core 2 on socket 0 00:05:24.036 EAL: Detected lcore 27 as core 3 on socket 0 00:05:24.036 EAL: Detected lcore 28 as core 4 on socket 0 00:05:24.036 EAL: Detected lcore 29 as core 5 on socket 0 00:05:24.036 EAL: Detected lcore 30 as core 8 on socket 0 00:05:24.036 EAL: Detected lcore 31 as core 9 on socket 0 00:05:24.036 EAL: Detected lcore 32 as core 10 on socket 0 00:05:24.036 EAL: Detected lcore 33 as core 11 on socket 0 00:05:24.036 EAL: Detected lcore 34 as core 12 on socket 0 00:05:24.036 EAL: Detected lcore 35 as core 13 on socket 0 00:05:24.036 EAL: Detected lcore 36 as core 0 on socket 1 00:05:24.036 EAL: Detected lcore 37 as core 1 on socket 1 00:05:24.036 EAL: Detected lcore 38 as core 2 on socket 1 00:05:24.036 EAL: Detected lcore 39 as core 3 on socket 1 00:05:24.036 EAL: Detected lcore 40 as core 4 on socket 1 00:05:24.036 EAL: Detected lcore 41 as core 5 on socket 1 00:05:24.036 EAL: Detected lcore 42 as core 8 on socket 1 00:05:24.036 EAL: Detected lcore 43 as core 9 on socket 1 00:05:24.036 EAL: Detected 
lcore 44 as core 10 on socket 1 00:05:24.036 EAL: Detected lcore 45 as core 11 on socket 1 00:05:24.036 EAL: Detected lcore 46 as core 12 on socket 1 00:05:24.036 EAL: Detected lcore 47 as core 13 on socket 1 00:05:24.036 EAL: Maximum logical cores by configuration: 128 00:05:24.036 EAL: Detected CPU lcores: 48 00:05:24.036 EAL: Detected NUMA nodes: 2 00:05:24.036 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:05:24.036 EAL: Detected shared linkage of DPDK 00:05:24.036 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24.0 00:05:24.036 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24.0 00:05:24.036 EAL: Registered [vdev] bus. 00:05:24.036 EAL: bus.vdev log level changed from disabled to notice 00:05:24.036 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24.0 00:05:24.036 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24.0 00:05:24.036 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:05:24.036 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:05:24.036 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:05:24.036 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:05:24.036 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:05:24.036 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:05:24.036 EAL: No shared files mode enabled, IPC will be disabled 00:05:24.036 EAL: No shared files mode enabled, IPC is disabled 00:05:24.036 EAL: Bus pci wants IOVA as 'DC' 00:05:24.036 EAL: Bus vdev wants IOVA as 'DC' 00:05:24.036 EAL: Buses did not request a specific IOVA mode. 00:05:24.036 EAL: IOMMU is available, selecting IOVA as VA mode. 00:05:24.036 EAL: Selected IOVA mode 'VA' 00:05:24.036 EAL: No free 2048 kB hugepages reported on node 1 00:05:24.036 EAL: Probing VFIO support... 00:05:24.036 EAL: IOMMU type 1 (Type 1) is supported 00:05:24.036 EAL: IOMMU type 7 (sPAPR) is not supported 00:05:24.036 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:05:24.036 EAL: VFIO support initialized 00:05:24.036 EAL: Ask a virtual area of 0x2e000 bytes 00:05:24.036 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:24.036 EAL: Setting up physically contiguous memory... 
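The VFIO probing above only succeeds because setup.sh rebound the devices to vfio-pci and the host exposes IOMMU groups. Both conditions can be checked from the shell; a sketch using the NVMe BDF 0000:88:00.0 from this run (standard sysfs paths, nothing SPDK-specific):

    # Which kernel driver currently owns the device (vfio-pci here, nvme after reset)?
    basename "$(readlink -f /sys/bus/pci/devices/0000:88:00.0/driver)"

    # The device's IOMMU group; a missing symlink would mean VFIO cannot claim it.
    readlink -f /sys/bus/pci/devices/0000:88:00.0/iommu_group

    # Kernel-wide check that the IOMMU is active (a non-empty listing is expected).
    ls /sys/kernel/iommu_groups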
00:05:24.036 EAL: Setting maximum number of open files to 524288 00:05:24.036 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:24.036 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:05:24.036 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:24.036 EAL: Ask a virtual area of 0x61000 bytes 00:05:24.036 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:24.036 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:24.036 EAL: Ask a virtual area of 0x400000000 bytes 00:05:24.036 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:24.036 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:24.036 EAL: Ask a virtual area of 0x61000 bytes 00:05:24.036 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:24.036 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:24.036 EAL: Ask a virtual area of 0x400000000 bytes 00:05:24.036 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:24.036 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:24.036 EAL: Ask a virtual area of 0x61000 bytes 00:05:24.036 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:24.036 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:24.036 EAL: Ask a virtual area of 0x400000000 bytes 00:05:24.036 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:24.036 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:24.036 EAL: Ask a virtual area of 0x61000 bytes 00:05:24.036 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:24.036 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:24.036 EAL: Ask a virtual area of 0x400000000 bytes 00:05:24.036 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:24.036 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:24.036 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:05:24.036 EAL: Ask a virtual area of 0x61000 bytes 00:05:24.036 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:05:24.036 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:24.036 EAL: Ask a virtual area of 0x400000000 bytes 00:05:24.036 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:05:24.036 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:05:24.036 EAL: Ask a virtual area of 0x61000 bytes 00:05:24.036 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:05:24.036 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:24.036 EAL: Ask a virtual area of 0x400000000 bytes 00:05:24.036 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:05:24.036 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:05:24.036 EAL: Ask a virtual area of 0x61000 bytes 00:05:24.036 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:05:24.036 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:24.036 EAL: Ask a virtual area of 0x400000000 bytes 00:05:24.036 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:05:24.036 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:05:24.036 EAL: Ask a virtual area of 0x61000 bytes 00:05:24.036 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:05:24.036 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:24.036 EAL: Ask a virtual area of 0x400000000 bytes 00:05:24.036 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:05:24.036 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:05:24.036 EAL: Hugepages will be freed exactly as allocated. 00:05:24.036 EAL: No shared files mode enabled, IPC is disabled 00:05:24.036 EAL: No shared files mode enabled, IPC is disabled 00:05:24.036 EAL: TSC frequency is ~2700000 KHz 00:05:24.037 EAL: Main lcore 0 is ready (tid=7fe8502c9a00;cpuset=[0]) 00:05:24.037 EAL: Trying to obtain current memory policy. 00:05:24.037 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:24.037 EAL: Restoring previous memory policy: 0 00:05:24.037 EAL: request: mp_malloc_sync 00:05:24.037 EAL: No shared files mode enabled, IPC is disabled 00:05:24.037 EAL: Heap on socket 0 was expanded by 2MB 00:05:24.037 EAL: No shared files mode enabled, IPC is disabled 00:05:24.037 EAL: No shared files mode enabled, IPC is disabled 00:05:24.037 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:24.037 EAL: Mem event callback 'spdk:(nil)' registered 00:05:24.037 00:05:24.037 00:05:24.037 CUnit - A unit testing framework for C - Version 2.1-3 00:05:24.037 http://cunit.sourceforge.net/ 00:05:24.037 00:05:24.037 00:05:24.037 Suite: components_suite 00:05:24.037 Test: vtophys_malloc_test ...passed 00:05:24.037 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:24.037 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:24.037 EAL: Restoring previous memory policy: 4 00:05:24.037 EAL: Calling mem event callback 'spdk:(nil)' 00:05:24.037 EAL: request: mp_malloc_sync 00:05:24.037 EAL: No shared files mode enabled, IPC is disabled 00:05:24.037 EAL: Heap on socket 0 was expanded by 4MB 00:05:24.037 EAL: Calling mem event callback 'spdk:(nil)' 00:05:24.037 EAL: request: mp_malloc_sync 00:05:24.037 EAL: No shared files mode enabled, IPC is disabled 00:05:24.037 EAL: Heap on socket 0 was shrunk by 4MB 00:05:24.037 EAL: Trying to obtain current memory policy. 00:05:24.037 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:24.037 EAL: Restoring previous memory policy: 4 00:05:24.037 EAL: Calling mem event callback 'spdk:(nil)' 00:05:24.037 EAL: request: mp_malloc_sync 00:05:24.037 EAL: No shared files mode enabled, IPC is disabled 00:05:24.037 EAL: Heap on socket 0 was expanded by 6MB 00:05:24.037 EAL: Calling mem event callback 'spdk:(nil)' 00:05:24.037 EAL: request: mp_malloc_sync 00:05:24.037 EAL: No shared files mode enabled, IPC is disabled 00:05:24.037 EAL: Heap on socket 0 was shrunk by 6MB 00:05:24.037 EAL: Trying to obtain current memory policy. 00:05:24.037 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:24.037 EAL: Restoring previous memory policy: 4 00:05:24.037 EAL: Calling mem event callback 'spdk:(nil)' 00:05:24.037 EAL: request: mp_malloc_sync 00:05:24.037 EAL: No shared files mode enabled, IPC is disabled 00:05:24.037 EAL: Heap on socket 0 was expanded by 10MB 00:05:24.037 EAL: Calling mem event callback 'spdk:(nil)' 00:05:24.037 EAL: request: mp_malloc_sync 00:05:24.037 EAL: No shared files mode enabled, IPC is disabled 00:05:24.037 EAL: Heap on socket 0 was shrunk by 10MB 00:05:24.037 EAL: Trying to obtain current memory policy. 
00:05:24.037 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:24.037 EAL: Restoring previous memory policy: 4 00:05:24.037 EAL: Calling mem event callback 'spdk:(nil)' 00:05:24.037 EAL: request: mp_malloc_sync 00:05:24.037 EAL: No shared files mode enabled, IPC is disabled 00:05:24.037 EAL: Heap on socket 0 was expanded by 18MB 00:05:24.037 EAL: Calling mem event callback 'spdk:(nil)' 00:05:24.037 EAL: request: mp_malloc_sync 00:05:24.037 EAL: No shared files mode enabled, IPC is disabled 00:05:24.037 EAL: Heap on socket 0 was shrunk by 18MB 00:05:24.037 EAL: Trying to obtain current memory policy. 00:05:24.037 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:24.037 EAL: Restoring previous memory policy: 4 00:05:24.037 EAL: Calling mem event callback 'spdk:(nil)' 00:05:24.037 EAL: request: mp_malloc_sync 00:05:24.037 EAL: No shared files mode enabled, IPC is disabled 00:05:24.037 EAL: Heap on socket 0 was expanded by 34MB 00:05:24.037 EAL: Calling mem event callback 'spdk:(nil)' 00:05:24.037 EAL: request: mp_malloc_sync 00:05:24.037 EAL: No shared files mode enabled, IPC is disabled 00:05:24.037 EAL: Heap on socket 0 was shrunk by 34MB 00:05:24.037 EAL: Trying to obtain current memory policy. 00:05:24.037 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:24.037 EAL: Restoring previous memory policy: 4 00:05:24.037 EAL: Calling mem event callback 'spdk:(nil)' 00:05:24.037 EAL: request: mp_malloc_sync 00:05:24.037 EAL: No shared files mode enabled, IPC is disabled 00:05:24.037 EAL: Heap on socket 0 was expanded by 66MB 00:05:24.297 EAL: Calling mem event callback 'spdk:(nil)' 00:05:24.297 EAL: request: mp_malloc_sync 00:05:24.297 EAL: No shared files mode enabled, IPC is disabled 00:05:24.297 EAL: Heap on socket 0 was shrunk by 66MB 00:05:24.297 EAL: Trying to obtain current memory policy. 00:05:24.297 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:24.297 EAL: Restoring previous memory policy: 4 00:05:24.297 EAL: Calling mem event callback 'spdk:(nil)' 00:05:24.297 EAL: request: mp_malloc_sync 00:05:24.297 EAL: No shared files mode enabled, IPC is disabled 00:05:24.297 EAL: Heap on socket 0 was expanded by 130MB 00:05:24.297 EAL: Calling mem event callback 'spdk:(nil)' 00:05:24.297 EAL: request: mp_malloc_sync 00:05:24.297 EAL: No shared files mode enabled, IPC is disabled 00:05:24.297 EAL: Heap on socket 0 was shrunk by 130MB 00:05:24.297 EAL: Trying to obtain current memory policy. 00:05:24.297 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:24.297 EAL: Restoring previous memory policy: 4 00:05:24.297 EAL: Calling mem event callback 'spdk:(nil)' 00:05:24.297 EAL: request: mp_malloc_sync 00:05:24.297 EAL: No shared files mode enabled, IPC is disabled 00:05:24.297 EAL: Heap on socket 0 was expanded by 258MB 00:05:24.297 EAL: Calling mem event callback 'spdk:(nil)' 00:05:24.557 EAL: request: mp_malloc_sync 00:05:24.557 EAL: No shared files mode enabled, IPC is disabled 00:05:24.557 EAL: Heap on socket 0 was shrunk by 258MB 00:05:24.557 EAL: Trying to obtain current memory policy. 
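Each "Heap on socket 0 was expanded by ..." / "shrunk by ..." pair above is served from the 2048 kB hugepages listed in the earlier setup.sh status table, so the test's footprint can be watched from outside it. A sketch of the counters involved (standard procfs/sysfs paths, not SPDK-specific):

    # System-wide hugepage pool
    grep -E 'HugePages_(Total|Free)' /proc/meminfo

    # Per-NUMA-node counts, matching the 'node hugesize free / total' table above
    for n in /sys/devices/system/node/node*; do
        hp="$n/hugepages/hugepages-2048kB"
        echo "$(basename "$n") free=$(cat "$hp/free_hugepages") total=$(cat "$hp/nr_hugepages")"
    done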
00:05:24.557 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:24.557 EAL: Restoring previous memory policy: 4 00:05:24.557 EAL: Calling mem event callback 'spdk:(nil)' 00:05:24.557 EAL: request: mp_malloc_sync 00:05:24.557 EAL: No shared files mode enabled, IPC is disabled 00:05:24.557 EAL: Heap on socket 0 was expanded by 514MB 00:05:24.815 EAL: Calling mem event callback 'spdk:(nil)' 00:05:24.815 EAL: request: mp_malloc_sync 00:05:24.815 EAL: No shared files mode enabled, IPC is disabled 00:05:24.815 EAL: Heap on socket 0 was shrunk by 514MB 00:05:24.815 EAL: Trying to obtain current memory policy. 00:05:24.815 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:25.073 EAL: Restoring previous memory policy: 4 00:05:25.073 EAL: Calling mem event callback 'spdk:(nil)' 00:05:25.073 EAL: request: mp_malloc_sync 00:05:25.073 EAL: No shared files mode enabled, IPC is disabled 00:05:25.073 EAL: Heap on socket 0 was expanded by 1026MB 00:05:25.331 EAL: Calling mem event callback 'spdk:(nil)' 00:05:25.590 EAL: request: mp_malloc_sync 00:05:25.590 EAL: No shared files mode enabled, IPC is disabled 00:05:25.590 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:25.590 passed 00:05:25.590 00:05:25.590 Run Summary: Type Total Ran Passed Failed Inactive 00:05:25.590 suites 1 1 n/a 0 0 00:05:25.590 tests 2 2 2 0 0 00:05:25.590 asserts 497 497 497 0 n/a 00:05:25.590 00:05:25.590 Elapsed time = 1.364 seconds 00:05:25.590 EAL: Calling mem event callback 'spdk:(nil)' 00:05:25.590 EAL: request: mp_malloc_sync 00:05:25.590 EAL: No shared files mode enabled, IPC is disabled 00:05:25.590 EAL: Heap on socket 0 was shrunk by 2MB 00:05:25.590 EAL: No shared files mode enabled, IPC is disabled 00:05:25.590 EAL: No shared files mode enabled, IPC is disabled 00:05:25.590 EAL: No shared files mode enabled, IPC is disabled 00:05:25.590 00:05:25.590 real 0m1.481s 00:05:25.590 user 0m0.836s 00:05:25.590 sys 0m0.611s 00:05:25.590 03:37:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:25.590 03:37:44 -- common/autotest_common.sh@10 -- # set +x 00:05:25.590 ************************************ 00:05:25.590 END TEST env_vtophys 00:05:25.590 ************************************ 00:05:25.590 03:37:44 -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:25.590 03:37:44 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:25.590 03:37:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:25.590 03:37:44 -- common/autotest_common.sh@10 -- # set +x 00:05:25.590 ************************************ 00:05:25.590 START TEST env_pci 00:05:25.590 ************************************ 00:05:25.590 03:37:44 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:25.590 00:05:25.590 00:05:25.590 CUnit - A unit testing framework for C - Version 2.1-3 00:05:25.590 http://cunit.sourceforge.net/ 00:05:25.590 00:05:25.590 00:05:25.590 Suite: pci 00:05:25.590 Test: pci_hook ...[2024-07-14 03:37:44.373961] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 2256606 has claimed it 00:05:25.590 EAL: Cannot find device (10000:00:01.0) 00:05:25.590 EAL: Failed to attach device on primary process 00:05:25.590 passed 00:05:25.590 00:05:25.590 Run Summary: Type Total Ran Passed Failed Inactive 00:05:25.590 suites 1 1 n/a 0 0 00:05:25.590 tests 1 1 1 0 0 
00:05:25.590 asserts 25 25 25 0 n/a 00:05:25.590 00:05:25.590 Elapsed time = 0.022 seconds 00:05:25.590 00:05:25.590 real 0m0.035s 00:05:25.590 user 0m0.012s 00:05:25.590 sys 0m0.023s 00:05:25.590 03:37:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:25.590 03:37:44 -- common/autotest_common.sh@10 -- # set +x 00:05:25.590 ************************************ 00:05:25.590 END TEST env_pci 00:05:25.590 ************************************ 00:05:25.590 03:37:44 -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:25.590 03:37:44 -- env/env.sh@15 -- # uname 00:05:25.590 03:37:44 -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:25.590 03:37:44 -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:25.590 03:37:44 -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:25.590 03:37:44 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:05:25.590 03:37:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:25.590 03:37:44 -- common/autotest_common.sh@10 -- # set +x 00:05:25.590 ************************************ 00:05:25.590 START TEST env_dpdk_post_init 00:05:25.590 ************************************ 00:05:25.590 03:37:44 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:25.590 EAL: Detected CPU lcores: 48 00:05:25.590 EAL: Detected NUMA nodes: 2 00:05:25.590 EAL: Detected shared linkage of DPDK 00:05:25.590 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:25.590 EAL: Selected IOVA mode 'VA' 00:05:25.590 EAL: No free 2048 kB hugepages reported on node 1 00:05:25.590 EAL: VFIO support initialized 00:05:25.590 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:25.590 EAL: Using IOMMU type 1 (Type 1) 00:05:25.850 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:00:04.0 (socket 0) 00:05:25.850 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:00:04.1 (socket 0) 00:05:25.850 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:00:04.2 (socket 0) 00:05:25.850 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:00:04.3 (socket 0) 00:05:25.850 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:00:04.4 (socket 0) 00:05:25.850 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:00:04.5 (socket 0) 00:05:25.850 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:00:04.6 (socket 0) 00:05:25.850 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:00:04.7 (socket 0) 00:05:25.850 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:80:04.0 (socket 1) 00:05:25.850 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:80:04.1 (socket 1) 00:05:25.850 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:80:04.2 (socket 1) 00:05:25.850 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:80:04.3 (socket 1) 00:05:25.850 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:80:04.4 (socket 1) 00:05:25.850 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:80:04.5 (socket 1) 00:05:25.850 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:80:04.6 (socket 1) 00:05:25.850 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:80:04.7 (socket 1) 00:05:26.788 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:88:00.0 (socket 1) 00:05:30.086 EAL: Releasing PCI mapped resource for 0000:88:00.0 00:05:30.086 EAL: 
Calling pci_unmap_resource for 0000:88:00.0 at 0x202001040000 00:05:30.086 Starting DPDK initialization... 00:05:30.086 Starting SPDK post initialization... 00:05:30.086 SPDK NVMe probe 00:05:30.086 Attaching to 0000:88:00.0 00:05:30.086 Attached to 0000:88:00.0 00:05:30.086 Cleaning up... 00:05:30.086 00:05:30.086 real 0m4.443s 00:05:30.086 user 0m3.283s 00:05:30.086 sys 0m0.214s 00:05:30.086 03:37:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:30.086 03:37:48 -- common/autotest_common.sh@10 -- # set +x 00:05:30.086 ************************************ 00:05:30.086 END TEST env_dpdk_post_init 00:05:30.086 ************************************ 00:05:30.086 03:37:48 -- env/env.sh@26 -- # uname 00:05:30.086 03:37:48 -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:30.086 03:37:48 -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:30.086 03:37:48 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:30.086 03:37:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:30.086 03:37:48 -- common/autotest_common.sh@10 -- # set +x 00:05:30.086 ************************************ 00:05:30.086 START TEST env_mem_callbacks 00:05:30.086 ************************************ 00:05:30.086 03:37:48 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:30.086 EAL: Detected CPU lcores: 48 00:05:30.086 EAL: Detected NUMA nodes: 2 00:05:30.086 EAL: Detected shared linkage of DPDK 00:05:30.086 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:30.086 EAL: Selected IOVA mode 'VA' 00:05:30.086 EAL: No free 2048 kB hugepages reported on node 1 00:05:30.086 EAL: VFIO support initialized 00:05:30.086 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:30.086 00:05:30.086 00:05:30.086 CUnit - A unit testing framework for C - Version 2.1-3 00:05:30.086 http://cunit.sourceforge.net/ 00:05:30.086 00:05:30.086 00:05:30.086 Suite: memory 00:05:30.086 Test: test ... 
00:05:30.086 register 0x200000200000 2097152 00:05:30.086 malloc 3145728 00:05:30.086 register 0x200000400000 4194304 00:05:30.086 buf 0x200000500000 len 3145728 PASSED 00:05:30.086 malloc 64 00:05:30.086 buf 0x2000004fff40 len 64 PASSED 00:05:30.086 malloc 4194304 00:05:30.086 register 0x200000800000 6291456 00:05:30.086 buf 0x200000a00000 len 4194304 PASSED 00:05:30.086 free 0x200000500000 3145728 00:05:30.086 free 0x2000004fff40 64 00:05:30.086 unregister 0x200000400000 4194304 PASSED 00:05:30.086 free 0x200000a00000 4194304 00:05:30.086 unregister 0x200000800000 6291456 PASSED 00:05:30.086 malloc 8388608 00:05:30.086 register 0x200000400000 10485760 00:05:30.086 buf 0x200000600000 len 8388608 PASSED 00:05:30.086 free 0x200000600000 8388608 00:05:30.086 unregister 0x200000400000 10485760 PASSED 00:05:30.086 passed 00:05:30.086 00:05:30.086 Run Summary: Type Total Ran Passed Failed Inactive 00:05:30.086 suites 1 1 n/a 0 0 00:05:30.086 tests 1 1 1 0 0 00:05:30.086 asserts 15 15 15 0 n/a 00:05:30.086 00:05:30.086 Elapsed time = 0.005 seconds 00:05:30.086 00:05:30.086 real 0m0.050s 00:05:30.086 user 0m0.016s 00:05:30.086 sys 0m0.034s 00:05:30.086 03:37:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:30.086 03:37:48 -- common/autotest_common.sh@10 -- # set +x 00:05:30.086 ************************************ 00:05:30.086 END TEST env_mem_callbacks 00:05:30.086 ************************************ 00:05:30.086 00:05:30.086 real 0m6.349s 00:05:30.086 user 0m4.364s 00:05:30.086 sys 0m1.029s 00:05:30.086 03:37:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:30.086 03:37:48 -- common/autotest_common.sh@10 -- # set +x 00:05:30.086 ************************************ 00:05:30.086 END TEST env 00:05:30.086 ************************************ 00:05:30.086 03:37:48 -- spdk/autotest.sh@176 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:30.086 03:37:48 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:30.086 03:37:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:30.086 03:37:48 -- common/autotest_common.sh@10 -- # set +x 00:05:30.086 ************************************ 00:05:30.086 START TEST rpc 00:05:30.086 ************************************ 00:05:30.086 03:37:48 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:30.345 * Looking for test storage... 00:05:30.345 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:30.345 03:37:49 -- rpc/rpc.sh@65 -- # spdk_pid=2257268 00:05:30.345 03:37:49 -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:05:30.345 03:37:49 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:30.345 03:37:49 -- rpc/rpc.sh@67 -- # waitforlisten 2257268 00:05:30.345 03:37:49 -- common/autotest_common.sh@819 -- # '[' -z 2257268 ']' 00:05:30.345 03:37:49 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:30.345 03:37:49 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:30.345 03:37:49 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:30.345 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
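The rpc_integrity test that follows drives the target purely over JSON-RPC. The same calls can be replayed by hand against a running spdk_tgt with the rpc.py client used throughout this run (it talks to /var/tmp/spdk.sock by default); a sketch using the RPC names seen in the trace below:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    $rpc bdev_malloc_create 8 512                      # 8 MiB malloc bdev, 512 B blocks; prints the name, e.g. Malloc0
    $rpc bdev_passthru_create -b Malloc0 -p Passthru0  # layer a passthru bdev on top of it
    $rpc bdev_get_bdevs | jq length                    # expect 2 bdevs at this point
    $rpc bdev_passthru_delete Passthru0                # tear down in reverse order
    $rpc bdev_malloc_delete Malloc0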
00:05:30.345 03:37:49 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:30.345 03:37:49 -- common/autotest_common.sh@10 -- # set +x 00:05:30.345 [2024-07-14 03:37:49.093243] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:05:30.345 [2024-07-14 03:37:49.093332] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2257268 ] 00:05:30.345 EAL: No free 2048 kB hugepages reported on node 1 00:05:30.346 [2024-07-14 03:37:49.150740] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:30.346 [2024-07-14 03:37:49.232682] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:30.346 [2024-07-14 03:37:49.232839] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:30.346 [2024-07-14 03:37:49.232879] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 2257268' to capture a snapshot of events at runtime. 00:05:30.346 [2024-07-14 03:37:49.232893] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid2257268 for offline analysis/debug. 00:05:30.346 [2024-07-14 03:37:49.232920] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:31.282 03:37:50 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:31.282 03:37:50 -- common/autotest_common.sh@852 -- # return 0 00:05:31.282 03:37:50 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:31.282 03:37:50 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:31.282 03:37:50 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:31.282 03:37:50 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:31.282 03:37:50 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:31.282 03:37:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:31.282 03:37:50 -- common/autotest_common.sh@10 -- # set +x 00:05:31.282 ************************************ 00:05:31.282 START TEST rpc_integrity 00:05:31.282 ************************************ 00:05:31.282 03:37:50 -- common/autotest_common.sh@1104 -- # rpc_integrity 00:05:31.282 03:37:50 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:31.282 03:37:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:31.282 03:37:50 -- common/autotest_common.sh@10 -- # set +x 00:05:31.282 03:37:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:31.282 03:37:50 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:31.282 03:37:50 -- rpc/rpc.sh@13 -- # jq length 00:05:31.282 03:37:50 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:31.282 03:37:50 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:31.282 03:37:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:31.282 03:37:50 -- common/autotest_common.sh@10 -- # set +x 00:05:31.282 03:37:50 -- common/autotest_common.sh@579 -- # 
[[ 0 == 0 ]] 00:05:31.282 03:37:50 -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:31.282 03:37:50 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:31.282 03:37:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:31.282 03:37:50 -- common/autotest_common.sh@10 -- # set +x 00:05:31.282 03:37:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:31.282 03:37:50 -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:31.282 { 00:05:31.282 "name": "Malloc0", 00:05:31.282 "aliases": [ 00:05:31.282 "8c49b266-5306-4fdf-96d3-88b1d62dc4e3" 00:05:31.282 ], 00:05:31.282 "product_name": "Malloc disk", 00:05:31.282 "block_size": 512, 00:05:31.282 "num_blocks": 16384, 00:05:31.282 "uuid": "8c49b266-5306-4fdf-96d3-88b1d62dc4e3", 00:05:31.282 "assigned_rate_limits": { 00:05:31.282 "rw_ios_per_sec": 0, 00:05:31.282 "rw_mbytes_per_sec": 0, 00:05:31.282 "r_mbytes_per_sec": 0, 00:05:31.282 "w_mbytes_per_sec": 0 00:05:31.282 }, 00:05:31.282 "claimed": false, 00:05:31.282 "zoned": false, 00:05:31.282 "supported_io_types": { 00:05:31.282 "read": true, 00:05:31.282 "write": true, 00:05:31.282 "unmap": true, 00:05:31.282 "write_zeroes": true, 00:05:31.282 "flush": true, 00:05:31.282 "reset": true, 00:05:31.282 "compare": false, 00:05:31.282 "compare_and_write": false, 00:05:31.282 "abort": true, 00:05:31.282 "nvme_admin": false, 00:05:31.282 "nvme_io": false 00:05:31.282 }, 00:05:31.282 "memory_domains": [ 00:05:31.282 { 00:05:31.282 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:31.282 "dma_device_type": 2 00:05:31.282 } 00:05:31.282 ], 00:05:31.282 "driver_specific": {} 00:05:31.282 } 00:05:31.282 ]' 00:05:31.282 03:37:50 -- rpc/rpc.sh@17 -- # jq length 00:05:31.282 03:37:50 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:31.282 03:37:50 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:31.282 03:37:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:31.282 03:37:50 -- common/autotest_common.sh@10 -- # set +x 00:05:31.282 [2024-07-14 03:37:50.135666] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:31.282 [2024-07-14 03:37:50.135721] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:31.282 [2024-07-14 03:37:50.135746] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x17613b0 00:05:31.282 [2024-07-14 03:37:50.135762] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:31.282 [2024-07-14 03:37:50.137114] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:31.282 [2024-07-14 03:37:50.137139] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:31.282 Passthru0 00:05:31.282 03:37:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:31.282 03:37:50 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:31.282 03:37:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:31.282 03:37:50 -- common/autotest_common.sh@10 -- # set +x 00:05:31.282 03:37:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:31.282 03:37:50 -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:31.282 { 00:05:31.282 "name": "Malloc0", 00:05:31.282 "aliases": [ 00:05:31.282 "8c49b266-5306-4fdf-96d3-88b1d62dc4e3" 00:05:31.282 ], 00:05:31.282 "product_name": "Malloc disk", 00:05:31.282 "block_size": 512, 00:05:31.282 "num_blocks": 16384, 00:05:31.282 "uuid": "8c49b266-5306-4fdf-96d3-88b1d62dc4e3", 00:05:31.282 "assigned_rate_limits": { 00:05:31.282 "rw_ios_per_sec": 0, 00:05:31.282 "rw_mbytes_per_sec": 0, 00:05:31.282 
"r_mbytes_per_sec": 0, 00:05:31.282 "w_mbytes_per_sec": 0 00:05:31.282 }, 00:05:31.282 "claimed": true, 00:05:31.282 "claim_type": "exclusive_write", 00:05:31.282 "zoned": false, 00:05:31.282 "supported_io_types": { 00:05:31.282 "read": true, 00:05:31.282 "write": true, 00:05:31.282 "unmap": true, 00:05:31.282 "write_zeroes": true, 00:05:31.282 "flush": true, 00:05:31.282 "reset": true, 00:05:31.282 "compare": false, 00:05:31.282 "compare_and_write": false, 00:05:31.282 "abort": true, 00:05:31.282 "nvme_admin": false, 00:05:31.282 "nvme_io": false 00:05:31.282 }, 00:05:31.282 "memory_domains": [ 00:05:31.282 { 00:05:31.282 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:31.282 "dma_device_type": 2 00:05:31.282 } 00:05:31.282 ], 00:05:31.282 "driver_specific": {} 00:05:31.282 }, 00:05:31.282 { 00:05:31.282 "name": "Passthru0", 00:05:31.282 "aliases": [ 00:05:31.282 "360b64dc-5f9a-56cd-a9a4-647d457f5b21" 00:05:31.282 ], 00:05:31.282 "product_name": "passthru", 00:05:31.282 "block_size": 512, 00:05:31.282 "num_blocks": 16384, 00:05:31.282 "uuid": "360b64dc-5f9a-56cd-a9a4-647d457f5b21", 00:05:31.282 "assigned_rate_limits": { 00:05:31.282 "rw_ios_per_sec": 0, 00:05:31.282 "rw_mbytes_per_sec": 0, 00:05:31.282 "r_mbytes_per_sec": 0, 00:05:31.282 "w_mbytes_per_sec": 0 00:05:31.282 }, 00:05:31.282 "claimed": false, 00:05:31.282 "zoned": false, 00:05:31.282 "supported_io_types": { 00:05:31.282 "read": true, 00:05:31.282 "write": true, 00:05:31.282 "unmap": true, 00:05:31.282 "write_zeroes": true, 00:05:31.282 "flush": true, 00:05:31.282 "reset": true, 00:05:31.282 "compare": false, 00:05:31.282 "compare_and_write": false, 00:05:31.282 "abort": true, 00:05:31.282 "nvme_admin": false, 00:05:31.282 "nvme_io": false 00:05:31.282 }, 00:05:31.282 "memory_domains": [ 00:05:31.282 { 00:05:31.282 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:31.282 "dma_device_type": 2 00:05:31.282 } 00:05:31.282 ], 00:05:31.282 "driver_specific": { 00:05:31.282 "passthru": { 00:05:31.282 "name": "Passthru0", 00:05:31.282 "base_bdev_name": "Malloc0" 00:05:31.282 } 00:05:31.282 } 00:05:31.282 } 00:05:31.282 ]' 00:05:31.282 03:37:50 -- rpc/rpc.sh@21 -- # jq length 00:05:31.282 03:37:50 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:31.282 03:37:50 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:31.282 03:37:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:31.282 03:37:50 -- common/autotest_common.sh@10 -- # set +x 00:05:31.282 03:37:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:31.282 03:37:50 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:31.282 03:37:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:31.282 03:37:50 -- common/autotest_common.sh@10 -- # set +x 00:05:31.282 03:37:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:31.282 03:37:50 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:31.282 03:37:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:31.282 03:37:50 -- common/autotest_common.sh@10 -- # set +x 00:05:31.282 03:37:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:31.282 03:37:50 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:31.282 03:37:50 -- rpc/rpc.sh@26 -- # jq length 00:05:31.542 03:37:50 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:31.542 00:05:31.543 real 0m0.231s 00:05:31.543 user 0m0.151s 00:05:31.543 sys 0m0.022s 00:05:31.543 03:37:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:31.543 03:37:50 -- common/autotest_common.sh@10 -- # set +x 00:05:31.543 ************************************ 
00:05:31.543 END TEST rpc_integrity 00:05:31.543 ************************************ 00:05:31.543 03:37:50 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:31.543 03:37:50 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:31.543 03:37:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:31.543 03:37:50 -- common/autotest_common.sh@10 -- # set +x 00:05:31.543 ************************************ 00:05:31.543 START TEST rpc_plugins 00:05:31.543 ************************************ 00:05:31.543 03:37:50 -- common/autotest_common.sh@1104 -- # rpc_plugins 00:05:31.543 03:37:50 -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:31.543 03:37:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:31.543 03:37:50 -- common/autotest_common.sh@10 -- # set +x 00:05:31.543 03:37:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:31.543 03:37:50 -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:31.543 03:37:50 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:31.543 03:37:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:31.543 03:37:50 -- common/autotest_common.sh@10 -- # set +x 00:05:31.543 03:37:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:31.543 03:37:50 -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:31.543 { 00:05:31.543 "name": "Malloc1", 00:05:31.543 "aliases": [ 00:05:31.543 "10088c6a-d342-4677-b5cf-39d8f9d1116e" 00:05:31.543 ], 00:05:31.543 "product_name": "Malloc disk", 00:05:31.543 "block_size": 4096, 00:05:31.543 "num_blocks": 256, 00:05:31.543 "uuid": "10088c6a-d342-4677-b5cf-39d8f9d1116e", 00:05:31.543 "assigned_rate_limits": { 00:05:31.543 "rw_ios_per_sec": 0, 00:05:31.543 "rw_mbytes_per_sec": 0, 00:05:31.543 "r_mbytes_per_sec": 0, 00:05:31.543 "w_mbytes_per_sec": 0 00:05:31.543 }, 00:05:31.543 "claimed": false, 00:05:31.543 "zoned": false, 00:05:31.543 "supported_io_types": { 00:05:31.543 "read": true, 00:05:31.543 "write": true, 00:05:31.543 "unmap": true, 00:05:31.543 "write_zeroes": true, 00:05:31.543 "flush": true, 00:05:31.543 "reset": true, 00:05:31.543 "compare": false, 00:05:31.543 "compare_and_write": false, 00:05:31.543 "abort": true, 00:05:31.543 "nvme_admin": false, 00:05:31.543 "nvme_io": false 00:05:31.543 }, 00:05:31.543 "memory_domains": [ 00:05:31.543 { 00:05:31.543 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:31.543 "dma_device_type": 2 00:05:31.543 } 00:05:31.543 ], 00:05:31.543 "driver_specific": {} 00:05:31.543 } 00:05:31.543 ]' 00:05:31.543 03:37:50 -- rpc/rpc.sh@32 -- # jq length 00:05:31.543 03:37:50 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:31.543 03:37:50 -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:31.543 03:37:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:31.543 03:37:50 -- common/autotest_common.sh@10 -- # set +x 00:05:31.543 03:37:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:31.543 03:37:50 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:31.543 03:37:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:31.543 03:37:50 -- common/autotest_common.sh@10 -- # set +x 00:05:31.543 03:37:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:31.543 03:37:50 -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:31.543 03:37:50 -- rpc/rpc.sh@36 -- # jq length 00:05:31.543 03:37:50 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:31.543 00:05:31.543 real 0m0.112s 00:05:31.543 user 0m0.072s 00:05:31.543 sys 0m0.011s 00:05:31.543 03:37:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:31.543 03:37:50 -- 
common/autotest_common.sh@10 -- # set +x 00:05:31.543 ************************************ 00:05:31.543 END TEST rpc_plugins 00:05:31.543 ************************************ 00:05:31.543 03:37:50 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:31.543 03:37:50 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:31.543 03:37:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:31.543 03:37:50 -- common/autotest_common.sh@10 -- # set +x 00:05:31.543 ************************************ 00:05:31.543 START TEST rpc_trace_cmd_test 00:05:31.543 ************************************ 00:05:31.543 03:37:50 -- common/autotest_common.sh@1104 -- # rpc_trace_cmd_test 00:05:31.543 03:37:50 -- rpc/rpc.sh@40 -- # local info 00:05:31.543 03:37:50 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:31.543 03:37:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:31.543 03:37:50 -- common/autotest_common.sh@10 -- # set +x 00:05:31.543 03:37:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:31.543 03:37:50 -- rpc/rpc.sh@42 -- # info='{ 00:05:31.543 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid2257268", 00:05:31.543 "tpoint_group_mask": "0x8", 00:05:31.543 "iscsi_conn": { 00:05:31.543 "mask": "0x2", 00:05:31.543 "tpoint_mask": "0x0" 00:05:31.543 }, 00:05:31.543 "scsi": { 00:05:31.543 "mask": "0x4", 00:05:31.543 "tpoint_mask": "0x0" 00:05:31.543 }, 00:05:31.543 "bdev": { 00:05:31.543 "mask": "0x8", 00:05:31.543 "tpoint_mask": "0xffffffffffffffff" 00:05:31.543 }, 00:05:31.543 "nvmf_rdma": { 00:05:31.543 "mask": "0x10", 00:05:31.543 "tpoint_mask": "0x0" 00:05:31.543 }, 00:05:31.543 "nvmf_tcp": { 00:05:31.543 "mask": "0x20", 00:05:31.543 "tpoint_mask": "0x0" 00:05:31.543 }, 00:05:31.543 "ftl": { 00:05:31.543 "mask": "0x40", 00:05:31.543 "tpoint_mask": "0x0" 00:05:31.543 }, 00:05:31.543 "blobfs": { 00:05:31.543 "mask": "0x80", 00:05:31.543 "tpoint_mask": "0x0" 00:05:31.543 }, 00:05:31.543 "dsa": { 00:05:31.543 "mask": "0x200", 00:05:31.543 "tpoint_mask": "0x0" 00:05:31.543 }, 00:05:31.543 "thread": { 00:05:31.543 "mask": "0x400", 00:05:31.543 "tpoint_mask": "0x0" 00:05:31.543 }, 00:05:31.543 "nvme_pcie": { 00:05:31.543 "mask": "0x800", 00:05:31.543 "tpoint_mask": "0x0" 00:05:31.543 }, 00:05:31.543 "iaa": { 00:05:31.543 "mask": "0x1000", 00:05:31.543 "tpoint_mask": "0x0" 00:05:31.543 }, 00:05:31.543 "nvme_tcp": { 00:05:31.543 "mask": "0x2000", 00:05:31.543 "tpoint_mask": "0x0" 00:05:31.543 }, 00:05:31.543 "bdev_nvme": { 00:05:31.543 "mask": "0x4000", 00:05:31.543 "tpoint_mask": "0x0" 00:05:31.543 } 00:05:31.543 }' 00:05:31.543 03:37:50 -- rpc/rpc.sh@43 -- # jq length 00:05:31.543 03:37:50 -- rpc/rpc.sh@43 -- # '[' 15 -gt 2 ']' 00:05:31.543 03:37:50 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:31.803 03:37:50 -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:31.803 03:37:50 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:31.803 03:37:50 -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:31.803 03:37:50 -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:31.803 03:37:50 -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:31.803 03:37:50 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:31.803 03:37:50 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:31.803 00:05:31.803 real 0m0.200s 00:05:31.803 user 0m0.172s 00:05:31.803 sys 0m0.018s 00:05:31.803 03:37:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:31.803 03:37:50 -- common/autotest_common.sh@10 -- # set +x 00:05:31.803 ************************************ 
00:05:31.803 END TEST rpc_trace_cmd_test 00:05:31.803 ************************************ 00:05:31.803 03:37:50 -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:31.803 03:37:50 -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:31.803 03:37:50 -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:31.803 03:37:50 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:31.803 03:37:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:31.803 03:37:50 -- common/autotest_common.sh@10 -- # set +x 00:05:31.803 ************************************ 00:05:31.803 START TEST rpc_daemon_integrity 00:05:31.803 ************************************ 00:05:31.803 03:37:50 -- common/autotest_common.sh@1104 -- # rpc_integrity 00:05:31.803 03:37:50 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:31.803 03:37:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:31.803 03:37:50 -- common/autotest_common.sh@10 -- # set +x 00:05:31.803 03:37:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:31.803 03:37:50 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:31.803 03:37:50 -- rpc/rpc.sh@13 -- # jq length 00:05:31.803 03:37:50 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:31.803 03:37:50 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:31.803 03:37:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:31.803 03:37:50 -- common/autotest_common.sh@10 -- # set +x 00:05:31.803 03:37:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:31.803 03:37:50 -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:31.803 03:37:50 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:31.803 03:37:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:31.803 03:37:50 -- common/autotest_common.sh@10 -- # set +x 00:05:31.803 03:37:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:31.803 03:37:50 -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:31.803 { 00:05:31.803 "name": "Malloc2", 00:05:31.803 "aliases": [ 00:05:31.803 "a5692a24-c1f5-44cd-88c9-47d27fbd9971" 00:05:31.803 ], 00:05:31.803 "product_name": "Malloc disk", 00:05:31.803 "block_size": 512, 00:05:31.803 "num_blocks": 16384, 00:05:31.803 "uuid": "a5692a24-c1f5-44cd-88c9-47d27fbd9971", 00:05:31.803 "assigned_rate_limits": { 00:05:31.803 "rw_ios_per_sec": 0, 00:05:31.803 "rw_mbytes_per_sec": 0, 00:05:31.803 "r_mbytes_per_sec": 0, 00:05:31.803 "w_mbytes_per_sec": 0 00:05:31.803 }, 00:05:31.803 "claimed": false, 00:05:31.803 "zoned": false, 00:05:31.803 "supported_io_types": { 00:05:31.803 "read": true, 00:05:31.803 "write": true, 00:05:31.803 "unmap": true, 00:05:31.803 "write_zeroes": true, 00:05:31.803 "flush": true, 00:05:31.803 "reset": true, 00:05:31.803 "compare": false, 00:05:31.803 "compare_and_write": false, 00:05:31.803 "abort": true, 00:05:31.803 "nvme_admin": false, 00:05:31.803 "nvme_io": false 00:05:31.803 }, 00:05:31.803 "memory_domains": [ 00:05:31.803 { 00:05:31.803 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:31.803 "dma_device_type": 2 00:05:31.803 } 00:05:31.803 ], 00:05:31.803 "driver_specific": {} 00:05:31.803 } 00:05:31.803 ]' 00:05:31.803 03:37:50 -- rpc/rpc.sh@17 -- # jq length 00:05:32.063 03:37:50 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:32.063 03:37:50 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:32.063 03:37:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:32.063 03:37:50 -- common/autotest_common.sh@10 -- # set +x 00:05:32.063 [2024-07-14 03:37:50.753434] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:32.063 [2024-07-14 
03:37:50.753480] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:32.063 [2024-07-14 03:37:50.753507] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1761020 00:05:32.063 [2024-07-14 03:37:50.753523] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:32.063 [2024-07-14 03:37:50.754863] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:32.063 [2024-07-14 03:37:50.754899] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:32.063 Passthru0 00:05:32.063 03:37:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:32.063 03:37:50 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:32.063 03:37:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:32.063 03:37:50 -- common/autotest_common.sh@10 -- # set +x 00:05:32.063 03:37:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:32.063 03:37:50 -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:32.063 { 00:05:32.063 "name": "Malloc2", 00:05:32.063 "aliases": [ 00:05:32.063 "a5692a24-c1f5-44cd-88c9-47d27fbd9971" 00:05:32.063 ], 00:05:32.063 "product_name": "Malloc disk", 00:05:32.063 "block_size": 512, 00:05:32.063 "num_blocks": 16384, 00:05:32.063 "uuid": "a5692a24-c1f5-44cd-88c9-47d27fbd9971", 00:05:32.063 "assigned_rate_limits": { 00:05:32.063 "rw_ios_per_sec": 0, 00:05:32.063 "rw_mbytes_per_sec": 0, 00:05:32.063 "r_mbytes_per_sec": 0, 00:05:32.063 "w_mbytes_per_sec": 0 00:05:32.063 }, 00:05:32.063 "claimed": true, 00:05:32.063 "claim_type": "exclusive_write", 00:05:32.063 "zoned": false, 00:05:32.063 "supported_io_types": { 00:05:32.063 "read": true, 00:05:32.063 "write": true, 00:05:32.063 "unmap": true, 00:05:32.063 "write_zeroes": true, 00:05:32.063 "flush": true, 00:05:32.063 "reset": true, 00:05:32.063 "compare": false, 00:05:32.063 "compare_and_write": false, 00:05:32.063 "abort": true, 00:05:32.063 "nvme_admin": false, 00:05:32.063 "nvme_io": false 00:05:32.063 }, 00:05:32.063 "memory_domains": [ 00:05:32.063 { 00:05:32.063 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:32.063 "dma_device_type": 2 00:05:32.063 } 00:05:32.063 ], 00:05:32.063 "driver_specific": {} 00:05:32.063 }, 00:05:32.063 { 00:05:32.063 "name": "Passthru0", 00:05:32.063 "aliases": [ 00:05:32.063 "ed96f9b7-a078-5955-979d-bda1acdd084b" 00:05:32.063 ], 00:05:32.063 "product_name": "passthru", 00:05:32.063 "block_size": 512, 00:05:32.063 "num_blocks": 16384, 00:05:32.063 "uuid": "ed96f9b7-a078-5955-979d-bda1acdd084b", 00:05:32.063 "assigned_rate_limits": { 00:05:32.063 "rw_ios_per_sec": 0, 00:05:32.063 "rw_mbytes_per_sec": 0, 00:05:32.063 "r_mbytes_per_sec": 0, 00:05:32.063 "w_mbytes_per_sec": 0 00:05:32.063 }, 00:05:32.063 "claimed": false, 00:05:32.063 "zoned": false, 00:05:32.063 "supported_io_types": { 00:05:32.063 "read": true, 00:05:32.063 "write": true, 00:05:32.063 "unmap": true, 00:05:32.063 "write_zeroes": true, 00:05:32.063 "flush": true, 00:05:32.063 "reset": true, 00:05:32.063 "compare": false, 00:05:32.063 "compare_and_write": false, 00:05:32.063 "abort": true, 00:05:32.063 "nvme_admin": false, 00:05:32.063 "nvme_io": false 00:05:32.063 }, 00:05:32.063 "memory_domains": [ 00:05:32.063 { 00:05:32.063 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:32.064 "dma_device_type": 2 00:05:32.064 } 00:05:32.064 ], 00:05:32.064 "driver_specific": { 00:05:32.064 "passthru": { 00:05:32.064 "name": "Passthru0", 00:05:32.064 "base_bdev_name": "Malloc2" 00:05:32.064 } 00:05:32.064 } 00:05:32.064 } 
00:05:32.064 ]' 00:05:32.064 03:37:50 -- rpc/rpc.sh@21 -- # jq length 00:05:32.064 03:37:50 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:32.064 03:37:50 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:32.064 03:37:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:32.064 03:37:50 -- common/autotest_common.sh@10 -- # set +x 00:05:32.064 03:37:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:32.064 03:37:50 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:32.064 03:37:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:32.064 03:37:50 -- common/autotest_common.sh@10 -- # set +x 00:05:32.064 03:37:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:32.064 03:37:50 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:32.064 03:37:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:32.064 03:37:50 -- common/autotest_common.sh@10 -- # set +x 00:05:32.064 03:37:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:32.064 03:37:50 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:32.064 03:37:50 -- rpc/rpc.sh@26 -- # jq length 00:05:32.064 03:37:50 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:32.064 00:05:32.064 real 0m0.228s 00:05:32.064 user 0m0.150s 00:05:32.064 sys 0m0.018s 00:05:32.064 03:37:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:32.064 03:37:50 -- common/autotest_common.sh@10 -- # set +x 00:05:32.064 ************************************ 00:05:32.064 END TEST rpc_daemon_integrity 00:05:32.064 ************************************ 00:05:32.064 03:37:50 -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:32.064 03:37:50 -- rpc/rpc.sh@84 -- # killprocess 2257268 00:05:32.064 03:37:50 -- common/autotest_common.sh@926 -- # '[' -z 2257268 ']' 00:05:32.064 03:37:50 -- common/autotest_common.sh@930 -- # kill -0 2257268 00:05:32.064 03:37:50 -- common/autotest_common.sh@931 -- # uname 00:05:32.064 03:37:50 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:32.064 03:37:50 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2257268 00:05:32.064 03:37:50 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:32.064 03:37:50 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:32.064 03:37:50 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2257268' 00:05:32.064 killing process with pid 2257268 00:05:32.064 03:37:50 -- common/autotest_common.sh@945 -- # kill 2257268 00:05:32.064 03:37:50 -- common/autotest_common.sh@950 -- # wait 2257268 00:05:32.633 00:05:32.633 real 0m2.321s 00:05:32.633 user 0m2.953s 00:05:32.633 sys 0m0.576s 00:05:32.633 03:37:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:32.633 03:37:51 -- common/autotest_common.sh@10 -- # set +x 00:05:32.633 ************************************ 00:05:32.633 END TEST rpc 00:05:32.633 ************************************ 00:05:32.633 03:37:51 -- spdk/autotest.sh@177 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:32.633 03:37:51 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:32.633 03:37:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:32.633 03:37:51 -- common/autotest_common.sh@10 -- # set +x 00:05:32.633 ************************************ 00:05:32.633 START TEST rpc_client 00:05:32.633 ************************************ 00:05:32.633 03:37:51 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 
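Note: the rpc_daemon_integrity run earlier in this log is, in effect, a compact recipe for layering a passthru bdev on a malloc bdev over the plain RPC socket. A minimal sketch of that same flow, assuming spdk_tgt is already listening on the default /var/tmp/spdk.sock and rpc.py is invoked from the SPDK repo root (the names Malloc2/Passthru0 are simply the values seen in this log):

    sock=/var/tmp/spdk.sock
    malloc=$(./scripts/rpc.py -s $sock bdev_malloc_create 8 512)   # 8 MB malloc bdev, 512-byte blocks; prints a name such as Malloc2
    ./scripts/rpc.py -s $sock bdev_passthru_create -b "$malloc" -p Passthru0
    ./scripts/rpc.py -s $sock bdev_get_bdevs | jq length           # expect 2: the malloc bdev plus the passthru on top of it
    ./scripts/rpc.py -s $sock bdev_passthru_delete Passthru0       # tear down in reverse order
    ./scripts/rpc.py -s $sock bdev_malloc_delete "$malloc"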
00:05:32.633 * Looking for test storage... 00:05:32.633 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:05:32.633 03:37:51 -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:05:32.633 OK 00:05:32.633 03:37:51 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:32.633 00:05:32.633 real 0m0.066s 00:05:32.633 user 0m0.019s 00:05:32.633 sys 0m0.052s 00:05:32.633 03:37:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:32.633 03:37:51 -- common/autotest_common.sh@10 -- # set +x 00:05:32.633 ************************************ 00:05:32.633 END TEST rpc_client 00:05:32.633 ************************************ 00:05:32.633 03:37:51 -- spdk/autotest.sh@178 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:32.633 03:37:51 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:32.633 03:37:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:32.633 03:37:51 -- common/autotest_common.sh@10 -- # set +x 00:05:32.633 ************************************ 00:05:32.633 START TEST json_config 00:05:32.633 ************************************ 00:05:32.633 03:37:51 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:32.633 03:37:51 -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:32.633 03:37:51 -- nvmf/common.sh@7 -- # uname -s 00:05:32.633 03:37:51 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:32.633 03:37:51 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:32.633 03:37:51 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:32.633 03:37:51 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:32.633 03:37:51 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:32.633 03:37:51 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:32.633 03:37:51 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:32.633 03:37:51 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:32.633 03:37:51 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:32.633 03:37:51 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:32.633 03:37:51 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:05:32.633 03:37:51 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:05:32.633 03:37:51 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:32.633 03:37:51 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:32.633 03:37:51 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:32.633 03:37:51 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:32.633 03:37:51 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:32.633 03:37:51 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:32.633 03:37:51 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:32.633 03:37:51 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:32.633 03:37:51 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:32.633 03:37:51 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:32.633 03:37:51 -- paths/export.sh@5 -- # export PATH 00:05:32.633 03:37:51 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:32.633 03:37:51 -- nvmf/common.sh@46 -- # : 0 00:05:32.633 03:37:51 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:05:32.633 03:37:51 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:05:32.633 03:37:51 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:05:32.633 03:37:51 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:32.633 03:37:51 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:32.633 03:37:51 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:05:32.633 03:37:51 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:05:32.633 03:37:51 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:05:32.633 03:37:51 -- json_config/json_config.sh@10 -- # [[ 0 -eq 1 ]] 00:05:32.633 03:37:51 -- json_config/json_config.sh@14 -- # [[ 0 -ne 1 ]] 00:05:32.633 03:37:51 -- json_config/json_config.sh@14 -- # [[ 0 -eq 1 ]] 00:05:32.634 03:37:51 -- json_config/json_config.sh@25 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:32.634 03:37:51 -- json_config/json_config.sh@30 -- # app_pid=(['target']='' ['initiator']='') 00:05:32.634 03:37:51 -- json_config/json_config.sh@30 -- # declare -A app_pid 00:05:32.634 03:37:51 -- json_config/json_config.sh@31 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:32.634 03:37:51 -- json_config/json_config.sh@31 -- # declare -A app_socket 00:05:32.634 03:37:51 -- json_config/json_config.sh@32 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:32.634 03:37:51 -- json_config/json_config.sh@32 -- # declare -A app_params 00:05:32.634 03:37:51 -- json_config/json_config.sh@33 -- # 
configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:05:32.634 03:37:51 -- json_config/json_config.sh@33 -- # declare -A configs_path 00:05:32.634 03:37:51 -- json_config/json_config.sh@43 -- # last_event_id=0 00:05:32.634 03:37:51 -- json_config/json_config.sh@418 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:32.634 03:37:51 -- json_config/json_config.sh@419 -- # echo 'INFO: JSON configuration test init' 00:05:32.634 INFO: JSON configuration test init 00:05:32.634 03:37:51 -- json_config/json_config.sh@420 -- # json_config_test_init 00:05:32.634 03:37:51 -- json_config/json_config.sh@315 -- # timing_enter json_config_test_init 00:05:32.634 03:37:51 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:32.634 03:37:51 -- common/autotest_common.sh@10 -- # set +x 00:05:32.634 03:37:51 -- json_config/json_config.sh@316 -- # timing_enter json_config_setup_target 00:05:32.634 03:37:51 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:32.634 03:37:51 -- common/autotest_common.sh@10 -- # set +x 00:05:32.634 03:37:51 -- json_config/json_config.sh@318 -- # json_config_test_start_app target --wait-for-rpc 00:05:32.634 03:37:51 -- json_config/json_config.sh@98 -- # local app=target 00:05:32.634 03:37:51 -- json_config/json_config.sh@99 -- # shift 00:05:32.634 03:37:51 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:05:32.634 03:37:51 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:05:32.634 03:37:51 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:05:32.634 03:37:51 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:05:32.634 03:37:51 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:05:32.634 03:37:51 -- json_config/json_config.sh@111 -- # app_pid[$app]=2257749 00:05:32.634 03:37:51 -- json_config/json_config.sh@110 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:32.634 03:37:51 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:05:32.634 Waiting for target to run... 00:05:32.634 03:37:51 -- json_config/json_config.sh@114 -- # waitforlisten 2257749 /var/tmp/spdk_tgt.sock 00:05:32.634 03:37:51 -- common/autotest_common.sh@819 -- # '[' -z 2257749 ']' 00:05:32.634 03:37:51 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:32.634 03:37:51 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:32.634 03:37:51 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:32.634 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:32.634 03:37:51 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:32.634 03:37:51 -- common/autotest_common.sh@10 -- # set +x 00:05:32.634 [2024-07-14 03:37:51.544333] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:05:32.634 [2024-07-14 03:37:51.544435] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2257749 ] 00:05:32.634 EAL: No free 2048 kB hugepages reported on node 1 00:05:33.201 [2024-07-14 03:37:52.063061] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:33.201 [2024-07-14 03:37:52.138802] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:33.201 [2024-07-14 03:37:52.139011] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:33.768 03:37:52 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:33.768 03:37:52 -- common/autotest_common.sh@852 -- # return 0 00:05:33.768 03:37:52 -- json_config/json_config.sh@115 -- # echo '' 00:05:33.768 00:05:33.768 03:37:52 -- json_config/json_config.sh@322 -- # create_accel_config 00:05:33.768 03:37:52 -- json_config/json_config.sh@146 -- # timing_enter create_accel_config 00:05:33.768 03:37:52 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:33.768 03:37:52 -- common/autotest_common.sh@10 -- # set +x 00:05:33.768 03:37:52 -- json_config/json_config.sh@148 -- # [[ 0 -eq 1 ]] 00:05:33.768 03:37:52 -- json_config/json_config.sh@154 -- # timing_exit create_accel_config 00:05:33.768 03:37:52 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:33.768 03:37:52 -- common/autotest_common.sh@10 -- # set +x 00:05:33.768 03:37:52 -- json_config/json_config.sh@326 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:33.768 03:37:52 -- json_config/json_config.sh@327 -- # tgt_rpc load_config 00:05:33.768 03:37:52 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:37.061 03:37:55 -- json_config/json_config.sh@329 -- # tgt_check_notification_types 00:05:37.061 03:37:55 -- json_config/json_config.sh@46 -- # timing_enter tgt_check_notification_types 00:05:37.061 03:37:55 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:37.061 03:37:55 -- common/autotest_common.sh@10 -- # set +x 00:05:37.061 03:37:55 -- json_config/json_config.sh@48 -- # local ret=0 00:05:37.061 03:37:55 -- json_config/json_config.sh@49 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:37.061 03:37:55 -- json_config/json_config.sh@49 -- # local enabled_types 00:05:37.061 03:37:55 -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:05:37.061 03:37:55 -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:05:37.061 03:37:55 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:37.061 03:37:55 -- json_config/json_config.sh@51 -- # get_types=('bdev_register' 'bdev_unregister') 00:05:37.061 03:37:55 -- json_config/json_config.sh@51 -- # local get_types 00:05:37.061 03:37:55 -- json_config/json_config.sh@52 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:05:37.061 03:37:55 -- json_config/json_config.sh@57 -- # timing_exit tgt_check_notification_types 00:05:37.061 03:37:55 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:37.061 03:37:55 -- common/autotest_common.sh@10 -- # set +x 00:05:37.061 03:37:55 -- json_config/json_config.sh@58 -- # return 0 00:05:37.061 03:37:55 -- 
json_config/json_config.sh@331 -- # [[ 0 -eq 1 ]] 00:05:37.061 03:37:55 -- json_config/json_config.sh@335 -- # [[ 0 -eq 1 ]] 00:05:37.061 03:37:55 -- json_config/json_config.sh@339 -- # [[ 0 -eq 1 ]] 00:05:37.061 03:37:55 -- json_config/json_config.sh@343 -- # [[ 1 -eq 1 ]] 00:05:37.061 03:37:55 -- json_config/json_config.sh@344 -- # create_nvmf_subsystem_config 00:05:37.061 03:37:55 -- json_config/json_config.sh@283 -- # timing_enter create_nvmf_subsystem_config 00:05:37.061 03:37:55 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:37.061 03:37:55 -- common/autotest_common.sh@10 -- # set +x 00:05:37.061 03:37:55 -- json_config/json_config.sh@285 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:37.061 03:37:55 -- json_config/json_config.sh@286 -- # [[ tcp == \r\d\m\a ]] 00:05:37.061 03:37:55 -- json_config/json_config.sh@290 -- # [[ -z 127.0.0.1 ]] 00:05:37.061 03:37:55 -- json_config/json_config.sh@295 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:37.061 03:37:55 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:37.321 MallocForNvmf0 00:05:37.321 03:37:56 -- json_config/json_config.sh@296 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:37.321 03:37:56 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:37.579 MallocForNvmf1 00:05:37.579 03:37:56 -- json_config/json_config.sh@298 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:37.579 03:37:56 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:37.838 [2024-07-14 03:37:56.601289] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:37.838 03:37:56 -- json_config/json_config.sh@299 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:37.838 03:37:56 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:38.098 03:37:56 -- json_config/json_config.sh@300 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:38.098 03:37:56 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:38.357 03:37:57 -- json_config/json_config.sh@301 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:38.357 03:37:57 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:38.617 03:37:57 -- json_config/json_config.sh@302 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:38.617 03:37:57 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:38.617 [2024-07-14 03:37:57.540433] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 
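Note: the create_nvmf_subsystem_config step above assembles the TCP target entirely over RPC. Recapped as a standalone sketch using the same calls and the test's own values from this log (the NQN, serial, 127.0.0.1:4420 listener and the 8 MB/4 MB malloc sizes are the test's choices, not requirements), against the target socket /var/tmp/spdk_tgt.sock:

    rpc() { ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock "$@"; }
    rpc bdev_malloc_create 8 512 --name MallocForNvmf0
    rpc bdev_malloc_create 4 1024 --name MallocForNvmf1
    rpc nvmf_create_transport -t tcp -u 8192 -c 0       # TCP transport with the test's -u/-c sizing
    rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # -a: allow any host
    rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
    rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
    rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420

The final add_listener call is what produces the "NVMe/TCP Target Listening on 127.0.0.1 port 4420" notice seen just above.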
00:05:38.617 03:37:57 -- json_config/json_config.sh@304 -- # timing_exit create_nvmf_subsystem_config 00:05:38.617 03:37:57 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:38.617 03:37:57 -- common/autotest_common.sh@10 -- # set +x 00:05:38.876 03:37:57 -- json_config/json_config.sh@346 -- # timing_exit json_config_setup_target 00:05:38.876 03:37:57 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:38.876 03:37:57 -- common/autotest_common.sh@10 -- # set +x 00:05:38.876 03:37:57 -- json_config/json_config.sh@348 -- # [[ 0 -eq 1 ]] 00:05:38.876 03:37:57 -- json_config/json_config.sh@353 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:38.876 03:37:57 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:39.135 MallocBdevForConfigChangeCheck 00:05:39.135 03:37:57 -- json_config/json_config.sh@355 -- # timing_exit json_config_test_init 00:05:39.135 03:37:57 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:39.135 03:37:57 -- common/autotest_common.sh@10 -- # set +x 00:05:39.135 03:37:57 -- json_config/json_config.sh@422 -- # tgt_rpc save_config 00:05:39.135 03:37:57 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:39.393 03:37:58 -- json_config/json_config.sh@424 -- # echo 'INFO: shutting down applications...' 00:05:39.393 INFO: shutting down applications... 00:05:39.393 03:37:58 -- json_config/json_config.sh@425 -- # [[ 0 -eq 1 ]] 00:05:39.393 03:37:58 -- json_config/json_config.sh@431 -- # json_config_clear target 00:05:39.393 03:37:58 -- json_config/json_config.sh@385 -- # [[ -n 22 ]] 00:05:39.393 03:37:58 -- json_config/json_config.sh@386 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:41.301 Calling clear_iscsi_subsystem 00:05:41.301 Calling clear_nvmf_subsystem 00:05:41.301 Calling clear_nbd_subsystem 00:05:41.301 Calling clear_ublk_subsystem 00:05:41.301 Calling clear_vhost_blk_subsystem 00:05:41.301 Calling clear_vhost_scsi_subsystem 00:05:41.301 Calling clear_scheduler_subsystem 00:05:41.301 Calling clear_bdev_subsystem 00:05:41.301 Calling clear_accel_subsystem 00:05:41.301 Calling clear_vmd_subsystem 00:05:41.301 Calling clear_sock_subsystem 00:05:41.301 Calling clear_iobuf_subsystem 00:05:41.301 03:37:59 -- json_config/json_config.sh@390 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:05:41.301 03:37:59 -- json_config/json_config.sh@396 -- # count=100 00:05:41.301 03:37:59 -- json_config/json_config.sh@397 -- # '[' 100 -gt 0 ']' 00:05:41.301 03:37:59 -- json_config/json_config.sh@398 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:41.301 03:37:59 -- json_config/json_config.sh@398 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:41.301 03:37:59 -- json_config/json_config.sh@398 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:05:41.559 03:38:00 -- json_config/json_config.sh@398 -- # break 00:05:41.559 03:38:00 -- json_config/json_config.sh@403 -- # '[' 100 -eq 0 ']' 00:05:41.559 03:38:00 -- json_config/json_config.sh@432 -- # 
json_config_test_shutdown_app target 00:05:41.559 03:38:00 -- json_config/json_config.sh@120 -- # local app=target 00:05:41.559 03:38:00 -- json_config/json_config.sh@123 -- # [[ -n 22 ]] 00:05:41.559 03:38:00 -- json_config/json_config.sh@124 -- # [[ -n 2257749 ]] 00:05:41.559 03:38:00 -- json_config/json_config.sh@127 -- # kill -SIGINT 2257749 00:05:41.559 03:38:00 -- json_config/json_config.sh@129 -- # (( i = 0 )) 00:05:41.559 03:38:00 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:05:41.559 03:38:00 -- json_config/json_config.sh@130 -- # kill -0 2257749 00:05:41.559 03:38:00 -- json_config/json_config.sh@134 -- # sleep 0.5 00:05:41.818 03:38:00 -- json_config/json_config.sh@129 -- # (( i++ )) 00:05:41.818 03:38:00 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:05:41.818 03:38:00 -- json_config/json_config.sh@130 -- # kill -0 2257749 00:05:41.818 03:38:00 -- json_config/json_config.sh@131 -- # app_pid[$app]= 00:05:41.818 03:38:00 -- json_config/json_config.sh@132 -- # break 00:05:41.818 03:38:00 -- json_config/json_config.sh@137 -- # [[ -n '' ]] 00:05:41.818 03:38:00 -- json_config/json_config.sh@142 -- # echo 'SPDK target shutdown done' 00:05:41.818 SPDK target shutdown done 00:05:41.818 03:38:00 -- json_config/json_config.sh@434 -- # echo 'INFO: relaunching applications...' 00:05:41.818 INFO: relaunching applications... 00:05:41.818 03:38:00 -- json_config/json_config.sh@435 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:41.818 03:38:00 -- json_config/json_config.sh@98 -- # local app=target 00:05:41.818 03:38:00 -- json_config/json_config.sh@99 -- # shift 00:05:41.818 03:38:00 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:05:41.818 03:38:00 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:05:41.818 03:38:00 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:05:41.818 03:38:00 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:05:41.818 03:38:00 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:05:41.818 03:38:00 -- json_config/json_config.sh@111 -- # app_pid[$app]=2258973 00:05:41.818 03:38:00 -- json_config/json_config.sh@110 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:41.818 03:38:00 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:05:41.818 Waiting for target to run... 00:05:41.818 03:38:00 -- json_config/json_config.sh@114 -- # waitforlisten 2258973 /var/tmp/spdk_tgt.sock 00:05:41.818 03:38:00 -- common/autotest_common.sh@819 -- # '[' -z 2258973 ']' 00:05:41.818 03:38:00 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:41.818 03:38:00 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:41.818 03:38:00 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:41.818 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:41.818 03:38:00 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:41.818 03:38:00 -- common/autotest_common.sh@10 -- # set +x 00:05:42.076 [2024-07-14 03:38:00.799491] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:05:42.076 [2024-07-14 03:38:00.799587] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2258973 ] 00:05:42.076 EAL: No free 2048 kB hugepages reported on node 1 00:05:42.334 [2024-07-14 03:38:01.168280] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:42.334 [2024-07-14 03:38:01.231445] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:42.334 [2024-07-14 03:38:01.231627] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.620 [2024-07-14 03:38:04.250794] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:45.620 [2024-07-14 03:38:04.283282] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:45.878 03:38:04 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:45.878 03:38:04 -- common/autotest_common.sh@852 -- # return 0 00:05:45.878 03:38:04 -- json_config/json_config.sh@115 -- # echo '' 00:05:45.878 00:05:45.878 03:38:04 -- json_config/json_config.sh@436 -- # [[ 0 -eq 1 ]] 00:05:45.878 03:38:04 -- json_config/json_config.sh@440 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:45.878 INFO: Checking if target configuration is the same... 00:05:45.878 03:38:04 -- json_config/json_config.sh@441 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:45.878 03:38:04 -- json_config/json_config.sh@441 -- # tgt_rpc save_config 00:05:45.878 03:38:04 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:45.878 + '[' 2 -ne 2 ']' 00:05:45.878 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:45.878 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:45.878 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:45.878 +++ basename /dev/fd/62 00:05:45.878 ++ mktemp /tmp/62.XXX 00:05:45.878 + tmp_file_1=/tmp/62.ZJT 00:05:45.878 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:45.878 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:45.878 + tmp_file_2=/tmp/spdk_tgt_config.json.yH8 00:05:45.878 + ret=0 00:05:45.878 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:46.138 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:46.138 + diff -u /tmp/62.ZJT /tmp/spdk_tgt_config.json.yH8 00:05:46.138 + echo 'INFO: JSON config files are the same' 00:05:46.138 INFO: JSON config files are the same 00:05:46.138 + rm /tmp/62.ZJT /tmp/spdk_tgt_config.json.yH8 00:05:46.138 + exit 0 00:05:46.138 03:38:05 -- json_config/json_config.sh@442 -- # [[ 0 -eq 1 ]] 00:05:46.138 03:38:05 -- json_config/json_config.sh@447 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:46.138 INFO: changing configuration and checking if this can be detected... 
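Note: the "Checking if target configuration is the same" step above reduces to sorting two JSON configs and diffing them. A hedged sketch of that comparison, assuming it is run from the SPDK repo root so the test's config_filter.py helper is available (the /tmp file names here are illustrative):

    ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
        | ./test/json_config/config_filter.py -method sort > /tmp/live_config.json
    ./test/json_config/config_filter.py -method sort \
        < ./spdk_tgt_config.json > /tmp/file_config.json
    diff -u /tmp/live_config.json /tmp/file_config.json \
        && echo 'INFO: JSON config files are the same'

The same diff is what flips to a non-zero exit status below, once MallocBdevForConfigChangeCheck is deleted from the running target.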
00:05:46.138 03:38:05 -- json_config/json_config.sh@449 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:46.138 03:38:05 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:46.403 03:38:05 -- json_config/json_config.sh@450 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:46.403 03:38:05 -- json_config/json_config.sh@450 -- # tgt_rpc save_config 00:05:46.403 03:38:05 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:46.403 + '[' 2 -ne 2 ']' 00:05:46.403 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:46.403 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:46.403 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:46.403 +++ basename /dev/fd/62 00:05:46.403 ++ mktemp /tmp/62.XXX 00:05:46.403 + tmp_file_1=/tmp/62.BvD 00:05:46.403 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:46.403 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:46.403 + tmp_file_2=/tmp/spdk_tgt_config.json.bzB 00:05:46.403 + ret=0 00:05:46.403 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:46.970 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:46.970 + diff -u /tmp/62.BvD /tmp/spdk_tgt_config.json.bzB 00:05:46.970 + ret=1 00:05:46.970 + echo '=== Start of file: /tmp/62.BvD ===' 00:05:46.970 + cat /tmp/62.BvD 00:05:46.970 + echo '=== End of file: /tmp/62.BvD ===' 00:05:46.970 + echo '' 00:05:46.970 + echo '=== Start of file: /tmp/spdk_tgt_config.json.bzB ===' 00:05:46.970 + cat /tmp/spdk_tgt_config.json.bzB 00:05:46.970 + echo '=== End of file: /tmp/spdk_tgt_config.json.bzB ===' 00:05:46.970 + echo '' 00:05:46.970 + rm /tmp/62.BvD /tmp/spdk_tgt_config.json.bzB 00:05:46.970 + exit 1 00:05:46.970 03:38:05 -- json_config/json_config.sh@454 -- # echo 'INFO: configuration change detected.' 00:05:46.970 INFO: configuration change detected. 
00:05:46.970 03:38:05 -- json_config/json_config.sh@457 -- # json_config_test_fini 00:05:46.970 03:38:05 -- json_config/json_config.sh@359 -- # timing_enter json_config_test_fini 00:05:46.970 03:38:05 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:46.970 03:38:05 -- common/autotest_common.sh@10 -- # set +x 00:05:46.970 03:38:05 -- json_config/json_config.sh@360 -- # local ret=0 00:05:46.970 03:38:05 -- json_config/json_config.sh@362 -- # [[ -n '' ]] 00:05:46.970 03:38:05 -- json_config/json_config.sh@370 -- # [[ -n 2258973 ]] 00:05:46.970 03:38:05 -- json_config/json_config.sh@373 -- # cleanup_bdev_subsystem_config 00:05:46.970 03:38:05 -- json_config/json_config.sh@237 -- # timing_enter cleanup_bdev_subsystem_config 00:05:46.970 03:38:05 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:46.970 03:38:05 -- common/autotest_common.sh@10 -- # set +x 00:05:46.970 03:38:05 -- json_config/json_config.sh@239 -- # [[ 0 -eq 1 ]] 00:05:46.970 03:38:05 -- json_config/json_config.sh@246 -- # uname -s 00:05:46.970 03:38:05 -- json_config/json_config.sh@246 -- # [[ Linux = Linux ]] 00:05:46.970 03:38:05 -- json_config/json_config.sh@247 -- # rm -f /sample_aio 00:05:46.970 03:38:05 -- json_config/json_config.sh@250 -- # [[ 0 -eq 1 ]] 00:05:46.970 03:38:05 -- json_config/json_config.sh@254 -- # timing_exit cleanup_bdev_subsystem_config 00:05:46.970 03:38:05 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:46.970 03:38:05 -- common/autotest_common.sh@10 -- # set +x 00:05:46.970 03:38:05 -- json_config/json_config.sh@376 -- # killprocess 2258973 00:05:46.970 03:38:05 -- common/autotest_common.sh@926 -- # '[' -z 2258973 ']' 00:05:46.970 03:38:05 -- common/autotest_common.sh@930 -- # kill -0 2258973 00:05:46.970 03:38:05 -- common/autotest_common.sh@931 -- # uname 00:05:46.970 03:38:05 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:46.970 03:38:05 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2258973 00:05:46.970 03:38:05 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:46.970 03:38:05 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:46.970 03:38:05 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2258973' 00:05:46.970 killing process with pid 2258973 00:05:46.970 03:38:05 -- common/autotest_common.sh@945 -- # kill 2258973 00:05:46.970 03:38:05 -- common/autotest_common.sh@950 -- # wait 2258973 00:05:48.879 03:38:07 -- json_config/json_config.sh@379 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:48.879 03:38:07 -- json_config/json_config.sh@380 -- # timing_exit json_config_test_fini 00:05:48.879 03:38:07 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:48.879 03:38:07 -- common/autotest_common.sh@10 -- # set +x 00:05:48.879 03:38:07 -- json_config/json_config.sh@381 -- # return 0 00:05:48.879 03:38:07 -- json_config/json_config.sh@459 -- # echo 'INFO: Success' 00:05:48.879 INFO: Success 00:05:48.879 00:05:48.879 real 0m15.941s 00:05:48.879 user 0m18.080s 00:05:48.879 sys 0m2.154s 00:05:48.879 03:38:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:48.879 03:38:07 -- common/autotest_common.sh@10 -- # set +x 00:05:48.879 ************************************ 00:05:48.879 END TEST json_config 00:05:48.879 ************************************ 00:05:48.879 03:38:07 -- spdk/autotest.sh@179 -- # run_test json_config_extra_key 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:48.879 03:38:07 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:48.879 03:38:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:48.879 03:38:07 -- common/autotest_common.sh@10 -- # set +x 00:05:48.879 ************************************ 00:05:48.879 START TEST json_config_extra_key 00:05:48.879 ************************************ 00:05:48.879 03:38:07 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:48.879 03:38:07 -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:48.879 03:38:07 -- nvmf/common.sh@7 -- # uname -s 00:05:48.879 03:38:07 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:48.879 03:38:07 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:48.879 03:38:07 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:48.879 03:38:07 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:48.879 03:38:07 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:48.879 03:38:07 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:48.879 03:38:07 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:48.879 03:38:07 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:48.879 03:38:07 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:48.879 03:38:07 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:48.879 03:38:07 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:05:48.879 03:38:07 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:05:48.879 03:38:07 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:48.879 03:38:07 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:48.879 03:38:07 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:48.879 03:38:07 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:48.879 03:38:07 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:48.879 03:38:07 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:48.879 03:38:07 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:48.879 03:38:07 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:48.879 03:38:07 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:48.879 03:38:07 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:48.879 03:38:07 -- paths/export.sh@5 -- # export PATH 00:05:48.879 03:38:07 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:48.879 03:38:07 -- nvmf/common.sh@46 -- # : 0 00:05:48.879 03:38:07 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:05:48.879 03:38:07 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:05:48.879 03:38:07 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:05:48.879 03:38:07 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:48.879 03:38:07 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:48.879 03:38:07 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:05:48.879 03:38:07 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:05:48.879 03:38:07 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:05:48.879 03:38:07 -- json_config/json_config_extra_key.sh@16 -- # app_pid=(['target']='') 00:05:48.879 03:38:07 -- json_config/json_config_extra_key.sh@16 -- # declare -A app_pid 00:05:48.879 03:38:07 -- json_config/json_config_extra_key.sh@17 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:48.879 03:38:07 -- json_config/json_config_extra_key.sh@17 -- # declare -A app_socket 00:05:48.879 03:38:07 -- json_config/json_config_extra_key.sh@18 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:48.879 03:38:07 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_params 00:05:48.879 03:38:07 -- json_config/json_config_extra_key.sh@19 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:05:48.879 03:38:07 -- json_config/json_config_extra_key.sh@19 -- # declare -A configs_path 00:05:48.879 03:38:07 -- json_config/json_config_extra_key.sh@74 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:48.879 03:38:07 -- json_config/json_config_extra_key.sh@76 -- # echo 'INFO: launching applications...' 00:05:48.879 INFO: launching applications... 
00:05:48.879 03:38:07 -- json_config/json_config_extra_key.sh@77 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:48.879 03:38:07 -- json_config/json_config_extra_key.sh@24 -- # local app=target 00:05:48.879 03:38:07 -- json_config/json_config_extra_key.sh@25 -- # shift 00:05:48.879 03:38:07 -- json_config/json_config_extra_key.sh@27 -- # [[ -n 22 ]] 00:05:48.879 03:38:07 -- json_config/json_config_extra_key.sh@28 -- # [[ -z '' ]] 00:05:48.879 03:38:07 -- json_config/json_config_extra_key.sh@31 -- # app_pid[$app]=2259916 00:05:48.879 03:38:07 -- json_config/json_config_extra_key.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:48.879 03:38:07 -- json_config/json_config_extra_key.sh@33 -- # echo 'Waiting for target to run...' 00:05:48.879 Waiting for target to run... 00:05:48.879 03:38:07 -- json_config/json_config_extra_key.sh@34 -- # waitforlisten 2259916 /var/tmp/spdk_tgt.sock 00:05:48.879 03:38:07 -- common/autotest_common.sh@819 -- # '[' -z 2259916 ']' 00:05:48.879 03:38:07 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:48.879 03:38:07 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:48.879 03:38:07 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:48.879 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:48.879 03:38:07 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:48.879 03:38:07 -- common/autotest_common.sh@10 -- # set +x 00:05:48.879 [2024-07-14 03:38:07.501466] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:05:48.879 [2024-07-14 03:38:07.501563] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2259916 ] 00:05:48.879 EAL: No free 2048 kB hugepages reported on node 1 00:05:49.139 [2024-07-14 03:38:07.844120] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:49.139 [2024-07-14 03:38:07.905169] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:49.139 [2024-07-14 03:38:07.905347] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.709 03:38:08 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:49.709 03:38:08 -- common/autotest_common.sh@852 -- # return 0 00:05:49.709 03:38:08 -- json_config/json_config_extra_key.sh@35 -- # echo '' 00:05:49.709 00:05:49.709 03:38:08 -- json_config/json_config_extra_key.sh@79 -- # echo 'INFO: shutting down applications...' 00:05:49.709 INFO: shutting down applications... 
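Note: json_config_extra_key drives the target from a pre-built JSON file instead of live RPCs. A simplified sketch of that launch-and-wait pattern, reusing the flags logged for pid 2259916 with paths shortened to repo-relative form (waitforlisten does the polling more carefully; the 0.5 s loop here is only illustrative):

    ./build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
        --json ./test/json_config/extra_key.json &
    tgt_pid=$!
    # poll the RPC socket until the app answers, then stop it the way the test does
    until ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done
    kill -SIGINT "$tgt_pid"
    wait "$tgt_pid"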
00:05:49.709 03:38:08 -- json_config/json_config_extra_key.sh@80 -- # json_config_test_shutdown_app target 00:05:49.709 03:38:08 -- json_config/json_config_extra_key.sh@40 -- # local app=target 00:05:49.709 03:38:08 -- json_config/json_config_extra_key.sh@43 -- # [[ -n 22 ]] 00:05:49.709 03:38:08 -- json_config/json_config_extra_key.sh@44 -- # [[ -n 2259916 ]] 00:05:49.709 03:38:08 -- json_config/json_config_extra_key.sh@47 -- # kill -SIGINT 2259916 00:05:49.709 03:38:08 -- json_config/json_config_extra_key.sh@49 -- # (( i = 0 )) 00:05:49.709 03:38:08 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:05:49.709 03:38:08 -- json_config/json_config_extra_key.sh@50 -- # kill -0 2259916 00:05:49.709 03:38:08 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:05:50.280 03:38:08 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:05:50.280 03:38:08 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:05:50.280 03:38:08 -- json_config/json_config_extra_key.sh@50 -- # kill -0 2259916 00:05:50.280 03:38:08 -- json_config/json_config_extra_key.sh@51 -- # app_pid[$app]= 00:05:50.280 03:38:08 -- json_config/json_config_extra_key.sh@52 -- # break 00:05:50.280 03:38:08 -- json_config/json_config_extra_key.sh@57 -- # [[ -n '' ]] 00:05:50.280 03:38:08 -- json_config/json_config_extra_key.sh@62 -- # echo 'SPDK target shutdown done' 00:05:50.280 SPDK target shutdown done 00:05:50.280 03:38:08 -- json_config/json_config_extra_key.sh@82 -- # echo Success 00:05:50.280 Success 00:05:50.280 00:05:50.280 real 0m1.556s 00:05:50.280 user 0m1.552s 00:05:50.280 sys 0m0.431s 00:05:50.280 03:38:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:50.280 03:38:08 -- common/autotest_common.sh@10 -- # set +x 00:05:50.280 ************************************ 00:05:50.280 END TEST json_config_extra_key 00:05:50.280 ************************************ 00:05:50.280 03:38:08 -- spdk/autotest.sh@180 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:50.280 03:38:08 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:50.280 03:38:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:50.280 03:38:08 -- common/autotest_common.sh@10 -- # set +x 00:05:50.280 ************************************ 00:05:50.280 START TEST alias_rpc 00:05:50.280 ************************************ 00:05:50.280 03:38:08 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:50.280 * Looking for test storage... 00:05:50.280 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:05:50.280 03:38:09 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:50.280 03:38:09 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=2260200 00:05:50.280 03:38:09 -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:50.280 03:38:09 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 2260200 00:05:50.280 03:38:09 -- common/autotest_common.sh@819 -- # '[' -z 2260200 ']' 00:05:50.280 03:38:09 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:50.280 03:38:09 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:50.280 03:38:09 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:05:50.280 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:50.280 03:38:09 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:50.280 03:38:09 -- common/autotest_common.sh@10 -- # set +x 00:05:50.280 [2024-07-14 03:38:09.080125] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:05:50.280 [2024-07-14 03:38:09.080237] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2260200 ] 00:05:50.280 EAL: No free 2048 kB hugepages reported on node 1 00:05:50.280 [2024-07-14 03:38:09.136382] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:50.280 [2024-07-14 03:38:09.218570] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:50.280 [2024-07-14 03:38:09.218710] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.216 03:38:09 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:51.216 03:38:09 -- common/autotest_common.sh@852 -- # return 0 00:05:51.216 03:38:09 -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:51.476 03:38:10 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 2260200 00:05:51.476 03:38:10 -- common/autotest_common.sh@926 -- # '[' -z 2260200 ']' 00:05:51.476 03:38:10 -- common/autotest_common.sh@930 -- # kill -0 2260200 00:05:51.476 03:38:10 -- common/autotest_common.sh@931 -- # uname 00:05:51.476 03:38:10 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:51.476 03:38:10 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2260200 00:05:51.476 03:38:10 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:51.476 03:38:10 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:51.476 03:38:10 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2260200' 00:05:51.476 killing process with pid 2260200 00:05:51.476 03:38:10 -- common/autotest_common.sh@945 -- # kill 2260200 00:05:51.476 03:38:10 -- common/autotest_common.sh@950 -- # wait 2260200 00:05:52.046 00:05:52.046 real 0m1.698s 00:05:52.046 user 0m1.948s 00:05:52.046 sys 0m0.448s 00:05:52.046 03:38:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:52.046 03:38:10 -- common/autotest_common.sh@10 -- # set +x 00:05:52.046 ************************************ 00:05:52.046 END TEST alias_rpc 00:05:52.046 ************************************ 00:05:52.046 03:38:10 -- spdk/autotest.sh@182 -- # [[ 0 -eq 0 ]] 00:05:52.046 03:38:10 -- spdk/autotest.sh@183 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:52.046 03:38:10 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:52.046 03:38:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:52.046 03:38:10 -- common/autotest_common.sh@10 -- # set +x 00:05:52.046 ************************************ 00:05:52.046 START TEST spdkcli_tcp 00:05:52.046 ************************************ 00:05:52.046 03:38:10 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:52.046 * Looking for test storage... 
00:05:52.046 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:05:52.046 03:38:10 -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:05:52.046 03:38:10 -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:05:52.046 03:38:10 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:05:52.046 03:38:10 -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:52.046 03:38:10 -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:52.046 03:38:10 -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:52.046 03:38:10 -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:52.046 03:38:10 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:52.046 03:38:10 -- common/autotest_common.sh@10 -- # set +x 00:05:52.046 03:38:10 -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=2260423 00:05:52.046 03:38:10 -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:52.046 03:38:10 -- spdkcli/tcp.sh@27 -- # waitforlisten 2260423 00:05:52.046 03:38:10 -- common/autotest_common.sh@819 -- # '[' -z 2260423 ']' 00:05:52.046 03:38:10 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:52.046 03:38:10 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:52.046 03:38:10 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:52.046 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:52.046 03:38:10 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:52.046 03:38:10 -- common/autotest_common.sh@10 -- # set +x 00:05:52.046 [2024-07-14 03:38:10.815301] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:05:52.046 [2024-07-14 03:38:10.815396] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2260423 ] 00:05:52.046 EAL: No free 2048 kB hugepages reported on node 1 00:05:52.046 [2024-07-14 03:38:10.872204] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:52.046 [2024-07-14 03:38:10.954405] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:52.046 [2024-07-14 03:38:10.954637] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:52.046 [2024-07-14 03:38:10.954642] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.982 03:38:11 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:52.982 03:38:11 -- common/autotest_common.sh@852 -- # return 0 00:05:52.982 03:38:11 -- spdkcli/tcp.sh@31 -- # socat_pid=2260564 00:05:52.982 03:38:11 -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:52.982 03:38:11 -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:53.243 [ 00:05:53.243 "bdev_malloc_delete", 00:05:53.243 "bdev_malloc_create", 00:05:53.243 "bdev_null_resize", 00:05:53.243 "bdev_null_delete", 00:05:53.243 "bdev_null_create", 00:05:53.243 "bdev_nvme_cuse_unregister", 00:05:53.243 "bdev_nvme_cuse_register", 00:05:53.243 "bdev_opal_new_user", 00:05:53.243 "bdev_opal_set_lock_state", 00:05:53.243 "bdev_opal_delete", 00:05:53.243 "bdev_opal_get_info", 00:05:53.243 "bdev_opal_create", 00:05:53.243 "bdev_nvme_opal_revert", 00:05:53.243 "bdev_nvme_opal_init", 00:05:53.243 "bdev_nvme_send_cmd", 00:05:53.243 "bdev_nvme_get_path_iostat", 00:05:53.243 "bdev_nvme_get_mdns_discovery_info", 00:05:53.243 "bdev_nvme_stop_mdns_discovery", 00:05:53.243 "bdev_nvme_start_mdns_discovery", 00:05:53.243 "bdev_nvme_set_multipath_policy", 00:05:53.243 "bdev_nvme_set_preferred_path", 00:05:53.243 "bdev_nvme_get_io_paths", 00:05:53.243 "bdev_nvme_remove_error_injection", 00:05:53.243 "bdev_nvme_add_error_injection", 00:05:53.243 "bdev_nvme_get_discovery_info", 00:05:53.243 "bdev_nvme_stop_discovery", 00:05:53.243 "bdev_nvme_start_discovery", 00:05:53.243 "bdev_nvme_get_controller_health_info", 00:05:53.243 "bdev_nvme_disable_controller", 00:05:53.243 "bdev_nvme_enable_controller", 00:05:53.243 "bdev_nvme_reset_controller", 00:05:53.243 "bdev_nvme_get_transport_statistics", 00:05:53.243 "bdev_nvme_apply_firmware", 00:05:53.243 "bdev_nvme_detach_controller", 00:05:53.243 "bdev_nvme_get_controllers", 00:05:53.243 "bdev_nvme_attach_controller", 00:05:53.243 "bdev_nvme_set_hotplug", 00:05:53.243 "bdev_nvme_set_options", 00:05:53.243 "bdev_passthru_delete", 00:05:53.243 "bdev_passthru_create", 00:05:53.243 "bdev_lvol_grow_lvstore", 00:05:53.243 "bdev_lvol_get_lvols", 00:05:53.243 "bdev_lvol_get_lvstores", 00:05:53.243 "bdev_lvol_delete", 00:05:53.243 "bdev_lvol_set_read_only", 00:05:53.243 "bdev_lvol_resize", 00:05:53.243 "bdev_lvol_decouple_parent", 00:05:53.243 "bdev_lvol_inflate", 00:05:53.243 "bdev_lvol_rename", 00:05:53.243 "bdev_lvol_clone_bdev", 00:05:53.243 "bdev_lvol_clone", 00:05:53.243 "bdev_lvol_snapshot", 00:05:53.243 "bdev_lvol_create", 00:05:53.243 "bdev_lvol_delete_lvstore", 00:05:53.243 "bdev_lvol_rename_lvstore", 00:05:53.243 "bdev_lvol_create_lvstore", 00:05:53.243 "bdev_raid_set_options", 00:05:53.243 
"bdev_raid_remove_base_bdev", 00:05:53.243 "bdev_raid_add_base_bdev", 00:05:53.243 "bdev_raid_delete", 00:05:53.243 "bdev_raid_create", 00:05:53.243 "bdev_raid_get_bdevs", 00:05:53.243 "bdev_error_inject_error", 00:05:53.243 "bdev_error_delete", 00:05:53.243 "bdev_error_create", 00:05:53.243 "bdev_split_delete", 00:05:53.243 "bdev_split_create", 00:05:53.243 "bdev_delay_delete", 00:05:53.243 "bdev_delay_create", 00:05:53.243 "bdev_delay_update_latency", 00:05:53.243 "bdev_zone_block_delete", 00:05:53.243 "bdev_zone_block_create", 00:05:53.243 "blobfs_create", 00:05:53.243 "blobfs_detect", 00:05:53.243 "blobfs_set_cache_size", 00:05:53.243 "bdev_aio_delete", 00:05:53.243 "bdev_aio_rescan", 00:05:53.243 "bdev_aio_create", 00:05:53.243 "bdev_ftl_set_property", 00:05:53.243 "bdev_ftl_get_properties", 00:05:53.243 "bdev_ftl_get_stats", 00:05:53.243 "bdev_ftl_unmap", 00:05:53.243 "bdev_ftl_unload", 00:05:53.243 "bdev_ftl_delete", 00:05:53.243 "bdev_ftl_load", 00:05:53.243 "bdev_ftl_create", 00:05:53.243 "bdev_virtio_attach_controller", 00:05:53.243 "bdev_virtio_scsi_get_devices", 00:05:53.243 "bdev_virtio_detach_controller", 00:05:53.243 "bdev_virtio_blk_set_hotplug", 00:05:53.243 "bdev_iscsi_delete", 00:05:53.243 "bdev_iscsi_create", 00:05:53.243 "bdev_iscsi_set_options", 00:05:53.243 "accel_error_inject_error", 00:05:53.243 "ioat_scan_accel_module", 00:05:53.243 "dsa_scan_accel_module", 00:05:53.243 "iaa_scan_accel_module", 00:05:53.243 "vfu_virtio_create_scsi_endpoint", 00:05:53.243 "vfu_virtio_scsi_remove_target", 00:05:53.243 "vfu_virtio_scsi_add_target", 00:05:53.243 "vfu_virtio_create_blk_endpoint", 00:05:53.243 "vfu_virtio_delete_endpoint", 00:05:53.243 "iscsi_set_options", 00:05:53.243 "iscsi_get_auth_groups", 00:05:53.243 "iscsi_auth_group_remove_secret", 00:05:53.243 "iscsi_auth_group_add_secret", 00:05:53.243 "iscsi_delete_auth_group", 00:05:53.243 "iscsi_create_auth_group", 00:05:53.243 "iscsi_set_discovery_auth", 00:05:53.243 "iscsi_get_options", 00:05:53.243 "iscsi_target_node_request_logout", 00:05:53.243 "iscsi_target_node_set_redirect", 00:05:53.243 "iscsi_target_node_set_auth", 00:05:53.243 "iscsi_target_node_add_lun", 00:05:53.243 "iscsi_get_connections", 00:05:53.243 "iscsi_portal_group_set_auth", 00:05:53.243 "iscsi_start_portal_group", 00:05:53.243 "iscsi_delete_portal_group", 00:05:53.243 "iscsi_create_portal_group", 00:05:53.243 "iscsi_get_portal_groups", 00:05:53.243 "iscsi_delete_target_node", 00:05:53.243 "iscsi_target_node_remove_pg_ig_maps", 00:05:53.243 "iscsi_target_node_add_pg_ig_maps", 00:05:53.243 "iscsi_create_target_node", 00:05:53.243 "iscsi_get_target_nodes", 00:05:53.243 "iscsi_delete_initiator_group", 00:05:53.243 "iscsi_initiator_group_remove_initiators", 00:05:53.243 "iscsi_initiator_group_add_initiators", 00:05:53.243 "iscsi_create_initiator_group", 00:05:53.243 "iscsi_get_initiator_groups", 00:05:53.243 "nvmf_set_crdt", 00:05:53.243 "nvmf_set_config", 00:05:53.243 "nvmf_set_max_subsystems", 00:05:53.243 "nvmf_subsystem_get_listeners", 00:05:53.243 "nvmf_subsystem_get_qpairs", 00:05:53.243 "nvmf_subsystem_get_controllers", 00:05:53.243 "nvmf_get_stats", 00:05:53.243 "nvmf_get_transports", 00:05:53.243 "nvmf_create_transport", 00:05:53.243 "nvmf_get_targets", 00:05:53.243 "nvmf_delete_target", 00:05:53.243 "nvmf_create_target", 00:05:53.243 "nvmf_subsystem_allow_any_host", 00:05:53.243 "nvmf_subsystem_remove_host", 00:05:53.243 "nvmf_subsystem_add_host", 00:05:53.243 "nvmf_subsystem_remove_ns", 00:05:53.243 "nvmf_subsystem_add_ns", 00:05:53.243 
"nvmf_subsystem_listener_set_ana_state", 00:05:53.243 "nvmf_discovery_get_referrals", 00:05:53.243 "nvmf_discovery_remove_referral", 00:05:53.243 "nvmf_discovery_add_referral", 00:05:53.243 "nvmf_subsystem_remove_listener", 00:05:53.243 "nvmf_subsystem_add_listener", 00:05:53.243 "nvmf_delete_subsystem", 00:05:53.243 "nvmf_create_subsystem", 00:05:53.243 "nvmf_get_subsystems", 00:05:53.243 "env_dpdk_get_mem_stats", 00:05:53.243 "nbd_get_disks", 00:05:53.243 "nbd_stop_disk", 00:05:53.243 "nbd_start_disk", 00:05:53.243 "ublk_recover_disk", 00:05:53.243 "ublk_get_disks", 00:05:53.243 "ublk_stop_disk", 00:05:53.243 "ublk_start_disk", 00:05:53.243 "ublk_destroy_target", 00:05:53.243 "ublk_create_target", 00:05:53.243 "virtio_blk_create_transport", 00:05:53.243 "virtio_blk_get_transports", 00:05:53.243 "vhost_controller_set_coalescing", 00:05:53.243 "vhost_get_controllers", 00:05:53.243 "vhost_delete_controller", 00:05:53.243 "vhost_create_blk_controller", 00:05:53.243 "vhost_scsi_controller_remove_target", 00:05:53.243 "vhost_scsi_controller_add_target", 00:05:53.243 "vhost_start_scsi_controller", 00:05:53.243 "vhost_create_scsi_controller", 00:05:53.243 "thread_set_cpumask", 00:05:53.243 "framework_get_scheduler", 00:05:53.243 "framework_set_scheduler", 00:05:53.243 "framework_get_reactors", 00:05:53.243 "thread_get_io_channels", 00:05:53.243 "thread_get_pollers", 00:05:53.243 "thread_get_stats", 00:05:53.243 "framework_monitor_context_switch", 00:05:53.243 "spdk_kill_instance", 00:05:53.243 "log_enable_timestamps", 00:05:53.243 "log_get_flags", 00:05:53.243 "log_clear_flag", 00:05:53.243 "log_set_flag", 00:05:53.243 "log_get_level", 00:05:53.243 "log_set_level", 00:05:53.243 "log_get_print_level", 00:05:53.243 "log_set_print_level", 00:05:53.243 "framework_enable_cpumask_locks", 00:05:53.243 "framework_disable_cpumask_locks", 00:05:53.243 "framework_wait_init", 00:05:53.243 "framework_start_init", 00:05:53.243 "scsi_get_devices", 00:05:53.243 "bdev_get_histogram", 00:05:53.243 "bdev_enable_histogram", 00:05:53.243 "bdev_set_qos_limit", 00:05:53.243 "bdev_set_qd_sampling_period", 00:05:53.243 "bdev_get_bdevs", 00:05:53.243 "bdev_reset_iostat", 00:05:53.243 "bdev_get_iostat", 00:05:53.243 "bdev_examine", 00:05:53.244 "bdev_wait_for_examine", 00:05:53.244 "bdev_set_options", 00:05:53.244 "notify_get_notifications", 00:05:53.244 "notify_get_types", 00:05:53.244 "accel_get_stats", 00:05:53.244 "accel_set_options", 00:05:53.244 "accel_set_driver", 00:05:53.244 "accel_crypto_key_destroy", 00:05:53.244 "accel_crypto_keys_get", 00:05:53.244 "accel_crypto_key_create", 00:05:53.244 "accel_assign_opc", 00:05:53.244 "accel_get_module_info", 00:05:53.244 "accel_get_opc_assignments", 00:05:53.244 "vmd_rescan", 00:05:53.244 "vmd_remove_device", 00:05:53.244 "vmd_enable", 00:05:53.244 "sock_set_default_impl", 00:05:53.244 "sock_impl_set_options", 00:05:53.244 "sock_impl_get_options", 00:05:53.244 "iobuf_get_stats", 00:05:53.244 "iobuf_set_options", 00:05:53.244 "framework_get_pci_devices", 00:05:53.244 "framework_get_config", 00:05:53.244 "framework_get_subsystems", 00:05:53.244 "vfu_tgt_set_base_path", 00:05:53.244 "trace_get_info", 00:05:53.244 "trace_get_tpoint_group_mask", 00:05:53.244 "trace_disable_tpoint_group", 00:05:53.244 "trace_enable_tpoint_group", 00:05:53.244 "trace_clear_tpoint_mask", 00:05:53.244 "trace_set_tpoint_mask", 00:05:53.244 "spdk_get_version", 00:05:53.244 "rpc_get_methods" 00:05:53.244 ] 00:05:53.244 03:38:11 -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:53.244 
03:38:11 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:53.244 03:38:11 -- common/autotest_common.sh@10 -- # set +x 00:05:53.244 03:38:11 -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:53.244 03:38:11 -- spdkcli/tcp.sh@38 -- # killprocess 2260423 00:05:53.244 03:38:11 -- common/autotest_common.sh@926 -- # '[' -z 2260423 ']' 00:05:53.244 03:38:11 -- common/autotest_common.sh@930 -- # kill -0 2260423 00:05:53.244 03:38:11 -- common/autotest_common.sh@931 -- # uname 00:05:53.244 03:38:11 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:53.244 03:38:11 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2260423 00:05:53.244 03:38:12 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:53.244 03:38:12 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:53.244 03:38:12 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2260423' 00:05:53.244 killing process with pid 2260423 00:05:53.244 03:38:12 -- common/autotest_common.sh@945 -- # kill 2260423 00:05:53.244 03:38:12 -- common/autotest_common.sh@950 -- # wait 2260423 00:05:53.504 00:05:53.504 real 0m1.706s 00:05:53.504 user 0m3.344s 00:05:53.504 sys 0m0.463s 00:05:53.504 03:38:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:53.504 03:38:12 -- common/autotest_common.sh@10 -- # set +x 00:05:53.504 ************************************ 00:05:53.504 END TEST spdkcli_tcp 00:05:53.504 ************************************ 00:05:53.504 03:38:12 -- spdk/autotest.sh@186 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:53.504 03:38:12 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:53.504 03:38:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:53.504 03:38:12 -- common/autotest_common.sh@10 -- # set +x 00:05:53.504 ************************************ 00:05:53.504 START TEST dpdk_mem_utility 00:05:53.504 ************************************ 00:05:53.504 03:38:12 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:53.765 * Looking for test storage... 00:05:53.766 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:05:53.766 03:38:12 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:53.766 03:38:12 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=2260751 00:05:53.766 03:38:12 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:53.766 03:38:12 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 2260751 00:05:53.766 03:38:12 -- common/autotest_common.sh@819 -- # '[' -z 2260751 ']' 00:05:53.766 03:38:12 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:53.766 03:38:12 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:53.766 03:38:12 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:53.766 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:53.766 03:38:12 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:53.766 03:38:12 -- common/autotest_common.sh@10 -- # set +x 00:05:53.766 [2024-07-14 03:38:12.539521] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:05:53.766 [2024-07-14 03:38:12.539613] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2260751 ] 00:05:53.766 EAL: No free 2048 kB hugepages reported on node 1 00:05:53.766 [2024-07-14 03:38:12.597047] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:53.766 [2024-07-14 03:38:12.683413] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:53.766 [2024-07-14 03:38:12.683584] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.706 03:38:13 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:54.706 03:38:13 -- common/autotest_common.sh@852 -- # return 0 00:05:54.706 03:38:13 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:54.706 03:38:13 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:54.706 03:38:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:54.706 03:38:13 -- common/autotest_common.sh@10 -- # set +x 00:05:54.706 { 00:05:54.706 "filename": "/tmp/spdk_mem_dump.txt" 00:05:54.706 } 00:05:54.706 03:38:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:54.706 03:38:13 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:54.706 DPDK memory size 814.000000 MiB in 1 heap(s) 00:05:54.706 1 heaps totaling size 814.000000 MiB 00:05:54.706 size: 814.000000 MiB heap id: 0 00:05:54.706 end heaps---------- 00:05:54.706 8 mempools totaling size 598.116089 MiB 00:05:54.706 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:54.706 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:54.706 size: 84.521057 MiB name: bdev_io_2260751 00:05:54.706 size: 51.011292 MiB name: evtpool_2260751 00:05:54.706 size: 50.003479 MiB name: msgpool_2260751 00:05:54.706 size: 21.763794 MiB name: PDU_Pool 00:05:54.706 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:54.706 size: 0.026123 MiB name: Session_Pool 00:05:54.706 end mempools------- 00:05:54.706 6 memzones totaling size 4.142822 MiB 00:05:54.706 size: 1.000366 MiB name: RG_ring_0_2260751 00:05:54.706 size: 1.000366 MiB name: RG_ring_1_2260751 00:05:54.706 size: 1.000366 MiB name: RG_ring_4_2260751 00:05:54.706 size: 1.000366 MiB name: RG_ring_5_2260751 00:05:54.706 size: 0.125366 MiB name: RG_ring_2_2260751 00:05:54.706 size: 0.015991 MiB name: RG_ring_3_2260751 00:05:54.706 end memzones------- 00:05:54.706 03:38:13 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:05:54.706 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:05:54.706 list of free elements. 
size: 12.519348 MiB 00:05:54.706 element at address: 0x200000400000 with size: 1.999512 MiB 00:05:54.706 element at address: 0x200018e00000 with size: 0.999878 MiB 00:05:54.706 element at address: 0x200019000000 with size: 0.999878 MiB 00:05:54.706 element at address: 0x200003e00000 with size: 0.996277 MiB 00:05:54.706 element at address: 0x200031c00000 with size: 0.994446 MiB 00:05:54.706 element at address: 0x200013800000 with size: 0.978699 MiB 00:05:54.706 element at address: 0x200007000000 with size: 0.959839 MiB 00:05:54.706 element at address: 0x200019200000 with size: 0.936584 MiB 00:05:54.706 element at address: 0x200000200000 with size: 0.841614 MiB 00:05:54.706 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:05:54.706 element at address: 0x20000b200000 with size: 0.490723 MiB 00:05:54.706 element at address: 0x200000800000 with size: 0.487793 MiB 00:05:54.706 element at address: 0x200019400000 with size: 0.485657 MiB 00:05:54.706 element at address: 0x200027e00000 with size: 0.410034 MiB 00:05:54.706 element at address: 0x200003a00000 with size: 0.355530 MiB 00:05:54.706 list of standard malloc elements. size: 199.218079 MiB 00:05:54.706 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:05:54.706 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:05:54.706 element at address: 0x200018efff80 with size: 1.000122 MiB 00:05:54.706 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:05:54.706 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:05:54.706 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:54.706 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:05:54.706 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:54.706 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:05:54.706 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:05:54.706 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:05:54.706 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:05:54.706 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:05:54.706 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:05:54.706 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:54.706 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:54.706 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:05:54.706 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:05:54.706 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:05:54.706 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:05:54.706 element at address: 0x200003adb300 with size: 0.000183 MiB 00:05:54.706 element at address: 0x200003adb500 with size: 0.000183 MiB 00:05:54.706 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:05:54.706 element at address: 0x200003affa80 with size: 0.000183 MiB 00:05:54.706 element at address: 0x200003affb40 with size: 0.000183 MiB 00:05:54.706 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:05:54.706 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:05:54.706 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:05:54.706 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:05:54.706 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:05:54.706 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:05:54.706 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:05:54.706 element at address: 0x2000192efd00 with size: 0.000183 MiB 
00:05:54.706 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:05:54.706 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:05:54.706 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:05:54.706 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:05:54.706 element at address: 0x200027e69040 with size: 0.000183 MiB 00:05:54.706 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:05:54.706 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:05:54.706 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:05:54.706 list of memzone associated elements. size: 602.262573 MiB 00:05:54.706 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:05:54.706 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:54.706 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:05:54.706 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:54.706 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:05:54.706 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_2260751_0 00:05:54.706 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:05:54.706 associated memzone info: size: 48.002930 MiB name: MP_evtpool_2260751_0 00:05:54.706 element at address: 0x200003fff380 with size: 48.003052 MiB 00:05:54.706 associated memzone info: size: 48.002930 MiB name: MP_msgpool_2260751_0 00:05:54.706 element at address: 0x2000195be940 with size: 20.255554 MiB 00:05:54.706 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:54.706 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:05:54.706 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:54.706 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:05:54.706 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_2260751 00:05:54.706 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:05:54.706 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_2260751 00:05:54.706 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:54.706 associated memzone info: size: 1.007996 MiB name: MP_evtpool_2260751 00:05:54.706 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:05:54.706 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:54.706 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:05:54.706 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:54.706 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:05:54.706 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:54.706 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:05:54.706 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:54.706 element at address: 0x200003eff180 with size: 1.000488 MiB 00:05:54.706 associated memzone info: size: 1.000366 MiB name: RG_ring_0_2260751 00:05:54.706 element at address: 0x200003affc00 with size: 1.000488 MiB 00:05:54.706 associated memzone info: size: 1.000366 MiB name: RG_ring_1_2260751 00:05:54.706 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:05:54.706 associated memzone info: size: 1.000366 MiB name: RG_ring_4_2260751 00:05:54.706 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:05:54.706 associated memzone info: size: 1.000366 MiB name: RG_ring_5_2260751 00:05:54.706 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:05:54.706 associated 
memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_2260751 00:05:54.706 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:05:54.706 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:54.706 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:05:54.706 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:54.706 element at address: 0x20001947c540 with size: 0.250488 MiB 00:05:54.706 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:54.706 element at address: 0x200003adf880 with size: 0.125488 MiB 00:05:54.706 associated memzone info: size: 0.125366 MiB name: RG_ring_2_2260751 00:05:54.706 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:05:54.706 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:54.706 element at address: 0x200027e69100 with size: 0.023743 MiB 00:05:54.706 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:54.706 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:05:54.706 associated memzone info: size: 0.015991 MiB name: RG_ring_3_2260751 00:05:54.706 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:05:54.706 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:54.706 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:05:54.706 associated memzone info: size: 0.000183 MiB name: MP_msgpool_2260751 00:05:54.706 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:05:54.706 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_2260751 00:05:54.706 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:05:54.707 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:54.707 03:38:13 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:54.707 03:38:13 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 2260751 00:05:54.707 03:38:13 -- common/autotest_common.sh@926 -- # '[' -z 2260751 ']' 00:05:54.707 03:38:13 -- common/autotest_common.sh@930 -- # kill -0 2260751 00:05:54.707 03:38:13 -- common/autotest_common.sh@931 -- # uname 00:05:54.707 03:38:13 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:54.707 03:38:13 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2260751 00:05:54.707 03:38:13 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:54.707 03:38:13 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:54.707 03:38:13 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2260751' 00:05:54.707 killing process with pid 2260751 00:05:54.707 03:38:13 -- common/autotest_common.sh@945 -- # kill 2260751 00:05:54.707 03:38:13 -- common/autotest_common.sh@950 -- # wait 2260751 00:05:55.273 00:05:55.273 real 0m1.574s 00:05:55.273 user 0m1.732s 00:05:55.273 sys 0m0.438s 00:05:55.273 03:38:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:55.273 03:38:14 -- common/autotest_common.sh@10 -- # set +x 00:05:55.273 ************************************ 00:05:55.273 END TEST dpdk_mem_utility 00:05:55.273 ************************************ 00:05:55.273 03:38:14 -- spdk/autotest.sh@187 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:55.273 03:38:14 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:55.273 03:38:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:55.273 03:38:14 -- common/autotest_common.sh@10 -- # set +x 
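The heap, mempool and memzone dump above is produced in two steps: the env_dpdk_get_mem_stats RPC makes the running target write its DPDK memory state to /tmp/spdk_mem_dump.txt, and scripts/dpdk_mem_info.py then summarises that file (no arguments for the overview, -m 0 for the element-by-element view of heap 0). Roughly:

# ask the live target to dump its DPDK memory state (returns the dump filename)
./scripts/rpc.py env_dpdk_get_mem_stats

# summarise heaps, mempools and memzones from /tmp/spdk_mem_dump.txt
./scripts/dpdk_mem_info.py

# detailed element list for heap 0
./scripts/dpdk_mem_info.py -m 0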
00:05:55.273 ************************************ 00:05:55.273 START TEST event 00:05:55.273 ************************************ 00:05:55.273 03:38:14 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:55.273 * Looking for test storage... 00:05:55.273 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:55.273 03:38:14 -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:05:55.273 03:38:14 -- bdev/nbd_common.sh@6 -- # set -e 00:05:55.273 03:38:14 -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:55.273 03:38:14 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:05:55.273 03:38:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:55.273 03:38:14 -- common/autotest_common.sh@10 -- # set +x 00:05:55.273 ************************************ 00:05:55.273 START TEST event_perf 00:05:55.273 ************************************ 00:05:55.273 03:38:14 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:55.273 Running I/O for 1 seconds...[2024-07-14 03:38:14.107851] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:05:55.273 [2024-07-14 03:38:14.107958] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2260952 ] 00:05:55.273 EAL: No free 2048 kB hugepages reported on node 1 00:05:55.273 [2024-07-14 03:38:14.169333] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:55.532 [2024-07-14 03:38:14.260044] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:55.532 [2024-07-14 03:38:14.260101] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:55.532 [2024-07-14 03:38:14.260219] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:55.532 [2024-07-14 03:38:14.260222] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.469 Running I/O for 1 seconds... 00:05:56.469 lcore 0: 230017 00:05:56.469 lcore 1: 230017 00:05:56.469 lcore 2: 230017 00:05:56.469 lcore 3: 230017 00:05:56.469 done. 
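The per-lcore counters above ('lcore 0: 230017' and so on) are event_perf's result: with -m 0xF it starts reactors on four cores and, after the -t 1 second run, each reactor reports how many events it processed before printing 'done.'. Re-running it by hand from the repository root looks like:

# four reactors (cpumask 0xF), one second of event submission
./test/event/event_perf/event_perf -m 0xF -t 1
# each 'lcore N: <count>' line is the number of events that reactor handled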
00:05:56.469 00:05:56.469 real 0m1.250s 00:05:56.469 user 0m4.165s 00:05:56.469 sys 0m0.080s 00:05:56.469 03:38:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:56.469 03:38:15 -- common/autotest_common.sh@10 -- # set +x 00:05:56.469 ************************************ 00:05:56.469 END TEST event_perf 00:05:56.469 ************************************ 00:05:56.469 03:38:15 -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:56.469 03:38:15 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:05:56.469 03:38:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:56.469 03:38:15 -- common/autotest_common.sh@10 -- # set +x 00:05:56.469 ************************************ 00:05:56.469 START TEST event_reactor 00:05:56.469 ************************************ 00:05:56.469 03:38:15 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:56.469 [2024-07-14 03:38:15.382444] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:05:56.469 [2024-07-14 03:38:15.382527] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2261111 ] 00:05:56.728 EAL: No free 2048 kB hugepages reported on node 1 00:05:56.728 [2024-07-14 03:38:15.445083] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:56.728 [2024-07-14 03:38:15.535408] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.108 test_start 00:05:58.108 oneshot 00:05:58.108 tick 100 00:05:58.108 tick 100 00:05:58.108 tick 250 00:05:58.108 tick 100 00:05:58.108 tick 100 00:05:58.108 tick 250 00:05:58.108 tick 100 00:05:58.108 tick 500 00:05:58.108 tick 100 00:05:58.108 tick 100 00:05:58.108 tick 250 00:05:58.108 tick 100 00:05:58.108 tick 100 00:05:58.108 test_end 00:05:58.108 00:05:58.108 real 0m1.249s 00:05:58.108 user 0m1.163s 00:05:58.108 sys 0m0.081s 00:05:58.108 03:38:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:58.108 03:38:16 -- common/autotest_common.sh@10 -- # set +x 00:05:58.108 ************************************ 00:05:58.108 END TEST event_reactor 00:05:58.108 ************************************ 00:05:58.108 03:38:16 -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:58.108 03:38:16 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:05:58.108 03:38:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:58.108 03:38:16 -- common/autotest_common.sh@10 -- # set +x 00:05:58.108 ************************************ 00:05:58.108 START TEST event_reactor_perf 00:05:58.108 ************************************ 00:05:58.108 03:38:16 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:58.108 [2024-07-14 03:38:16.655524] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:05:58.108 [2024-07-14 03:38:16.655605] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2261272 ] 00:05:58.108 EAL: No free 2048 kB hugepages reported on node 1 00:05:58.108 [2024-07-14 03:38:16.717180] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:58.108 [2024-07-14 03:38:16.807520] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.045 test_start 00:05:59.045 test_end 00:05:59.045 Performance: 352598 events per second 00:05:59.045 00:05:59.045 real 0m1.248s 00:05:59.045 user 0m1.157s 00:05:59.045 sys 0m0.086s 00:05:59.045 03:38:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:59.045 03:38:17 -- common/autotest_common.sh@10 -- # set +x 00:05:59.045 ************************************ 00:05:59.045 END TEST event_reactor_perf 00:05:59.045 ************************************ 00:05:59.045 03:38:17 -- event/event.sh@49 -- # uname -s 00:05:59.045 03:38:17 -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:59.045 03:38:17 -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:59.045 03:38:17 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:59.045 03:38:17 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:59.045 03:38:17 -- common/autotest_common.sh@10 -- # set +x 00:05:59.045 ************************************ 00:05:59.045 START TEST event_scheduler 00:05:59.045 ************************************ 00:05:59.045 03:38:17 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:59.045 * Looking for test storage... 00:05:59.045 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:05:59.045 03:38:17 -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:59.045 03:38:17 -- scheduler/scheduler.sh@35 -- # scheduler_pid=2261451 00:05:59.045 03:38:17 -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:59.045 03:38:17 -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:59.045 03:38:17 -- scheduler/scheduler.sh@37 -- # waitforlisten 2261451 00:05:59.045 03:38:17 -- common/autotest_common.sh@819 -- # '[' -z 2261451 ']' 00:05:59.045 03:38:17 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:59.045 03:38:17 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:59.045 03:38:17 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:59.045 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:59.045 03:38:17 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:59.045 03:38:17 -- common/autotest_common.sh@10 -- # set +x 00:05:59.303 [2024-07-14 03:38:18.005263] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
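The two reactor tests above poke the event framework directly: reactor -t 1 registers one-shot and periodic pollers and prints a 'tick <period>' line every time one fires, while reactor_perf -t 1 measures raw event throughput on a single core ('Performance: 352598 events per second' in this run). Both are invoked straight from the test tree:

# poller behaviour: expect 'oneshot' plus tick 100/250/500 lines for about one second
./test/event/reactor/reactor -t 1

# single-core event throughput; prints 'Performance: N events per second'
./test/event/reactor_perf/reactor_perf -t 1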
00:05:59.303 [2024-07-14 03:38:18.005350] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2261451 ] 00:05:59.303 EAL: No free 2048 kB hugepages reported on node 1 00:05:59.303 [2024-07-14 03:38:18.066822] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:59.303 [2024-07-14 03:38:18.154421] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.303 [2024-07-14 03:38:18.154478] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:59.303 [2024-07-14 03:38:18.154544] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:59.303 [2024-07-14 03:38:18.154547] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:59.303 03:38:18 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:59.303 03:38:18 -- common/autotest_common.sh@852 -- # return 0 00:05:59.303 03:38:18 -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:59.303 03:38:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:59.303 03:38:18 -- common/autotest_common.sh@10 -- # set +x 00:05:59.303 POWER: Env isn't set yet! 00:05:59.303 POWER: Attempting to initialise ACPI cpufreq power management... 00:05:59.303 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_available_frequencies 00:05:59.303 POWER: Cannot get available frequencies of lcore 0 00:05:59.303 POWER: Attempting to initialise PSTAT power management... 00:05:59.303 POWER: Power management governor of lcore 0 has been set to 'performance' successfully 00:05:59.303 POWER: Initialized successfully for lcore 0 power management 00:05:59.303 POWER: Power management governor of lcore 1 has been set to 'performance' successfully 00:05:59.303 POWER: Initialized successfully for lcore 1 power management 00:05:59.303 POWER: Power management governor of lcore 2 has been set to 'performance' successfully 00:05:59.303 POWER: Initialized successfully for lcore 2 power management 00:05:59.562 POWER: Power management governor of lcore 3 has been set to 'performance' successfully 00:05:59.562 POWER: Initialized successfully for lcore 3 power management 00:05:59.562 [2024-07-14 03:38:18.248098] scheduler_dynamic.c: 387:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:59.562 [2024-07-14 03:38:18.248126] scheduler_dynamic.c: 389:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:59.562 [2024-07-14 03:38:18.248145] scheduler_dynamic.c: 391:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:59.562 03:38:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:59.562 03:38:18 -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:59.562 03:38:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:59.562 03:38:18 -- common/autotest_common.sh@10 -- # set +x 00:05:59.562 [2024-07-14 03:38:18.346836] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
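The POWER/governor messages above come from switching to the dynamic scheduler while the app is still paused: the scheduler test binary is started with --wait-for-rpc, framework_set_scheduler dynamic is issued (which also takes over the CPU frequency governors and logs the load/core/busy limits), and framework_start_init then lets startup finish. The same sequence against a plain spdk_tgt would look roughly like this; framework_set_scheduler, framework_start_init and framework_get_scheduler are all stock RPCs that appear in the method list earlier in this log:

# start the target paused so the scheduler can still be changed
./build/bin/spdk_tgt -m 0xF --wait-for-rpc &
# (wait for the RPC socket as in the waitforlisten sketch above before issuing RPCs)

./scripts/rpc.py framework_set_scheduler dynamic   # also initialises power management
./scripts/rpc.py framework_start_init              # resume normal initialisation
./scripts/rpc.py framework_get_scheduler           # confirm which scheduler is active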
00:05:59.562 03:38:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:59.562 03:38:18 -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:59.562 03:38:18 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:59.562 03:38:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:59.562 03:38:18 -- common/autotest_common.sh@10 -- # set +x 00:05:59.562 ************************************ 00:05:59.562 START TEST scheduler_create_thread 00:05:59.562 ************************************ 00:05:59.562 03:38:18 -- common/autotest_common.sh@1104 -- # scheduler_create_thread 00:05:59.562 03:38:18 -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:59.562 03:38:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:59.562 03:38:18 -- common/autotest_common.sh@10 -- # set +x 00:05:59.562 2 00:05:59.562 03:38:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:59.562 03:38:18 -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:59.562 03:38:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:59.562 03:38:18 -- common/autotest_common.sh@10 -- # set +x 00:05:59.562 3 00:05:59.562 03:38:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:59.562 03:38:18 -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:59.562 03:38:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:59.562 03:38:18 -- common/autotest_common.sh@10 -- # set +x 00:05:59.562 4 00:05:59.562 03:38:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:59.562 03:38:18 -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:59.562 03:38:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:59.562 03:38:18 -- common/autotest_common.sh@10 -- # set +x 00:05:59.562 5 00:05:59.562 03:38:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:59.562 03:38:18 -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:59.562 03:38:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:59.562 03:38:18 -- common/autotest_common.sh@10 -- # set +x 00:05:59.562 6 00:05:59.562 03:38:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:59.562 03:38:18 -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:59.562 03:38:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:59.562 03:38:18 -- common/autotest_common.sh@10 -- # set +x 00:05:59.562 7 00:05:59.562 03:38:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:59.562 03:38:18 -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:59.562 03:38:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:59.562 03:38:18 -- common/autotest_common.sh@10 -- # set +x 00:05:59.562 8 00:05:59.562 03:38:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:59.562 03:38:18 -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:59.562 03:38:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:59.562 03:38:18 -- common/autotest_common.sh@10 -- # set +x 00:05:59.562 9 00:05:59.562 
03:38:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:59.562 03:38:18 -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:59.562 03:38:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:59.562 03:38:18 -- common/autotest_common.sh@10 -- # set +x 00:05:59.562 10 00:05:59.562 03:38:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:59.562 03:38:18 -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:59.562 03:38:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:59.562 03:38:18 -- common/autotest_common.sh@10 -- # set +x 00:05:59.562 03:38:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:59.562 03:38:18 -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:59.562 03:38:18 -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:59.562 03:38:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:59.562 03:38:18 -- common/autotest_common.sh@10 -- # set +x 00:06:00.130 03:38:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:00.130 03:38:18 -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:00.130 03:38:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:00.130 03:38:18 -- common/autotest_common.sh@10 -- # set +x 00:06:01.531 03:38:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:01.531 03:38:20 -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:01.531 03:38:20 -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:01.531 03:38:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:01.531 03:38:20 -- common/autotest_common.sh@10 -- # set +x 00:06:02.908 03:38:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:02.908 00:06:02.908 real 0m3.098s 00:06:02.908 user 0m0.011s 00:06:02.908 sys 0m0.004s 00:06:02.908 03:38:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:02.908 03:38:21 -- common/autotest_common.sh@10 -- # set +x 00:06:02.908 ************************************ 00:06:02.908 END TEST scheduler_create_thread 00:06:02.908 ************************************ 00:06:02.908 03:38:21 -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:02.908 03:38:21 -- scheduler/scheduler.sh@46 -- # killprocess 2261451 00:06:02.908 03:38:21 -- common/autotest_common.sh@926 -- # '[' -z 2261451 ']' 00:06:02.908 03:38:21 -- common/autotest_common.sh@930 -- # kill -0 2261451 00:06:02.908 03:38:21 -- common/autotest_common.sh@931 -- # uname 00:06:02.908 03:38:21 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:02.908 03:38:21 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2261451 00:06:02.908 03:38:21 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:06:02.908 03:38:21 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:06:02.908 03:38:21 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2261451' 00:06:02.908 killing process with pid 2261451 00:06:02.908 03:38:21 -- common/autotest_common.sh@945 -- # kill 2261451 00:06:02.908 03:38:21 -- common/autotest_common.sh@950 -- # wait 2261451 00:06:02.908 [2024-07-14 03:38:21.831191] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
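scheduler_create_thread drives the test app through an RPC plugin rather than the stock rpc.py command set: scheduler_thread_create spawns a thread with a given cpumask (-m) and active percentage (-a) and prints its thread id, scheduler_thread_set_active changes that percentage later, and scheduler_thread_delete removes the thread, so the dynamic scheduler can be watched rebalancing the mix. Assuming the scheduler_plugin module is importable (the test arranges this), the calls traced above boil down to:

rpc='./scripts/rpc.py --plugin scheduler_plugin'   # plugin shipped with the scheduler test

$rpc scheduler_thread_create -n active_pinned -m 0x1 -a 100    # pinned to core 0, 100% busy
$rpc scheduler_thread_create -n idle_pinned   -m 0x1 -a 0      # pinned to core 0, idle
thread_id=$($rpc scheduler_thread_create -n half_active -a 0)  # unpinned, starts idle
$rpc scheduler_thread_set_active "$thread_id" 50               # raise its load to 50%
$rpc scheduler_thread_delete "$thread_id"                      # and remove it again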
00:06:03.166 POWER: Power management governor of lcore 0 has been set to 'userspace' successfully 00:06:03.166 POWER: Power management of lcore 0 has exited from 'performance' mode and been set back to the original 00:06:03.166 POWER: Power management governor of lcore 1 has been set to 'schedutil' successfully 00:06:03.166 POWER: Power management of lcore 1 has exited from 'performance' mode and been set back to the original 00:06:03.166 POWER: Power management governor of lcore 2 has been set to 'schedutil' successfully 00:06:03.166 POWER: Power management of lcore 2 has exited from 'performance' mode and been set back to the original 00:06:03.167 POWER: Power management governor of lcore 3 has been set to 'schedutil' successfully 00:06:03.167 POWER: Power management of lcore 3 has exited from 'performance' mode and been set back to the original 00:06:03.167 00:06:03.167 real 0m4.161s 00:06:03.167 user 0m6.813s 00:06:03.167 sys 0m0.293s 00:06:03.167 03:38:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:03.167 03:38:22 -- common/autotest_common.sh@10 -- # set +x 00:06:03.167 ************************************ 00:06:03.167 END TEST event_scheduler 00:06:03.167 ************************************ 00:06:03.167 03:38:22 -- event/event.sh@51 -- # modprobe -n nbd 00:06:03.426 03:38:22 -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:03.426 03:38:22 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:03.426 03:38:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:03.426 03:38:22 -- common/autotest_common.sh@10 -- # set +x 00:06:03.426 ************************************ 00:06:03.426 START TEST app_repeat 00:06:03.426 ************************************ 00:06:03.426 03:38:22 -- common/autotest_common.sh@1104 -- # app_repeat_test 00:06:03.426 03:38:22 -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:03.426 03:38:22 -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:03.426 03:38:22 -- event/event.sh@13 -- # local nbd_list 00:06:03.426 03:38:22 -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:03.426 03:38:22 -- event/event.sh@14 -- # local bdev_list 00:06:03.426 03:38:22 -- event/event.sh@15 -- # local repeat_times=4 00:06:03.426 03:38:22 -- event/event.sh@17 -- # modprobe nbd 00:06:03.426 03:38:22 -- event/event.sh@19 -- # repeat_pid=2262046 00:06:03.426 03:38:22 -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:03.426 03:38:22 -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:03.426 03:38:22 -- event/event.sh@21 -- # echo 'Process app_repeat pid: 2262046' 00:06:03.426 Process app_repeat pid: 2262046 00:06:03.426 03:38:22 -- event/event.sh@23 -- # for i in {0..2} 00:06:03.426 03:38:22 -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:03.426 spdk_app_start Round 0 00:06:03.426 03:38:22 -- event/event.sh@25 -- # waitforlisten 2262046 /var/tmp/spdk-nbd.sock 00:06:03.426 03:38:22 -- common/autotest_common.sh@819 -- # '[' -z 2262046 ']' 00:06:03.426 03:38:22 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:03.426 03:38:22 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:03.426 03:38:22 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
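app_repeat, which begins here, exercises the nbd path: over the /var/tmp/spdk-nbd.sock RPC socket it creates two 64 MiB malloc bdevs with a 4096-byte block size, exports them as /dev/nbd0 and /dev/nbd1, and then verifies them with dd and cmp, as the rest of the trace shows. The RPC side of that setup is roughly:

sock=/var/tmp/spdk-nbd.sock

# two 64 MiB malloc bdevs, 4096-byte blocks (become Malloc0 and Malloc1)
./scripts/rpc.py -s "$sock" bdev_malloc_create 64 4096
./scripts/rpc.py -s "$sock" bdev_malloc_create 64 4096

# expose them as kernel block devices through the nbd module (modprobe nbd first)
./scripts/rpc.py -s "$sock" nbd_start_disk Malloc0 /dev/nbd0
./scripts/rpc.py -s "$sock" nbd_start_disk Malloc1 /dev/nbd1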
00:06:03.426 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:03.426 03:38:22 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:03.426 03:38:22 -- common/autotest_common.sh@10 -- # set +x 00:06:03.426 [2024-07-14 03:38:22.131658] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:06:03.426 [2024-07-14 03:38:22.131726] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2262046 ] 00:06:03.426 EAL: No free 2048 kB hugepages reported on node 1 00:06:03.426 [2024-07-14 03:38:22.190797] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:03.426 [2024-07-14 03:38:22.278497] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:03.426 [2024-07-14 03:38:22.278501] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.355 03:38:23 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:04.355 03:38:23 -- common/autotest_common.sh@852 -- # return 0 00:06:04.355 03:38:23 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:04.612 Malloc0 00:06:04.612 03:38:23 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:04.870 Malloc1 00:06:04.870 03:38:23 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:04.870 03:38:23 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:04.870 03:38:23 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:04.870 03:38:23 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:04.870 03:38:23 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:04.870 03:38:23 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:04.871 03:38:23 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:04.871 03:38:23 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:04.871 03:38:23 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:04.871 03:38:23 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:04.871 03:38:23 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:04.871 03:38:23 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:04.871 03:38:23 -- bdev/nbd_common.sh@12 -- # local i 00:06:04.871 03:38:23 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:04.871 03:38:23 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:04.871 03:38:23 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:05.129 /dev/nbd0 00:06:05.129 03:38:23 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:05.129 03:38:23 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:05.129 03:38:23 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:06:05.129 03:38:23 -- common/autotest_common.sh@857 -- # local i 00:06:05.129 03:38:23 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:06:05.129 03:38:23 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:06:05.129 03:38:23 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:06:05.129 03:38:23 -- 
common/autotest_common.sh@861 -- # break 00:06:05.129 03:38:23 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:06:05.129 03:38:23 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:06:05.129 03:38:23 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:05.129 1+0 records in 00:06:05.129 1+0 records out 00:06:05.129 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000139436 s, 29.4 MB/s 00:06:05.129 03:38:23 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:05.129 03:38:23 -- common/autotest_common.sh@874 -- # size=4096 00:06:05.129 03:38:23 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:05.129 03:38:23 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:06:05.129 03:38:23 -- common/autotest_common.sh@877 -- # return 0 00:06:05.129 03:38:23 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:05.129 03:38:23 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:05.129 03:38:23 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:05.388 /dev/nbd1 00:06:05.388 03:38:24 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:05.388 03:38:24 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:05.388 03:38:24 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:06:05.388 03:38:24 -- common/autotest_common.sh@857 -- # local i 00:06:05.388 03:38:24 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:06:05.388 03:38:24 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:06:05.388 03:38:24 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:06:05.388 03:38:24 -- common/autotest_common.sh@861 -- # break 00:06:05.388 03:38:24 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:06:05.388 03:38:24 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:06:05.388 03:38:24 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:05.388 1+0 records in 00:06:05.388 1+0 records out 00:06:05.388 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000232887 s, 17.6 MB/s 00:06:05.388 03:38:24 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:05.388 03:38:24 -- common/autotest_common.sh@874 -- # size=4096 00:06:05.388 03:38:24 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:05.388 03:38:24 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:06:05.388 03:38:24 -- common/autotest_common.sh@877 -- # return 0 00:06:05.388 03:38:24 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:05.388 03:38:24 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:05.388 03:38:24 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:05.388 03:38:24 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:05.388 03:38:24 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:05.646 03:38:24 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:05.646 { 00:06:05.646 "nbd_device": "/dev/nbd0", 00:06:05.646 "bdev_name": "Malloc0" 00:06:05.646 }, 00:06:05.646 { 00:06:05.646 "nbd_device": "/dev/nbd1", 
00:06:05.646 "bdev_name": "Malloc1" 00:06:05.646 } 00:06:05.646 ]' 00:06:05.646 03:38:24 -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:05.646 { 00:06:05.646 "nbd_device": "/dev/nbd0", 00:06:05.646 "bdev_name": "Malloc0" 00:06:05.646 }, 00:06:05.646 { 00:06:05.646 "nbd_device": "/dev/nbd1", 00:06:05.646 "bdev_name": "Malloc1" 00:06:05.646 } 00:06:05.646 ]' 00:06:05.646 03:38:24 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:05.646 03:38:24 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:05.646 /dev/nbd1' 00:06:05.646 03:38:24 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:05.646 /dev/nbd1' 00:06:05.646 03:38:24 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:05.646 03:38:24 -- bdev/nbd_common.sh@65 -- # count=2 00:06:05.646 03:38:24 -- bdev/nbd_common.sh@66 -- # echo 2 00:06:05.646 03:38:24 -- bdev/nbd_common.sh@95 -- # count=2 00:06:05.646 03:38:24 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:05.646 03:38:24 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:05.646 03:38:24 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:05.646 03:38:24 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:05.646 03:38:24 -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:05.646 03:38:24 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:05.646 03:38:24 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:05.646 03:38:24 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:05.646 256+0 records in 00:06:05.646 256+0 records out 00:06:05.646 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00477055 s, 220 MB/s 00:06:05.646 03:38:24 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:05.646 03:38:24 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:05.646 256+0 records in 00:06:05.646 256+0 records out 00:06:05.646 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.020446 s, 51.3 MB/s 00:06:05.646 03:38:24 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:05.646 03:38:24 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:05.646 256+0 records in 00:06:05.646 256+0 records out 00:06:05.646 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0217882 s, 48.1 MB/s 00:06:05.646 03:38:24 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:05.646 03:38:24 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:05.646 03:38:24 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:05.647 03:38:24 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:05.647 03:38:24 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:05.647 03:38:24 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:05.647 03:38:24 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:05.647 03:38:24 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:05.647 03:38:24 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:05.647 03:38:24 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:05.647 03:38:24 -- bdev/nbd_common.sh@83 -- # cmp -b -n 
1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:05.647 03:38:24 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:05.647 03:38:24 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:05.647 03:38:24 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:05.647 03:38:24 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:05.647 03:38:24 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:05.647 03:38:24 -- bdev/nbd_common.sh@51 -- # local i 00:06:05.647 03:38:24 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:05.647 03:38:24 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:05.905 03:38:24 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:05.905 03:38:24 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:05.905 03:38:24 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:05.905 03:38:24 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:05.905 03:38:24 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:05.905 03:38:24 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:05.905 03:38:24 -- bdev/nbd_common.sh@41 -- # break 00:06:05.905 03:38:24 -- bdev/nbd_common.sh@45 -- # return 0 00:06:05.905 03:38:24 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:05.905 03:38:24 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:06.164 03:38:25 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:06.164 03:38:25 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:06.164 03:38:25 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:06.164 03:38:25 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:06.164 03:38:25 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:06.164 03:38:25 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:06.164 03:38:25 -- bdev/nbd_common.sh@41 -- # break 00:06:06.164 03:38:25 -- bdev/nbd_common.sh@45 -- # return 0 00:06:06.164 03:38:25 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:06.164 03:38:25 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:06.164 03:38:25 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:06.422 03:38:25 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:06.422 03:38:25 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:06.422 03:38:25 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:06.422 03:38:25 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:06.422 03:38:25 -- bdev/nbd_common.sh@65 -- # echo '' 00:06:06.422 03:38:25 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:06.422 03:38:25 -- bdev/nbd_common.sh@65 -- # true 00:06:06.422 03:38:25 -- bdev/nbd_common.sh@65 -- # count=0 00:06:06.422 03:38:25 -- bdev/nbd_common.sh@66 -- # echo 0 00:06:06.422 03:38:25 -- bdev/nbd_common.sh@104 -- # count=0 00:06:06.422 03:38:25 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:06.422 03:38:25 -- bdev/nbd_common.sh@109 -- # return 0 00:06:06.422 03:38:25 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:06.682 03:38:25 -- event/event.sh@35 -- # 
sleep 3 00:06:06.941 [2024-07-14 03:38:25.834331] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:07.201 [2024-07-14 03:38:25.926490] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.201 [2024-07-14 03:38:25.926491] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:07.201 [2024-07-14 03:38:25.986399] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:07.201 [2024-07-14 03:38:25.986463] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:09.735 03:38:28 -- event/event.sh@23 -- # for i in {0..2} 00:06:09.735 03:38:28 -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:09.735 spdk_app_start Round 1 00:06:09.735 03:38:28 -- event/event.sh@25 -- # waitforlisten 2262046 /var/tmp/spdk-nbd.sock 00:06:09.735 03:38:28 -- common/autotest_common.sh@819 -- # '[' -z 2262046 ']' 00:06:09.735 03:38:28 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:09.735 03:38:28 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:09.735 03:38:28 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:09.735 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:09.735 03:38:28 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:09.735 03:38:28 -- common/autotest_common.sh@10 -- # set +x 00:06:09.994 03:38:28 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:09.994 03:38:28 -- common/autotest_common.sh@852 -- # return 0 00:06:09.994 03:38:28 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:10.252 Malloc0 00:06:10.252 03:38:29 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:10.511 Malloc1 00:06:10.511 03:38:29 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:10.511 03:38:29 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:10.511 03:38:29 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:10.511 03:38:29 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:10.511 03:38:29 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:10.511 03:38:29 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:10.511 03:38:29 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:10.511 03:38:29 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:10.511 03:38:29 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:10.511 03:38:29 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:10.511 03:38:29 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:10.511 03:38:29 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:10.511 03:38:29 -- bdev/nbd_common.sh@12 -- # local i 00:06:10.511 03:38:29 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:10.511 03:38:29 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:10.511 03:38:29 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:10.770 /dev/nbd0 00:06:10.770 03:38:29 -- 
bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:10.770 03:38:29 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:10.770 03:38:29 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:06:10.770 03:38:29 -- common/autotest_common.sh@857 -- # local i 00:06:10.770 03:38:29 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:06:10.770 03:38:29 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:06:10.770 03:38:29 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:06:10.770 03:38:29 -- common/autotest_common.sh@861 -- # break 00:06:10.770 03:38:29 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:06:10.770 03:38:29 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:06:10.770 03:38:29 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:10.770 1+0 records in 00:06:10.770 1+0 records out 00:06:10.770 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00015763 s, 26.0 MB/s 00:06:10.770 03:38:29 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:10.770 03:38:29 -- common/autotest_common.sh@874 -- # size=4096 00:06:10.770 03:38:29 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:10.770 03:38:29 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:06:10.770 03:38:29 -- common/autotest_common.sh@877 -- # return 0 00:06:10.770 03:38:29 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:10.770 03:38:29 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:10.770 03:38:29 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:11.029 /dev/nbd1 00:06:11.029 03:38:29 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:11.029 03:38:29 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:11.029 03:38:29 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:06:11.029 03:38:29 -- common/autotest_common.sh@857 -- # local i 00:06:11.029 03:38:29 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:06:11.029 03:38:29 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:06:11.029 03:38:29 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:06:11.029 03:38:29 -- common/autotest_common.sh@861 -- # break 00:06:11.029 03:38:29 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:06:11.029 03:38:29 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:06:11.029 03:38:29 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:11.029 1+0 records in 00:06:11.029 1+0 records out 00:06:11.029 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000215787 s, 19.0 MB/s 00:06:11.029 03:38:29 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:11.029 03:38:29 -- common/autotest_common.sh@874 -- # size=4096 00:06:11.029 03:38:29 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:11.029 03:38:29 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:06:11.029 03:38:29 -- common/autotest_common.sh@877 -- # return 0 00:06:11.029 03:38:29 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:11.029 03:38:29 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:11.029 03:38:29 -- bdev/nbd_common.sh@95 
-- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:11.029 03:38:29 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:11.029 03:38:29 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:11.288 03:38:30 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:11.288 { 00:06:11.288 "nbd_device": "/dev/nbd0", 00:06:11.288 "bdev_name": "Malloc0" 00:06:11.288 }, 00:06:11.288 { 00:06:11.288 "nbd_device": "/dev/nbd1", 00:06:11.288 "bdev_name": "Malloc1" 00:06:11.288 } 00:06:11.288 ]' 00:06:11.288 03:38:30 -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:11.288 { 00:06:11.288 "nbd_device": "/dev/nbd0", 00:06:11.288 "bdev_name": "Malloc0" 00:06:11.288 }, 00:06:11.288 { 00:06:11.288 "nbd_device": "/dev/nbd1", 00:06:11.288 "bdev_name": "Malloc1" 00:06:11.288 } 00:06:11.288 ]' 00:06:11.288 03:38:30 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:11.288 03:38:30 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:11.288 /dev/nbd1' 00:06:11.288 03:38:30 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:11.288 /dev/nbd1' 00:06:11.288 03:38:30 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:11.288 03:38:30 -- bdev/nbd_common.sh@65 -- # count=2 00:06:11.288 03:38:30 -- bdev/nbd_common.sh@66 -- # echo 2 00:06:11.288 03:38:30 -- bdev/nbd_common.sh@95 -- # count=2 00:06:11.288 03:38:30 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:11.288 03:38:30 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:11.288 03:38:30 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:11.288 03:38:30 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:11.288 03:38:30 -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:11.288 03:38:30 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:11.288 03:38:30 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:11.288 03:38:30 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:11.288 256+0 records in 00:06:11.288 256+0 records out 00:06:11.288 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00503497 s, 208 MB/s 00:06:11.288 03:38:30 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:11.288 03:38:30 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:11.288 256+0 records in 00:06:11.288 256+0 records out 00:06:11.288 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0203108 s, 51.6 MB/s 00:06:11.288 03:38:30 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:11.288 03:38:30 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:11.288 256+0 records in 00:06:11.288 256+0 records out 00:06:11.288 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0225262 s, 46.5 MB/s 00:06:11.288 03:38:30 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:11.288 03:38:30 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:11.288 03:38:30 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:11.288 03:38:30 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:11.288 03:38:30 -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:11.288 03:38:30 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:11.288 03:38:30 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:11.288 03:38:30 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:11.288 03:38:30 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:11.547 03:38:30 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:11.547 03:38:30 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:11.547 03:38:30 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:11.547 03:38:30 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:11.547 03:38:30 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:11.547 03:38:30 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:11.547 03:38:30 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:11.547 03:38:30 -- bdev/nbd_common.sh@51 -- # local i 00:06:11.547 03:38:30 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:11.547 03:38:30 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:11.805 03:38:30 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:11.805 03:38:30 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:11.805 03:38:30 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:11.805 03:38:30 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:11.805 03:38:30 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:11.805 03:38:30 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:11.805 03:38:30 -- bdev/nbd_common.sh@41 -- # break 00:06:11.805 03:38:30 -- bdev/nbd_common.sh@45 -- # return 0 00:06:11.805 03:38:30 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:11.805 03:38:30 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:12.064 03:38:30 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:12.064 03:38:30 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:12.064 03:38:30 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:12.064 03:38:30 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:12.064 03:38:30 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:12.064 03:38:30 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:12.064 03:38:30 -- bdev/nbd_common.sh@41 -- # break 00:06:12.064 03:38:30 -- bdev/nbd_common.sh@45 -- # return 0 00:06:12.064 03:38:30 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:12.064 03:38:30 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:12.064 03:38:30 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:12.064 03:38:30 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:12.064 03:38:30 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:12.064 03:38:30 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:12.323 03:38:31 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:12.323 03:38:31 -- bdev/nbd_common.sh@65 -- # echo '' 00:06:12.323 03:38:31 -- bdev/nbd_common.sh@65 -- # 
grep -c /dev/nbd 00:06:12.323 03:38:31 -- bdev/nbd_common.sh@65 -- # true 00:06:12.323 03:38:31 -- bdev/nbd_common.sh@65 -- # count=0 00:06:12.323 03:38:31 -- bdev/nbd_common.sh@66 -- # echo 0 00:06:12.323 03:38:31 -- bdev/nbd_common.sh@104 -- # count=0 00:06:12.323 03:38:31 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:12.323 03:38:31 -- bdev/nbd_common.sh@109 -- # return 0 00:06:12.323 03:38:31 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:12.581 03:38:31 -- event/event.sh@35 -- # sleep 3 00:06:12.839 [2024-07-14 03:38:31.528328] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:12.839 [2024-07-14 03:38:31.618923] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:12.839 [2024-07-14 03:38:31.618928] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.839 [2024-07-14 03:38:31.674279] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:12.839 [2024-07-14 03:38:31.674358] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:15.362 03:38:34 -- event/event.sh@23 -- # for i in {0..2} 00:06:15.362 03:38:34 -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:15.362 spdk_app_start Round 2 00:06:15.362 03:38:34 -- event/event.sh@25 -- # waitforlisten 2262046 /var/tmp/spdk-nbd.sock 00:06:15.362 03:38:34 -- common/autotest_common.sh@819 -- # '[' -z 2262046 ']' 00:06:15.362 03:38:34 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:15.362 03:38:34 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:15.362 03:38:34 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:15.362 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
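The waitforlisten call above blocks until the newly started app_repeat process has created /var/tmp/spdk-nbd.sock and answers RPCs, and only then lets the test issue rpc.py commands. A minimal sketch of that polling pattern, assuming a hypothetical wait_for_rpc_socket helper and an rpc_get_methods probe (the real helper in common/autotest_common.sh may differ):

    # Sketch: poll until a PID is alive and its UNIX-domain RPC socket answers.
    # Helper name, retry budget and the rpc_get_methods probe are assumptions.
    wait_for_rpc_socket() {
        local pid=$1
        local sock=${2:-/var/tmp/spdk-nbd.sock}
        local retries=${3:-100}
        local i
        echo "Waiting for process to start up and listen on UNIX domain socket $sock..."
        for ((i = 0; i < retries; i++)); do
            kill -0 "$pid" 2>/dev/null || return 1        # process died while waiting
            if ./scripts/rpc.py -s "$sock" rpc_get_methods &>/dev/null; then
                return 0                                  # app is up and listening
            fi
            sleep 0.1
        done
        return 1
    }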
00:06:15.362 03:38:34 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:15.362 03:38:34 -- common/autotest_common.sh@10 -- # set +x 00:06:15.619 03:38:34 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:15.619 03:38:34 -- common/autotest_common.sh@852 -- # return 0 00:06:15.619 03:38:34 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:15.877 Malloc0 00:06:15.877 03:38:34 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:16.473 Malloc1 00:06:16.474 03:38:35 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:16.474 03:38:35 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:16.474 03:38:35 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:16.474 03:38:35 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:16.474 03:38:35 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:16.474 03:38:35 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:16.474 03:38:35 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:16.474 03:38:35 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:16.474 03:38:35 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:16.474 03:38:35 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:16.474 03:38:35 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:16.474 03:38:35 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:16.474 03:38:35 -- bdev/nbd_common.sh@12 -- # local i 00:06:16.474 03:38:35 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:16.474 03:38:35 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:16.474 03:38:35 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:16.474 /dev/nbd0 00:06:16.474 03:38:35 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:16.474 03:38:35 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:16.474 03:38:35 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:06:16.474 03:38:35 -- common/autotest_common.sh@857 -- # local i 00:06:16.474 03:38:35 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:06:16.474 03:38:35 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:06:16.474 03:38:35 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:06:16.474 03:38:35 -- common/autotest_common.sh@861 -- # break 00:06:16.474 03:38:35 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:06:16.474 03:38:35 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:06:16.474 03:38:35 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:16.474 1+0 records in 00:06:16.474 1+0 records out 00:06:16.474 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000140143 s, 29.2 MB/s 00:06:16.474 03:38:35 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:16.474 03:38:35 -- common/autotest_common.sh@874 -- # size=4096 00:06:16.474 03:38:35 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:16.474 03:38:35 -- common/autotest_common.sh@876 -- # 
'[' 4096 '!=' 0 ']' 00:06:16.474 03:38:35 -- common/autotest_common.sh@877 -- # return 0 00:06:16.474 03:38:35 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:16.474 03:38:35 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:16.474 03:38:35 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:16.731 /dev/nbd1 00:06:16.731 03:38:35 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:16.731 03:38:35 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:16.731 03:38:35 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:06:16.731 03:38:35 -- common/autotest_common.sh@857 -- # local i 00:06:16.731 03:38:35 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:06:16.731 03:38:35 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:06:16.731 03:38:35 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:06:16.731 03:38:35 -- common/autotest_common.sh@861 -- # break 00:06:16.731 03:38:35 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:06:16.731 03:38:35 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:06:16.731 03:38:35 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:16.731 1+0 records in 00:06:16.731 1+0 records out 00:06:16.731 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000198734 s, 20.6 MB/s 00:06:16.731 03:38:35 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:16.731 03:38:35 -- common/autotest_common.sh@874 -- # size=4096 00:06:16.731 03:38:35 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:16.731 03:38:35 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:06:16.731 03:38:35 -- common/autotest_common.sh@877 -- # return 0 00:06:16.731 03:38:35 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:16.731 03:38:35 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:16.732 03:38:35 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:16.732 03:38:35 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:16.732 03:38:35 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:16.989 03:38:35 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:16.989 { 00:06:16.989 "nbd_device": "/dev/nbd0", 00:06:16.989 "bdev_name": "Malloc0" 00:06:16.989 }, 00:06:16.989 { 00:06:16.989 "nbd_device": "/dev/nbd1", 00:06:16.989 "bdev_name": "Malloc1" 00:06:16.989 } 00:06:16.989 ]' 00:06:16.989 03:38:35 -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:16.989 { 00:06:16.989 "nbd_device": "/dev/nbd0", 00:06:16.989 "bdev_name": "Malloc0" 00:06:16.989 }, 00:06:16.989 { 00:06:16.989 "nbd_device": "/dev/nbd1", 00:06:16.989 "bdev_name": "Malloc1" 00:06:16.989 } 00:06:16.989 ]' 00:06:16.989 03:38:35 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:16.989 03:38:35 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:16.989 /dev/nbd1' 00:06:16.989 03:38:35 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:16.989 /dev/nbd1' 00:06:16.989 03:38:35 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:16.989 03:38:35 -- bdev/nbd_common.sh@65 -- # count=2 00:06:16.989 03:38:35 -- bdev/nbd_common.sh@66 -- # echo 2 00:06:16.989 03:38:35 -- bdev/nbd_common.sh@95 -- # count=2 00:06:16.989 03:38:35 -- 
bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:16.989 03:38:35 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:16.989 03:38:35 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:16.989 03:38:35 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:16.989 03:38:35 -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:16.989 03:38:35 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:16.989 03:38:35 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:16.989 03:38:35 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:16.989 256+0 records in 00:06:16.989 256+0 records out 00:06:16.989 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00506007 s, 207 MB/s 00:06:16.989 03:38:35 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:16.989 03:38:35 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:17.246 256+0 records in 00:06:17.246 256+0 records out 00:06:17.246 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0216477 s, 48.4 MB/s 00:06:17.246 03:38:35 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:17.246 03:38:35 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:17.246 256+0 records in 00:06:17.246 256+0 records out 00:06:17.246 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0248096 s, 42.3 MB/s 00:06:17.246 03:38:35 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:17.246 03:38:35 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:17.246 03:38:35 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:17.246 03:38:35 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:17.246 03:38:35 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:17.246 03:38:35 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:17.246 03:38:35 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:17.246 03:38:35 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:17.246 03:38:35 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:17.246 03:38:35 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:17.246 03:38:35 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:17.246 03:38:35 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:17.246 03:38:35 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:17.246 03:38:35 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:17.246 03:38:35 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:17.246 03:38:35 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:17.246 03:38:35 -- bdev/nbd_common.sh@51 -- # local i 00:06:17.246 03:38:35 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:17.246 03:38:35 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:17.503 03:38:36 
-- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:17.503 03:38:36 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:17.503 03:38:36 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:17.503 03:38:36 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:17.503 03:38:36 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:17.503 03:38:36 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:17.503 03:38:36 -- bdev/nbd_common.sh@41 -- # break 00:06:17.503 03:38:36 -- bdev/nbd_common.sh@45 -- # return 0 00:06:17.503 03:38:36 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:17.503 03:38:36 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:17.760 03:38:36 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:17.760 03:38:36 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:17.760 03:38:36 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:17.760 03:38:36 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:17.760 03:38:36 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:17.760 03:38:36 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:17.760 03:38:36 -- bdev/nbd_common.sh@41 -- # break 00:06:17.760 03:38:36 -- bdev/nbd_common.sh@45 -- # return 0 00:06:17.760 03:38:36 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:17.760 03:38:36 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:17.760 03:38:36 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:18.018 03:38:36 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:18.018 03:38:36 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:18.018 03:38:36 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:18.018 03:38:36 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:18.018 03:38:36 -- bdev/nbd_common.sh@65 -- # echo '' 00:06:18.018 03:38:36 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:18.018 03:38:36 -- bdev/nbd_common.sh@65 -- # true 00:06:18.018 03:38:36 -- bdev/nbd_common.sh@65 -- # count=0 00:06:18.018 03:38:36 -- bdev/nbd_common.sh@66 -- # echo 0 00:06:18.018 03:38:36 -- bdev/nbd_common.sh@104 -- # count=0 00:06:18.018 03:38:36 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:18.018 03:38:36 -- bdev/nbd_common.sh@109 -- # return 0 00:06:18.018 03:38:36 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:18.276 03:38:37 -- event/event.sh@35 -- # sleep 3 00:06:18.534 [2024-07-14 03:38:37.269403] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:18.534 [2024-07-14 03:38:37.356482] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:18.534 [2024-07-14 03:38:37.356486] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.534 [2024-07-14 03:38:37.417731] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:18.534 [2024-07-14 03:38:37.417811] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
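Each round above runs the same data-path check: two 64 MB malloc bdevs (4096-byte blocks) are exposed as /dev/nbd0 and /dev/nbd1, a 1 MiB file of random data is written to each device with dd, and cmp verifies the readback before the disks are stopped again. A condensed sketch of one such round; the RPC names, sizes and the cmp invocation mirror the log, while the surrounding script structure is illustrative:

    # Sketch of one app_repeat verification round (structure illustrative).
    RPC="./scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    TMP=/tmp/nbdrandtest

    $RPC bdev_malloc_create 64 4096              # -> Malloc0
    $RPC bdev_malloc_create 64 4096              # -> Malloc1
    $RPC nbd_start_disk Malloc0 /dev/nbd0
    $RPC nbd_start_disk Malloc1 /dev/nbd1

    dd if=/dev/urandom of="$TMP" bs=4096 count=256               # 1 MiB of random data
    for dev in /dev/nbd0 /dev/nbd1; do
        dd if="$TMP" of="$dev" bs=4096 count=256 oflag=direct    # write it to the device
        cmp -b -n 1M "$TMP" "$dev"                               # read back and compare
    done

    $RPC nbd_stop_disk /dev/nbd0
    $RPC nbd_stop_disk /dev/nbd1
    rm -f "$TMP"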
00:06:21.813 03:38:40 -- event/event.sh@38 -- # waitforlisten 2262046 /var/tmp/spdk-nbd.sock 00:06:21.813 03:38:40 -- common/autotest_common.sh@819 -- # '[' -z 2262046 ']' 00:06:21.813 03:38:40 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:21.813 03:38:40 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:21.813 03:38:40 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:21.813 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:21.813 03:38:40 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:21.813 03:38:40 -- common/autotest_common.sh@10 -- # set +x 00:06:21.813 03:38:40 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:21.813 03:38:40 -- common/autotest_common.sh@852 -- # return 0 00:06:21.813 03:38:40 -- event/event.sh@39 -- # killprocess 2262046 00:06:21.813 03:38:40 -- common/autotest_common.sh@926 -- # '[' -z 2262046 ']' 00:06:21.813 03:38:40 -- common/autotest_common.sh@930 -- # kill -0 2262046 00:06:21.813 03:38:40 -- common/autotest_common.sh@931 -- # uname 00:06:21.813 03:38:40 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:21.813 03:38:40 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2262046 00:06:21.813 03:38:40 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:21.813 03:38:40 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:21.813 03:38:40 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2262046' 00:06:21.813 killing process with pid 2262046 00:06:21.813 03:38:40 -- common/autotest_common.sh@945 -- # kill 2262046 00:06:21.813 03:38:40 -- common/autotest_common.sh@950 -- # wait 2262046 00:06:21.813 spdk_app_start is called in Round 0. 00:06:21.813 Shutdown signal received, stop current app iteration 00:06:21.813 Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 reinitialization... 00:06:21.813 spdk_app_start is called in Round 1. 00:06:21.813 Shutdown signal received, stop current app iteration 00:06:21.813 Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 reinitialization... 00:06:21.813 spdk_app_start is called in Round 2. 00:06:21.813 Shutdown signal received, stop current app iteration 00:06:21.813 Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 reinitialization... 00:06:21.813 spdk_app_start is called in Round 3. 
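The killprocess step above first checks that the PID still belongs to an SPDK reactor (ps reports the comm as reactor_0), prints the "killing process with pid ..." line seen in the output, signals the process and then waits for it to exit. A simplified sketch of that logic, with the sudo special-casing and error handling trimmed:

    # Sketch: stop an SPDK app by PID and reap it (simplified from the flow above).
    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1
        kill -0 "$pid" 2>/dev/null || return 0            # nothing left to kill
        local name
        name=$(ps --no-headers -o comm= "$pid")           # e.g. reactor_0
        echo "killing process with pid $pid"
        if [ "$name" = "sudo" ]; then
            sudo kill "$pid"                              # app was launched under sudo (simplified handling)
        else
            kill "$pid"
        fi
        wait "$pid" 2>/dev/null || true                   # reap it if it is our child
    }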
00:06:21.813 Shutdown signal received, stop current app iteration 00:06:21.813 03:38:40 -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:21.813 03:38:40 -- event/event.sh@42 -- # return 0 00:06:21.813 00:06:21.813 real 0m18.409s 00:06:21.813 user 0m40.124s 00:06:21.813 sys 0m3.113s 00:06:21.813 03:38:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:21.813 03:38:40 -- common/autotest_common.sh@10 -- # set +x 00:06:21.813 ************************************ 00:06:21.813 END TEST app_repeat 00:06:21.813 ************************************ 00:06:21.813 03:38:40 -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:21.813 03:38:40 -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:21.813 03:38:40 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:21.813 03:38:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:21.813 03:38:40 -- common/autotest_common.sh@10 -- # set +x 00:06:21.813 ************************************ 00:06:21.813 START TEST cpu_locks 00:06:21.813 ************************************ 00:06:21.813 03:38:40 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:21.813 * Looking for test storage... 00:06:21.813 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:06:21.813 03:38:40 -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:21.813 03:38:40 -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:21.813 03:38:40 -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:21.813 03:38:40 -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:21.813 03:38:40 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:21.813 03:38:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:21.813 03:38:40 -- common/autotest_common.sh@10 -- # set +x 00:06:21.813 ************************************ 00:06:21.813 START TEST default_locks 00:06:21.813 ************************************ 00:06:21.813 03:38:40 -- common/autotest_common.sh@1104 -- # default_locks 00:06:21.813 03:38:40 -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=2264579 00:06:21.813 03:38:40 -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:21.813 03:38:40 -- event/cpu_locks.sh@47 -- # waitforlisten 2264579 00:06:21.813 03:38:40 -- common/autotest_common.sh@819 -- # '[' -z 2264579 ']' 00:06:21.813 03:38:40 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:21.813 03:38:40 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:21.813 03:38:40 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:21.813 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:21.813 03:38:40 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:21.813 03:38:40 -- common/autotest_common.sh@10 -- # set +x 00:06:21.813 [2024-07-14 03:38:40.644311] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:06:21.813 [2024-07-14 03:38:40.644398] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2264579 ] 00:06:21.813 EAL: No free 2048 kB hugepages reported on node 1 00:06:21.813 [2024-07-14 03:38:40.707501] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.072 [2024-07-14 03:38:40.789656] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:22.072 [2024-07-14 03:38:40.789823] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.637 03:38:41 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:22.637 03:38:41 -- common/autotest_common.sh@852 -- # return 0 00:06:22.637 03:38:41 -- event/cpu_locks.sh@49 -- # locks_exist 2264579 00:06:22.895 03:38:41 -- event/cpu_locks.sh@22 -- # lslocks -p 2264579 00:06:22.895 03:38:41 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:22.895 lslocks: write error 00:06:22.895 03:38:41 -- event/cpu_locks.sh@50 -- # killprocess 2264579 00:06:22.895 03:38:41 -- common/autotest_common.sh@926 -- # '[' -z 2264579 ']' 00:06:22.895 03:38:41 -- common/autotest_common.sh@930 -- # kill -0 2264579 00:06:22.895 03:38:41 -- common/autotest_common.sh@931 -- # uname 00:06:22.895 03:38:41 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:22.895 03:38:41 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2264579 00:06:22.895 03:38:41 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:22.895 03:38:41 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:22.895 03:38:41 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2264579' 00:06:22.895 killing process with pid 2264579 00:06:22.895 03:38:41 -- common/autotest_common.sh@945 -- # kill 2264579 00:06:22.895 03:38:41 -- common/autotest_common.sh@950 -- # wait 2264579 00:06:23.461 03:38:42 -- event/cpu_locks.sh@52 -- # NOT waitforlisten 2264579 00:06:23.461 03:38:42 -- common/autotest_common.sh@640 -- # local es=0 00:06:23.461 03:38:42 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 2264579 00:06:23.461 03:38:42 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:06:23.461 03:38:42 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:23.461 03:38:42 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:06:23.461 03:38:42 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:23.461 03:38:42 -- common/autotest_common.sh@643 -- # waitforlisten 2264579 00:06:23.461 03:38:42 -- common/autotest_common.sh@819 -- # '[' -z 2264579 ']' 00:06:23.461 03:38:42 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:23.461 03:38:42 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:23.461 03:38:42 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:23.461 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
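The default_locks case above starts spdk_tgt with core mask 0x1 and asserts that the target really holds its per-core lock: lslocks lists the locks owned by the PID and grep looks for the spdk_cpu_lock file (the "lslocks: write error" message is most likely lslocks hitting a closed pipe once grep -q has matched, not a test failure). A sketch of that check, reusing the hypothetical wait_for_rpc_socket helper from earlier; the lock-file naming is the convention visible in the log:

    # Sketch: assert that a running spdk_tgt holds its per-core lock file.
    locks_exist() {
        local pid=$1
        lslocks -p "$pid" | grep -q spdk_cpu_lock
    }

    ./build/bin/spdk_tgt -m 0x1 &                 # core mask 0x1 -> core 0 only
    tgt_pid=$!
    wait_for_rpc_socket "$tgt_pid" /var/tmp/spdk.sock
    locks_exist "$tgt_pid" || echo "expected spdk_tgt to hold the core 0 lock"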
00:06:23.461 03:38:42 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:23.461 03:38:42 -- common/autotest_common.sh@10 -- # set +x 00:06:23.461 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 834: kill: (2264579) - No such process 00:06:23.461 ERROR: process (pid: 2264579) is no longer running 00:06:23.461 03:38:42 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:23.461 03:38:42 -- common/autotest_common.sh@852 -- # return 1 00:06:23.461 03:38:42 -- common/autotest_common.sh@643 -- # es=1 00:06:23.461 03:38:42 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:23.461 03:38:42 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:06:23.461 03:38:42 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:23.461 03:38:42 -- event/cpu_locks.sh@54 -- # no_locks 00:06:23.461 03:38:42 -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:23.461 03:38:42 -- event/cpu_locks.sh@26 -- # local lock_files 00:06:23.461 03:38:42 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:23.461 00:06:23.461 real 0m1.641s 00:06:23.461 user 0m1.744s 00:06:23.461 sys 0m0.560s 00:06:23.461 03:38:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:23.461 03:38:42 -- common/autotest_common.sh@10 -- # set +x 00:06:23.461 ************************************ 00:06:23.461 END TEST default_locks 00:06:23.461 ************************************ 00:06:23.461 03:38:42 -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:23.461 03:38:42 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:23.461 03:38:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:23.461 03:38:42 -- common/autotest_common.sh@10 -- # set +x 00:06:23.461 ************************************ 00:06:23.461 START TEST default_locks_via_rpc 00:06:23.461 ************************************ 00:06:23.461 03:38:42 -- common/autotest_common.sh@1104 -- # default_locks_via_rpc 00:06:23.461 03:38:42 -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=2264769 00:06:23.461 03:38:42 -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:23.461 03:38:42 -- event/cpu_locks.sh@63 -- # waitforlisten 2264769 00:06:23.461 03:38:42 -- common/autotest_common.sh@819 -- # '[' -z 2264769 ']' 00:06:23.461 03:38:42 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:23.461 03:38:42 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:23.461 03:38:42 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:23.461 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:23.461 03:38:42 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:23.461 03:38:42 -- common/autotest_common.sh@10 -- # set +x 00:06:23.461 [2024-07-14 03:38:42.310619] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:06:23.461 [2024-07-14 03:38:42.310714] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2264769 ] 00:06:23.461 EAL: No free 2048 kB hugepages reported on node 1 00:06:23.461 [2024-07-14 03:38:42.366861] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.720 [2024-07-14 03:38:42.454335] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:23.720 [2024-07-14 03:38:42.454505] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.653 03:38:43 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:24.653 03:38:43 -- common/autotest_common.sh@852 -- # return 0 00:06:24.653 03:38:43 -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:24.653 03:38:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:24.653 03:38:43 -- common/autotest_common.sh@10 -- # set +x 00:06:24.653 03:38:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:24.653 03:38:43 -- event/cpu_locks.sh@67 -- # no_locks 00:06:24.653 03:38:43 -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:24.653 03:38:43 -- event/cpu_locks.sh@26 -- # local lock_files 00:06:24.653 03:38:43 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:24.653 03:38:43 -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:24.653 03:38:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:24.653 03:38:43 -- common/autotest_common.sh@10 -- # set +x 00:06:24.653 03:38:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:24.653 03:38:43 -- event/cpu_locks.sh@71 -- # locks_exist 2264769 00:06:24.653 03:38:43 -- event/cpu_locks.sh@22 -- # lslocks -p 2264769 00:06:24.653 03:38:43 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:24.653 03:38:43 -- event/cpu_locks.sh@73 -- # killprocess 2264769 00:06:24.653 03:38:43 -- common/autotest_common.sh@926 -- # '[' -z 2264769 ']' 00:06:24.653 03:38:43 -- common/autotest_common.sh@930 -- # kill -0 2264769 00:06:24.653 03:38:43 -- common/autotest_common.sh@931 -- # uname 00:06:24.653 03:38:43 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:24.653 03:38:43 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2264769 00:06:24.910 03:38:43 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:24.910 03:38:43 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:24.910 03:38:43 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2264769' 00:06:24.910 killing process with pid 2264769 00:06:24.911 03:38:43 -- common/autotest_common.sh@945 -- # kill 2264769 00:06:24.911 03:38:43 -- common/autotest_common.sh@950 -- # wait 2264769 00:06:25.169 00:06:25.169 real 0m1.740s 00:06:25.169 user 0m1.873s 00:06:25.169 sys 0m0.555s 00:06:25.169 03:38:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:25.169 03:38:44 -- common/autotest_common.sh@10 -- # set +x 00:06:25.169 ************************************ 00:06:25.169 END TEST default_locks_via_rpc 00:06:25.169 ************************************ 00:06:25.169 03:38:44 -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:25.169 03:38:44 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:25.169 03:38:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:25.169 03:38:44 -- 
common/autotest_common.sh@10 -- # set +x 00:06:25.169 ************************************ 00:06:25.169 START TEST non_locking_app_on_locked_coremask 00:06:25.169 ************************************ 00:06:25.169 03:38:44 -- common/autotest_common.sh@1104 -- # non_locking_app_on_locked_coremask 00:06:25.169 03:38:44 -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=2265069 00:06:25.169 03:38:44 -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:25.169 03:38:44 -- event/cpu_locks.sh@81 -- # waitforlisten 2265069 /var/tmp/spdk.sock 00:06:25.169 03:38:44 -- common/autotest_common.sh@819 -- # '[' -z 2265069 ']' 00:06:25.169 03:38:44 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:25.169 03:38:44 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:25.169 03:38:44 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:25.169 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:25.169 03:38:44 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:25.169 03:38:44 -- common/autotest_common.sh@10 -- # set +x 00:06:25.169 [2024-07-14 03:38:44.081072] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:06:25.169 [2024-07-14 03:38:44.081174] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2265069 ] 00:06:25.428 EAL: No free 2048 kB hugepages reported on node 1 00:06:25.428 [2024-07-14 03:38:44.140439] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.428 [2024-07-14 03:38:44.225931] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:25.428 [2024-07-14 03:38:44.226099] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.362 03:38:45 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:26.362 03:38:45 -- common/autotest_common.sh@852 -- # return 0 00:06:26.362 03:38:45 -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=2265118 00:06:26.362 03:38:45 -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:26.362 03:38:45 -- event/cpu_locks.sh@85 -- # waitforlisten 2265118 /var/tmp/spdk2.sock 00:06:26.362 03:38:45 -- common/autotest_common.sh@819 -- # '[' -z 2265118 ']' 00:06:26.362 03:38:45 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:26.362 03:38:45 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:26.362 03:38:45 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:26.362 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:26.362 03:38:45 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:26.362 03:38:45 -- common/autotest_common.sh@10 -- # set +x 00:06:26.362 [2024-07-14 03:38:45.060877] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:06:26.362 [2024-07-14 03:38:45.060961] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2265118 ] 00:06:26.362 EAL: No free 2048 kB hugepages reported on node 1 00:06:26.362 [2024-07-14 03:38:45.157193] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:26.362 [2024-07-14 03:38:45.157227] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.621 [2024-07-14 03:38:45.340337] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:26.621 [2024-07-14 03:38:45.340513] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.186 03:38:45 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:27.186 03:38:45 -- common/autotest_common.sh@852 -- # return 0 00:06:27.186 03:38:45 -- event/cpu_locks.sh@87 -- # locks_exist 2265069 00:06:27.186 03:38:45 -- event/cpu_locks.sh@22 -- # lslocks -p 2265069 00:06:27.186 03:38:45 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:27.755 lslocks: write error 00:06:27.755 03:38:46 -- event/cpu_locks.sh@89 -- # killprocess 2265069 00:06:27.755 03:38:46 -- common/autotest_common.sh@926 -- # '[' -z 2265069 ']' 00:06:27.755 03:38:46 -- common/autotest_common.sh@930 -- # kill -0 2265069 00:06:27.755 03:38:46 -- common/autotest_common.sh@931 -- # uname 00:06:27.755 03:38:46 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:27.755 03:38:46 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2265069 00:06:27.755 03:38:46 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:27.755 03:38:46 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:27.755 03:38:46 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2265069' 00:06:27.755 killing process with pid 2265069 00:06:27.755 03:38:46 -- common/autotest_common.sh@945 -- # kill 2265069 00:06:27.755 03:38:46 -- common/autotest_common.sh@950 -- # wait 2265069 00:06:28.695 03:38:47 -- event/cpu_locks.sh@90 -- # killprocess 2265118 00:06:28.695 03:38:47 -- common/autotest_common.sh@926 -- # '[' -z 2265118 ']' 00:06:28.695 03:38:47 -- common/autotest_common.sh@930 -- # kill -0 2265118 00:06:28.695 03:38:47 -- common/autotest_common.sh@931 -- # uname 00:06:28.695 03:38:47 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:28.695 03:38:47 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2265118 00:06:28.695 03:38:47 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:28.695 03:38:47 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:28.695 03:38:47 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2265118' 00:06:28.695 killing process with pid 2265118 00:06:28.695 03:38:47 -- common/autotest_common.sh@945 -- # kill 2265118 00:06:28.695 03:38:47 -- common/autotest_common.sh@950 -- # wait 2265118 00:06:28.954 00:06:28.954 real 0m3.734s 00:06:28.954 user 0m4.011s 00:06:28.954 sys 0m1.112s 00:06:28.954 03:38:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:28.954 03:38:47 -- common/autotest_common.sh@10 -- # set +x 00:06:28.954 ************************************ 00:06:28.954 END TEST non_locking_app_on_locked_coremask 00:06:28.954 ************************************ 00:06:28.954 03:38:47 -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask 
locking_app_on_unlocked_coremask 00:06:28.954 03:38:47 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:28.954 03:38:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:28.954 03:38:47 -- common/autotest_common.sh@10 -- # set +x 00:06:28.954 ************************************ 00:06:28.954 START TEST locking_app_on_unlocked_coremask 00:06:28.954 ************************************ 00:06:28.954 03:38:47 -- common/autotest_common.sh@1104 -- # locking_app_on_unlocked_coremask 00:06:28.954 03:38:47 -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=2265522 00:06:28.954 03:38:47 -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:28.954 03:38:47 -- event/cpu_locks.sh@99 -- # waitforlisten 2265522 /var/tmp/spdk.sock 00:06:28.954 03:38:47 -- common/autotest_common.sh@819 -- # '[' -z 2265522 ']' 00:06:28.954 03:38:47 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:28.954 03:38:47 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:28.954 03:38:47 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:28.954 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:28.954 03:38:47 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:28.954 03:38:47 -- common/autotest_common.sh@10 -- # set +x 00:06:28.954 [2024-07-14 03:38:47.844620] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:06:28.954 [2024-07-14 03:38:47.844712] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2265522 ] 00:06:28.954 EAL: No free 2048 kB hugepages reported on node 1 00:06:29.214 [2024-07-14 03:38:47.908008] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:29.214 [2024-07-14 03:38:47.908047] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.214 [2024-07-14 03:38:47.994122] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:29.214 [2024-07-14 03:38:47.994307] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.152 03:38:48 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:30.152 03:38:48 -- common/autotest_common.sh@852 -- # return 0 00:06:30.152 03:38:48 -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=2265656 00:06:30.152 03:38:48 -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:30.152 03:38:48 -- event/cpu_locks.sh@103 -- # waitforlisten 2265656 /var/tmp/spdk2.sock 00:06:30.152 03:38:48 -- common/autotest_common.sh@819 -- # '[' -z 2265656 ']' 00:06:30.152 03:38:48 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:30.152 03:38:48 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:30.152 03:38:48 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:30.152 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:30.152 03:38:48 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:30.152 03:38:48 -- common/autotest_common.sh@10 -- # set +x 00:06:30.152 [2024-07-14 03:38:48.790724] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:06:30.152 [2024-07-14 03:38:48.790803] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2265656 ] 00:06:30.152 EAL: No free 2048 kB hugepages reported on node 1 00:06:30.152 [2024-07-14 03:38:48.887490] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.152 [2024-07-14 03:38:49.065375] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:30.152 [2024-07-14 03:38:49.065573] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.091 03:38:49 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:31.091 03:38:49 -- common/autotest_common.sh@852 -- # return 0 00:06:31.091 03:38:49 -- event/cpu_locks.sh@105 -- # locks_exist 2265656 00:06:31.091 03:38:49 -- event/cpu_locks.sh@22 -- # lslocks -p 2265656 00:06:31.091 03:38:49 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:31.684 lslocks: write error 00:06:31.684 03:38:50 -- event/cpu_locks.sh@107 -- # killprocess 2265522 00:06:31.684 03:38:50 -- common/autotest_common.sh@926 -- # '[' -z 2265522 ']' 00:06:31.684 03:38:50 -- common/autotest_common.sh@930 -- # kill -0 2265522 00:06:31.684 03:38:50 -- common/autotest_common.sh@931 -- # uname 00:06:31.684 03:38:50 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:31.684 03:38:50 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2265522 00:06:31.684 03:38:50 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:31.684 03:38:50 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:31.684 03:38:50 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2265522' 00:06:31.684 killing process with pid 2265522 00:06:31.684 03:38:50 -- common/autotest_common.sh@945 -- # kill 2265522 00:06:31.684 03:38:50 -- common/autotest_common.sh@950 -- # wait 2265522 00:06:32.253 03:38:51 -- event/cpu_locks.sh@108 -- # killprocess 2265656 00:06:32.253 03:38:51 -- common/autotest_common.sh@926 -- # '[' -z 2265656 ']' 00:06:32.253 03:38:51 -- common/autotest_common.sh@930 -- # kill -0 2265656 00:06:32.253 03:38:51 -- common/autotest_common.sh@931 -- # uname 00:06:32.253 03:38:51 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:32.253 03:38:51 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2265656 00:06:32.253 03:38:51 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:32.253 03:38:51 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:32.253 03:38:51 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2265656' 00:06:32.253 killing process with pid 2265656 00:06:32.253 03:38:51 -- common/autotest_common.sh@945 -- # kill 2265656 00:06:32.514 03:38:51 -- common/autotest_common.sh@950 -- # wait 2265656 00:06:32.773 00:06:32.773 real 0m3.823s 00:06:32.773 user 0m4.116s 00:06:32.773 sys 0m1.121s 00:06:32.773 03:38:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:32.773 03:38:51 -- common/autotest_common.sh@10 -- # set +x 00:06:32.773 ************************************ 00:06:32.773 END TEST locking_app_on_unlocked_coremask 
00:06:32.773 ************************************ 00:06:32.773 03:38:51 -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:32.773 03:38:51 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:32.773 03:38:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:32.773 03:38:51 -- common/autotest_common.sh@10 -- # set +x 00:06:32.773 ************************************ 00:06:32.773 START TEST locking_app_on_locked_coremask 00:06:32.773 ************************************ 00:06:32.773 03:38:51 -- common/autotest_common.sh@1104 -- # locking_app_on_locked_coremask 00:06:32.773 03:38:51 -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=2265970 00:06:32.773 03:38:51 -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:32.773 03:38:51 -- event/cpu_locks.sh@116 -- # waitforlisten 2265970 /var/tmp/spdk.sock 00:06:32.773 03:38:51 -- common/autotest_common.sh@819 -- # '[' -z 2265970 ']' 00:06:32.773 03:38:51 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:32.773 03:38:51 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:32.773 03:38:51 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:32.773 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:32.773 03:38:51 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:32.773 03:38:51 -- common/autotest_common.sh@10 -- # set +x 00:06:32.773 [2024-07-14 03:38:51.694422] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:06:32.773 [2024-07-14 03:38:51.694521] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2265970 ] 00:06:33.032 EAL: No free 2048 kB hugepages reported on node 1 00:06:33.032 [2024-07-14 03:38:51.754890] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.032 [2024-07-14 03:38:51.844331] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:33.032 [2024-07-14 03:38:51.844495] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.971 03:38:52 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:33.971 03:38:52 -- common/autotest_common.sh@852 -- # return 0 00:06:33.971 03:38:52 -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=2266112 00:06:33.971 03:38:52 -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:33.971 03:38:52 -- event/cpu_locks.sh@120 -- # NOT waitforlisten 2266112 /var/tmp/spdk2.sock 00:06:33.971 03:38:52 -- common/autotest_common.sh@640 -- # local es=0 00:06:33.971 03:38:52 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 2266112 /var/tmp/spdk2.sock 00:06:33.971 03:38:52 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:06:33.971 03:38:52 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:33.971 03:38:52 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:06:33.971 03:38:52 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:33.971 03:38:52 -- common/autotest_common.sh@643 -- # waitforlisten 2266112 /var/tmp/spdk2.sock 00:06:33.971 03:38:52 -- common/autotest_common.sh@819 -- 
# '[' -z 2266112 ']' 00:06:33.971 03:38:52 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:33.971 03:38:52 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:33.971 03:38:52 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:33.971 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:33.971 03:38:52 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:33.971 03:38:52 -- common/autotest_common.sh@10 -- # set +x 00:06:33.971 [2024-07-14 03:38:52.678481] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:06:33.972 [2024-07-14 03:38:52.678565] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2266112 ] 00:06:33.972 EAL: No free 2048 kB hugepages reported on node 1 00:06:33.972 [2024-07-14 03:38:52.768560] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 2265970 has claimed it. 00:06:33.972 [2024-07-14 03:38:52.768609] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:34.539 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 834: kill: (2266112) - No such process 00:06:34.539 ERROR: process (pid: 2266112) is no longer running 00:06:34.539 03:38:53 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:34.539 03:38:53 -- common/autotest_common.sh@852 -- # return 1 00:06:34.539 03:38:53 -- common/autotest_common.sh@643 -- # es=1 00:06:34.539 03:38:53 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:34.539 03:38:53 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:06:34.539 03:38:53 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:34.539 03:38:53 -- event/cpu_locks.sh@122 -- # locks_exist 2265970 00:06:34.539 03:38:53 -- event/cpu_locks.sh@22 -- # lslocks -p 2265970 00:06:34.539 03:38:53 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:35.105 lslocks: write error 00:06:35.105 03:38:53 -- event/cpu_locks.sh@124 -- # killprocess 2265970 00:06:35.105 03:38:53 -- common/autotest_common.sh@926 -- # '[' -z 2265970 ']' 00:06:35.105 03:38:53 -- common/autotest_common.sh@930 -- # kill -0 2265970 00:06:35.105 03:38:53 -- common/autotest_common.sh@931 -- # uname 00:06:35.105 03:38:53 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:35.105 03:38:53 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2265970 00:06:35.105 03:38:53 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:35.105 03:38:53 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:35.105 03:38:53 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2265970' 00:06:35.105 killing process with pid 2265970 00:06:35.105 03:38:53 -- common/autotest_common.sh@945 -- # kill 2265970 00:06:35.105 03:38:53 -- common/autotest_common.sh@950 -- # wait 2265970 00:06:35.364 00:06:35.364 real 0m2.631s 00:06:35.364 user 0m2.985s 00:06:35.364 sys 0m0.689s 00:06:35.364 03:38:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:35.364 03:38:54 -- common/autotest_common.sh@10 -- # set +x 00:06:35.364 ************************************ 00:06:35.364 END TEST locking_app_on_locked_coremask 00:06:35.364 ************************************ 00:06:35.364 
03:38:54 -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:35.364 03:38:54 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:35.364 03:38:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:35.364 03:38:54 -- common/autotest_common.sh@10 -- # set +x 00:06:35.364 ************************************ 00:06:35.364 START TEST locking_overlapped_coremask 00:06:35.364 ************************************ 00:06:35.364 03:38:54 -- common/autotest_common.sh@1104 -- # locking_overlapped_coremask 00:06:35.364 03:38:54 -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=2266411 00:06:35.364 03:38:54 -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:06:35.364 03:38:54 -- event/cpu_locks.sh@133 -- # waitforlisten 2266411 /var/tmp/spdk.sock 00:06:35.364 03:38:54 -- common/autotest_common.sh@819 -- # '[' -z 2266411 ']' 00:06:35.364 03:38:54 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:35.364 03:38:54 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:35.364 03:38:54 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:35.364 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:35.364 03:38:54 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:35.364 03:38:54 -- common/autotest_common.sh@10 -- # set +x 00:06:35.624 [2024-07-14 03:38:54.349617] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:06:35.624 [2024-07-14 03:38:54.349712] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2266411 ] 00:06:35.624 EAL: No free 2048 kB hugepages reported on node 1 00:06:35.624 [2024-07-14 03:38:54.415359] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:35.624 [2024-07-14 03:38:54.510043] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:35.624 [2024-07-14 03:38:54.512889] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:35.624 [2024-07-14 03:38:54.512951] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:35.624 [2024-07-14 03:38:54.512970] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.560 03:38:55 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:36.560 03:38:55 -- common/autotest_common.sh@852 -- # return 0 00:06:36.560 03:38:55 -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=2266551 00:06:36.560 03:38:55 -- event/cpu_locks.sh@137 -- # NOT waitforlisten 2266551 /var/tmp/spdk2.sock 00:06:36.560 03:38:55 -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:36.560 03:38:55 -- common/autotest_common.sh@640 -- # local es=0 00:06:36.560 03:38:55 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 2266551 /var/tmp/spdk2.sock 00:06:36.560 03:38:55 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:06:36.560 03:38:55 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:36.560 03:38:55 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:06:36.560 03:38:55 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:36.560 03:38:55 
-- common/autotest_common.sh@643 -- # waitforlisten 2266551 /var/tmp/spdk2.sock 00:06:36.560 03:38:55 -- common/autotest_common.sh@819 -- # '[' -z 2266551 ']' 00:06:36.560 03:38:55 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:36.560 03:38:55 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:36.560 03:38:55 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:36.560 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:36.560 03:38:55 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:36.560 03:38:55 -- common/autotest_common.sh@10 -- # set +x 00:06:36.560 [2024-07-14 03:38:55.383136] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:06:36.560 [2024-07-14 03:38:55.383236] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2266551 ] 00:06:36.560 EAL: No free 2048 kB hugepages reported on node 1 00:06:36.560 [2024-07-14 03:38:55.472478] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2266411 has claimed it. 00:06:36.560 [2024-07-14 03:38:55.472524] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:37.129 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 834: kill: (2266551) - No such process 00:06:37.129 ERROR: process (pid: 2266551) is no longer running 00:06:37.129 03:38:56 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:37.129 03:38:56 -- common/autotest_common.sh@852 -- # return 1 00:06:37.129 03:38:56 -- common/autotest_common.sh@643 -- # es=1 00:06:37.129 03:38:56 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:37.129 03:38:56 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:06:37.129 03:38:56 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:37.129 03:38:56 -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:37.129 03:38:56 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:37.129 03:38:56 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:37.129 03:38:56 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:37.129 03:38:56 -- event/cpu_locks.sh@141 -- # killprocess 2266411 00:06:37.129 03:38:56 -- common/autotest_common.sh@926 -- # '[' -z 2266411 ']' 00:06:37.129 03:38:56 -- common/autotest_common.sh@930 -- # kill -0 2266411 00:06:37.129 03:38:56 -- common/autotest_common.sh@931 -- # uname 00:06:37.388 03:38:56 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:37.388 03:38:56 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2266411 00:06:37.388 03:38:56 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:37.388 03:38:56 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:37.388 03:38:56 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2266411' 00:06:37.388 killing process with pid 2266411 00:06:37.388 03:38:56 -- common/autotest_common.sh@945 -- # kill 2266411 00:06:37.388 03:38:56 
-- common/autotest_common.sh@950 -- # wait 2266411 00:06:37.648 00:06:37.648 real 0m2.178s 00:06:37.648 user 0m6.262s 00:06:37.648 sys 0m0.487s 00:06:37.648 03:38:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:37.648 03:38:56 -- common/autotest_common.sh@10 -- # set +x 00:06:37.648 ************************************ 00:06:37.648 END TEST locking_overlapped_coremask 00:06:37.648 ************************************ 00:06:37.648 03:38:56 -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:37.648 03:38:56 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:37.648 03:38:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:37.648 03:38:56 -- common/autotest_common.sh@10 -- # set +x 00:06:37.648 ************************************ 00:06:37.648 START TEST locking_overlapped_coremask_via_rpc 00:06:37.648 ************************************ 00:06:37.648 03:38:56 -- common/autotest_common.sh@1104 -- # locking_overlapped_coremask_via_rpc 00:06:37.648 03:38:56 -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=2266718 00:06:37.648 03:38:56 -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:37.648 03:38:56 -- event/cpu_locks.sh@149 -- # waitforlisten 2266718 /var/tmp/spdk.sock 00:06:37.648 03:38:56 -- common/autotest_common.sh@819 -- # '[' -z 2266718 ']' 00:06:37.648 03:38:56 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:37.648 03:38:56 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:37.648 03:38:56 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:37.648 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:37.648 03:38:56 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:37.648 03:38:56 -- common/autotest_common.sh@10 -- # set +x 00:06:37.648 [2024-07-14 03:38:56.555304] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:06:37.648 [2024-07-14 03:38:56.555387] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2266718 ] 00:06:37.648 EAL: No free 2048 kB hugepages reported on node 1 00:06:37.907 [2024-07-14 03:38:56.614362] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:37.907 [2024-07-14 03:38:56.614402] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:37.907 [2024-07-14 03:38:56.702081] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:37.907 [2024-07-14 03:38:56.702276] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:37.907 [2024-07-14 03:38:56.702334] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:37.907 [2024-07-14 03:38:56.702337] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.845 03:38:57 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:38.845 03:38:57 -- common/autotest_common.sh@852 -- # return 0 00:06:38.845 03:38:57 -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=2266855 00:06:38.845 03:38:57 -- event/cpu_locks.sh@153 -- # waitforlisten 2266855 /var/tmp/spdk2.sock 00:06:38.845 03:38:57 -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:38.845 03:38:57 -- common/autotest_common.sh@819 -- # '[' -z 2266855 ']' 00:06:38.845 03:38:57 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:38.845 03:38:57 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:38.845 03:38:57 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:38.845 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:38.845 03:38:57 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:38.845 03:38:57 -- common/autotest_common.sh@10 -- # set +x 00:06:38.845 [2024-07-14 03:38:57.552725] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:06:38.845 [2024-07-14 03:38:57.552802] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2266855 ] 00:06:38.845 EAL: No free 2048 kB hugepages reported on node 1 00:06:38.845 [2024-07-14 03:38:57.644541] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:38.845 [2024-07-14 03:38:57.644576] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:39.105 [2024-07-14 03:38:57.814259] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:39.105 [2024-07-14 03:38:57.814489] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:39.105 [2024-07-14 03:38:57.817921] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:06:39.105 [2024-07-14 03:38:57.817924] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:39.670 03:38:58 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:39.670 03:38:58 -- common/autotest_common.sh@852 -- # return 0 00:06:39.670 03:38:58 -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:39.670 03:38:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:39.670 03:38:58 -- common/autotest_common.sh@10 -- # set +x 00:06:39.670 03:38:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:39.670 03:38:58 -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:39.670 03:38:58 -- common/autotest_common.sh@640 -- # local es=0 00:06:39.670 03:38:58 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:39.670 03:38:58 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:06:39.670 03:38:58 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:39.670 03:38:58 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:06:39.670 03:38:58 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:39.670 03:38:58 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:39.670 03:38:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:39.670 03:38:58 -- common/autotest_common.sh@10 -- # set +x 00:06:39.670 [2024-07-14 03:38:58.502957] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2266718 has claimed it. 00:06:39.670 request: 00:06:39.670 { 00:06:39.670 "method": "framework_enable_cpumask_locks", 00:06:39.670 "req_id": 1 00:06:39.670 } 00:06:39.670 Got JSON-RPC error response 00:06:39.670 response: 00:06:39.670 { 00:06:39.670 "code": -32603, 00:06:39.670 "message": "Failed to claim CPU core: 2" 00:06:39.670 } 00:06:39.670 03:38:58 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:06:39.670 03:38:58 -- common/autotest_common.sh@643 -- # es=1 00:06:39.670 03:38:58 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:39.670 03:38:58 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:06:39.670 03:38:58 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:39.670 03:38:58 -- event/cpu_locks.sh@158 -- # waitforlisten 2266718 /var/tmp/spdk.sock 00:06:39.670 03:38:58 -- common/autotest_common.sh@819 -- # '[' -z 2266718 ']' 00:06:39.670 03:38:58 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:39.670 03:38:58 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:39.670 03:38:58 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:39.670 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:39.670 03:38:58 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:39.670 03:38:58 -- common/autotest_common.sh@10 -- # set +x 00:06:39.928 03:38:58 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:39.928 03:38:58 -- common/autotest_common.sh@852 -- # return 0 00:06:39.928 03:38:58 -- event/cpu_locks.sh@159 -- # waitforlisten 2266855 /var/tmp/spdk2.sock 00:06:39.928 03:38:58 -- common/autotest_common.sh@819 -- # '[' -z 2266855 ']' 00:06:39.928 03:38:58 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:39.928 03:38:58 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:39.928 03:38:58 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:39.928 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:39.928 03:38:58 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:39.928 03:38:58 -- common/autotest_common.sh@10 -- # set +x 00:06:40.187 03:38:58 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:40.187 03:38:58 -- common/autotest_common.sh@852 -- # return 0 00:06:40.187 03:38:58 -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:40.187 03:38:58 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:40.187 03:38:58 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:40.187 03:38:58 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:40.187 00:06:40.187 real 0m2.486s 00:06:40.187 user 0m1.202s 00:06:40.187 sys 0m0.213s 00:06:40.187 03:38:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:40.187 03:38:58 -- common/autotest_common.sh@10 -- # set +x 00:06:40.187 ************************************ 00:06:40.187 END TEST locking_overlapped_coremask_via_rpc 00:06:40.187 ************************************ 00:06:40.187 03:38:59 -- event/cpu_locks.sh@174 -- # cleanup 00:06:40.187 03:38:59 -- event/cpu_locks.sh@15 -- # [[ -z 2266718 ]] 00:06:40.187 03:38:59 -- event/cpu_locks.sh@15 -- # killprocess 2266718 00:06:40.187 03:38:59 -- common/autotest_common.sh@926 -- # '[' -z 2266718 ']' 00:06:40.187 03:38:59 -- common/autotest_common.sh@930 -- # kill -0 2266718 00:06:40.187 03:38:59 -- common/autotest_common.sh@931 -- # uname 00:06:40.187 03:38:59 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:40.187 03:38:59 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2266718 00:06:40.187 03:38:59 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:40.187 03:38:59 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:40.187 03:38:59 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2266718' 00:06:40.187 killing process with pid 2266718 00:06:40.187 03:38:59 -- common/autotest_common.sh@945 -- # kill 2266718 00:06:40.187 03:38:59 -- common/autotest_common.sh@950 -- # wait 2266718 00:06:40.755 03:38:59 -- event/cpu_locks.sh@16 -- # [[ -z 2266855 ]] 00:06:40.755 03:38:59 -- event/cpu_locks.sh@16 -- # killprocess 2266855 00:06:40.755 03:38:59 -- common/autotest_common.sh@926 -- # '[' -z 2266855 ']' 00:06:40.755 03:38:59 -- common/autotest_common.sh@930 -- # kill -0 2266855 00:06:40.755 03:38:59 -- common/autotest_common.sh@931 -- # uname 
00:06:40.755 03:38:59 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:40.755 03:38:59 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2266855 00:06:40.755 03:38:59 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:06:40.755 03:38:59 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:06:40.755 03:38:59 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2266855' 00:06:40.755 killing process with pid 2266855 00:06:40.755 03:38:59 -- common/autotest_common.sh@945 -- # kill 2266855 00:06:40.755 03:38:59 -- common/autotest_common.sh@950 -- # wait 2266855 00:06:41.013 03:38:59 -- event/cpu_locks.sh@18 -- # rm -f 00:06:41.013 03:38:59 -- event/cpu_locks.sh@1 -- # cleanup 00:06:41.013 03:38:59 -- event/cpu_locks.sh@15 -- # [[ -z 2266718 ]] 00:06:41.013 03:38:59 -- event/cpu_locks.sh@15 -- # killprocess 2266718 00:06:41.013 03:38:59 -- common/autotest_common.sh@926 -- # '[' -z 2266718 ']' 00:06:41.013 03:38:59 -- common/autotest_common.sh@930 -- # kill -0 2266718 00:06:41.013 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (2266718) - No such process 00:06:41.013 03:38:59 -- common/autotest_common.sh@953 -- # echo 'Process with pid 2266718 is not found' 00:06:41.013 Process with pid 2266718 is not found 00:06:41.013 03:38:59 -- event/cpu_locks.sh@16 -- # [[ -z 2266855 ]] 00:06:41.013 03:38:59 -- event/cpu_locks.sh@16 -- # killprocess 2266855 00:06:41.013 03:38:59 -- common/autotest_common.sh@926 -- # '[' -z 2266855 ']' 00:06:41.013 03:38:59 -- common/autotest_common.sh@930 -- # kill -0 2266855 00:06:41.013 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (2266855) - No such process 00:06:41.013 03:38:59 -- common/autotest_common.sh@953 -- # echo 'Process with pid 2266855 is not found' 00:06:41.013 Process with pid 2266855 is not found 00:06:41.013 03:38:59 -- event/cpu_locks.sh@18 -- # rm -f 00:06:41.013 00:06:41.013 real 0m19.323s 00:06:41.013 user 0m34.398s 00:06:41.013 sys 0m5.574s 00:06:41.013 03:38:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:41.013 03:38:59 -- common/autotest_common.sh@10 -- # set +x 00:06:41.013 ************************************ 00:06:41.013 END TEST cpu_locks 00:06:41.013 ************************************ 00:06:41.013 00:06:41.013 real 0m45.847s 00:06:41.013 user 1m27.896s 00:06:41.013 sys 0m9.388s 00:06:41.013 03:38:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:41.013 03:38:59 -- common/autotest_common.sh@10 -- # set +x 00:06:41.013 ************************************ 00:06:41.013 END TEST event 00:06:41.013 ************************************ 00:06:41.013 03:38:59 -- spdk/autotest.sh@188 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:41.013 03:38:59 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:41.013 03:38:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:41.013 03:38:59 -- common/autotest_common.sh@10 -- # set +x 00:06:41.013 ************************************ 00:06:41.013 START TEST thread 00:06:41.013 ************************************ 00:06:41.013 03:38:59 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:41.013 * Looking for test storage... 
00:06:41.272 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:06:41.272 03:38:59 -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:41.272 03:38:59 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:06:41.272 03:38:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:41.272 03:38:59 -- common/autotest_common.sh@10 -- # set +x 00:06:41.272 ************************************ 00:06:41.272 START TEST thread_poller_perf 00:06:41.272 ************************************ 00:06:41.272 03:38:59 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:41.272 [2024-07-14 03:38:59.972622] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:06:41.272 [2024-07-14 03:38:59.972702] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2267232 ] 00:06:41.272 EAL: No free 2048 kB hugepages reported on node 1 00:06:41.272 [2024-07-14 03:39:00.034994] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.272 [2024-07-14 03:39:00.121525] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.272 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:42.646 ====================================== 00:06:42.646 busy:2711944394 (cyc) 00:06:42.647 total_run_count: 285000 00:06:42.647 tsc_hz: 2700000000 (cyc) 00:06:42.647 ====================================== 00:06:42.647 poller_cost: 9515 (cyc), 3524 (nsec) 00:06:42.647 00:06:42.647 real 0m1.248s 00:06:42.647 user 0m1.163s 00:06:42.647 sys 0m0.078s 00:06:42.647 03:39:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:42.647 03:39:01 -- common/autotest_common.sh@10 -- # set +x 00:06:42.647 ************************************ 00:06:42.647 END TEST thread_poller_perf 00:06:42.647 ************************************ 00:06:42.647 03:39:01 -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:42.647 03:39:01 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:06:42.647 03:39:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:42.647 03:39:01 -- common/autotest_common.sh@10 -- # set +x 00:06:42.647 ************************************ 00:06:42.647 START TEST thread_poller_perf 00:06:42.647 ************************************ 00:06:42.647 03:39:01 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:42.647 [2024-07-14 03:39:01.247590] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:06:42.647 [2024-07-14 03:39:01.247677] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2267466 ] 00:06:42.647 EAL: No free 2048 kB hugepages reported on node 1 00:06:42.647 [2024-07-14 03:39:01.309753] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.647 [2024-07-14 03:39:01.399293] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.647 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:06:43.578 ====================================== 00:06:43.578 busy:2703764301 (cyc) 00:06:43.578 total_run_count: 3834000 00:06:43.578 tsc_hz: 2700000000 (cyc) 00:06:43.578 ====================================== 00:06:43.578 poller_cost: 705 (cyc), 261 (nsec) 00:06:43.578 00:06:43.578 real 0m1.249s 00:06:43.578 user 0m1.156s 00:06:43.578 sys 0m0.086s 00:06:43.578 03:39:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:43.578 03:39:02 -- common/autotest_common.sh@10 -- # set +x 00:06:43.578 ************************************ 00:06:43.578 END TEST thread_poller_perf 00:06:43.578 ************************************ 00:06:43.578 03:39:02 -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:43.578 00:06:43.578 real 0m2.593s 00:06:43.578 user 0m2.360s 00:06:43.578 sys 0m0.232s 00:06:43.578 03:39:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:43.578 03:39:02 -- common/autotest_common.sh@10 -- # set +x 00:06:43.578 ************************************ 00:06:43.578 END TEST thread 00:06:43.578 ************************************ 00:06:43.836 03:39:02 -- spdk/autotest.sh@189 -- # run_test accel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:06:43.836 03:39:02 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:43.836 03:39:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:43.836 03:39:02 -- common/autotest_common.sh@10 -- # set +x 00:06:43.836 ************************************ 00:06:43.836 START TEST accel 00:06:43.836 ************************************ 00:06:43.836 03:39:02 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:06:43.836 * Looking for test storage... 00:06:43.836 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:06:43.836 03:39:02 -- accel/accel.sh@73 -- # declare -A expected_opcs 00:06:43.836 03:39:02 -- accel/accel.sh@74 -- # get_expected_opcs 00:06:43.836 03:39:02 -- accel/accel.sh@57 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:43.836 03:39:02 -- accel/accel.sh@59 -- # spdk_tgt_pid=2267694 00:06:43.836 03:39:02 -- accel/accel.sh@60 -- # waitforlisten 2267694 00:06:43.836 03:39:02 -- common/autotest_common.sh@819 -- # '[' -z 2267694 ']' 00:06:43.836 03:39:02 -- accel/accel.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:06:43.836 03:39:02 -- accel/accel.sh@58 -- # build_accel_config 00:06:43.836 03:39:02 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:43.836 03:39:02 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:43.836 03:39:02 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:43.836 03:39:02 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:43.836 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:43.836 03:39:02 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:43.836 03:39:02 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:43.836 03:39:02 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:43.836 03:39:02 -- common/autotest_common.sh@10 -- # set +x 00:06:43.836 03:39:02 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:43.836 03:39:02 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:43.836 03:39:02 -- accel/accel.sh@41 -- # local IFS=, 00:06:43.836 03:39:02 -- accel/accel.sh@42 -- # jq -r . 00:06:43.836 [2024-07-14 03:39:02.630418] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:06:43.836 [2024-07-14 03:39:02.630495] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2267694 ] 00:06:43.836 EAL: No free 2048 kB hugepages reported on node 1 00:06:43.836 [2024-07-14 03:39:02.692037] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.095 [2024-07-14 03:39:02.780299] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:44.095 [2024-07-14 03:39:02.780472] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.662 03:39:03 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:44.662 03:39:03 -- common/autotest_common.sh@852 -- # return 0 00:06:44.662 03:39:03 -- accel/accel.sh@62 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:06:44.662 03:39:03 -- accel/accel.sh@62 -- # rpc_cmd accel_get_opc_assignments 00:06:44.662 03:39:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:44.662 03:39:03 -- common/autotest_common.sh@10 -- # set +x 00:06:44.662 03:39:03 -- accel/accel.sh@62 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:06:44.662 03:39:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:44.662 03:39:03 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:44.662 03:39:03 -- accel/accel.sh@64 -- # IFS== 00:06:44.662 03:39:03 -- accel/accel.sh@64 -- # read -r opc module 00:06:44.662 03:39:03 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:44.662 03:39:03 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:44.662 03:39:03 -- accel/accel.sh@64 -- # IFS== 00:06:44.662 03:39:03 -- accel/accel.sh@64 -- # read -r opc module 00:06:44.662 03:39:03 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:44.662 03:39:03 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:44.662 03:39:03 -- accel/accel.sh@64 -- # IFS== 00:06:44.662 03:39:03 -- accel/accel.sh@64 -- # read -r opc module 00:06:44.662 03:39:03 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:44.662 03:39:03 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:44.662 03:39:03 -- accel/accel.sh@64 -- # IFS== 00:06:44.662 03:39:03 -- accel/accel.sh@64 -- # read -r opc module 00:06:44.662 03:39:03 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:44.662 03:39:03 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:44.662 03:39:03 -- accel/accel.sh@64 -- # IFS== 00:06:44.662 03:39:03 -- accel/accel.sh@64 -- # read -r opc module 00:06:44.662 03:39:03 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:44.662 03:39:03 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:44.662 03:39:03 -- accel/accel.sh@64 -- # IFS== 00:06:44.662 03:39:03 -- accel/accel.sh@64 -- # read -r opc module 00:06:44.662 03:39:03 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:44.662 03:39:03 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:44.662 03:39:03 -- accel/accel.sh@64 -- # IFS== 00:06:44.662 03:39:03 -- accel/accel.sh@64 -- # read -r opc module 00:06:44.662 03:39:03 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:44.662 03:39:03 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:44.662 03:39:03 -- accel/accel.sh@64 -- # IFS== 00:06:44.662 03:39:03 -- accel/accel.sh@64 -- # read -r opc module 00:06:44.662 03:39:03 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:44.662 03:39:03 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:44.662 03:39:03 -- accel/accel.sh@64 -- # IFS== 00:06:44.662 03:39:03 -- accel/accel.sh@64 -- # read -r opc module 00:06:44.662 03:39:03 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:44.662 03:39:03 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:44.662 03:39:03 -- accel/accel.sh@64 -- # IFS== 00:06:44.662 03:39:03 -- accel/accel.sh@64 -- # read -r opc module 00:06:44.662 03:39:03 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:44.662 03:39:03 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:44.662 03:39:03 -- accel/accel.sh@64 -- # IFS== 00:06:44.662 03:39:03 -- accel/accel.sh@64 -- # read -r opc module 00:06:44.662 03:39:03 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:44.662 03:39:03 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:44.662 03:39:03 -- accel/accel.sh@64 -- # IFS== 00:06:44.662 03:39:03 -- accel/accel.sh@64 -- # read -r opc module 00:06:44.662 03:39:03 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:44.662 03:39:03 -- accel/accel.sh@63 -- # for opc_opt in 
"${exp_opcs[@]}" 00:06:44.662 03:39:03 -- accel/accel.sh@64 -- # IFS== 00:06:44.662 03:39:03 -- accel/accel.sh@64 -- # read -r opc module 00:06:44.662 03:39:03 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:44.662 03:39:03 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:44.662 03:39:03 -- accel/accel.sh@64 -- # IFS== 00:06:44.663 03:39:03 -- accel/accel.sh@64 -- # read -r opc module 00:06:44.663 03:39:03 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:44.663 03:39:03 -- accel/accel.sh@67 -- # killprocess 2267694 00:06:44.663 03:39:03 -- common/autotest_common.sh@926 -- # '[' -z 2267694 ']' 00:06:44.663 03:39:03 -- common/autotest_common.sh@930 -- # kill -0 2267694 00:06:44.920 03:39:03 -- common/autotest_common.sh@931 -- # uname 00:06:44.920 03:39:03 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:44.920 03:39:03 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2267694 00:06:44.920 03:39:03 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:44.920 03:39:03 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:44.920 03:39:03 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2267694' 00:06:44.920 killing process with pid 2267694 00:06:44.920 03:39:03 -- common/autotest_common.sh@945 -- # kill 2267694 00:06:44.920 03:39:03 -- common/autotest_common.sh@950 -- # wait 2267694 00:06:45.177 03:39:04 -- accel/accel.sh@68 -- # trap - ERR 00:06:45.177 03:39:04 -- accel/accel.sh@81 -- # run_test accel_help accel_perf -h 00:06:45.177 03:39:04 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:06:45.177 03:39:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:45.177 03:39:04 -- common/autotest_common.sh@10 -- # set +x 00:06:45.177 03:39:04 -- common/autotest_common.sh@1104 -- # accel_perf -h 00:06:45.177 03:39:04 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:06:45.177 03:39:04 -- accel/accel.sh@12 -- # build_accel_config 00:06:45.177 03:39:04 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:45.177 03:39:04 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:45.177 03:39:04 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:45.177 03:39:04 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:45.177 03:39:04 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:45.177 03:39:04 -- accel/accel.sh@41 -- # local IFS=, 00:06:45.177 03:39:04 -- accel/accel.sh@42 -- # jq -r . 
00:06:45.177 03:39:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:45.177 03:39:04 -- common/autotest_common.sh@10 -- # set +x 00:06:45.177 03:39:04 -- accel/accel.sh@83 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:06:45.177 03:39:04 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:06:45.177 03:39:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:45.177 03:39:04 -- common/autotest_common.sh@10 -- # set +x 00:06:45.177 ************************************ 00:06:45.177 START TEST accel_missing_filename 00:06:45.177 ************************************ 00:06:45.177 03:39:04 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w compress 00:06:45.177 03:39:04 -- common/autotest_common.sh@640 -- # local es=0 00:06:45.177 03:39:04 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w compress 00:06:45.177 03:39:04 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:06:45.177 03:39:04 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:45.177 03:39:04 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:06:45.177 03:39:04 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:45.177 03:39:04 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w compress 00:06:45.177 03:39:04 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:06:45.177 03:39:04 -- accel/accel.sh@12 -- # build_accel_config 00:06:45.177 03:39:04 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:45.177 03:39:04 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:45.177 03:39:04 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:45.177 03:39:04 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:45.177 03:39:04 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:45.177 03:39:04 -- accel/accel.sh@41 -- # local IFS=, 00:06:45.177 03:39:04 -- accel/accel.sh@42 -- # jq -r . 00:06:45.177 [2024-07-14 03:39:04.109989] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:06:45.177 [2024-07-14 03:39:04.110065] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2267900 ] 00:06:45.443 EAL: No free 2048 kB hugepages reported on node 1 00:06:45.443 [2024-07-14 03:39:04.174178] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.443 [2024-07-14 03:39:04.265465] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.443 [2024-07-14 03:39:04.321738] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:45.751 [2024-07-14 03:39:04.395602] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:06:45.751 A filename is required. 
00:06:45.751 03:39:04 -- common/autotest_common.sh@643 -- # es=234 00:06:45.751 03:39:04 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:45.751 03:39:04 -- common/autotest_common.sh@652 -- # es=106 00:06:45.751 03:39:04 -- common/autotest_common.sh@653 -- # case "$es" in 00:06:45.751 03:39:04 -- common/autotest_common.sh@660 -- # es=1 00:06:45.751 03:39:04 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:45.751 00:06:45.752 real 0m0.384s 00:06:45.752 user 0m0.276s 00:06:45.752 sys 0m0.139s 00:06:45.752 03:39:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:45.752 03:39:04 -- common/autotest_common.sh@10 -- # set +x 00:06:45.752 ************************************ 00:06:45.752 END TEST accel_missing_filename 00:06:45.752 ************************************ 00:06:45.752 03:39:04 -- accel/accel.sh@85 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:45.752 03:39:04 -- common/autotest_common.sh@1077 -- # '[' 10 -le 1 ']' 00:06:45.752 03:39:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:45.752 03:39:04 -- common/autotest_common.sh@10 -- # set +x 00:06:45.752 ************************************ 00:06:45.752 START TEST accel_compress_verify 00:06:45.752 ************************************ 00:06:45.752 03:39:04 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:45.752 03:39:04 -- common/autotest_common.sh@640 -- # local es=0 00:06:45.752 03:39:04 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:45.752 03:39:04 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:06:45.752 03:39:04 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:45.752 03:39:04 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:06:45.752 03:39:04 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:45.752 03:39:04 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:45.752 03:39:04 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:45.752 03:39:04 -- accel/accel.sh@12 -- # build_accel_config 00:06:45.752 03:39:04 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:45.752 03:39:04 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:45.752 03:39:04 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:45.752 03:39:04 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:45.752 03:39:04 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:45.752 03:39:04 -- accel/accel.sh@41 -- # local IFS=, 00:06:45.752 03:39:04 -- accel/accel.sh@42 -- # jq -r . 00:06:45.752 [2024-07-14 03:39:04.519011] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:06:45.752 [2024-07-14 03:39:04.519087] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2268015 ] 00:06:45.752 EAL: No free 2048 kB hugepages reported on node 1 00:06:45.752 [2024-07-14 03:39:04.581259] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.752 [2024-07-14 03:39:04.671691] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.012 [2024-07-14 03:39:04.733460] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:46.012 [2024-07-14 03:39:04.815238] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:06:46.012 00:06:46.012 Compression does not support the verify option, aborting. 00:06:46.012 03:39:04 -- common/autotest_common.sh@643 -- # es=161 00:06:46.012 03:39:04 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:46.012 03:39:04 -- common/autotest_common.sh@652 -- # es=33 00:06:46.012 03:39:04 -- common/autotest_common.sh@653 -- # case "$es" in 00:06:46.012 03:39:04 -- common/autotest_common.sh@660 -- # es=1 00:06:46.012 03:39:04 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:46.012 00:06:46.012 real 0m0.397s 00:06:46.012 user 0m0.293s 00:06:46.012 sys 0m0.136s 00:06:46.012 03:39:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:46.012 03:39:04 -- common/autotest_common.sh@10 -- # set +x 00:06:46.012 ************************************ 00:06:46.012 END TEST accel_compress_verify 00:06:46.012 ************************************ 00:06:46.012 03:39:04 -- accel/accel.sh@87 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:06:46.012 03:39:04 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:06:46.012 03:39:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:46.012 03:39:04 -- common/autotest_common.sh@10 -- # set +x 00:06:46.012 ************************************ 00:06:46.012 START TEST accel_wrong_workload 00:06:46.012 ************************************ 00:06:46.012 03:39:04 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w foobar 00:06:46.012 03:39:04 -- common/autotest_common.sh@640 -- # local es=0 00:06:46.012 03:39:04 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:06:46.012 03:39:04 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:06:46.012 03:39:04 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:46.012 03:39:04 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:06:46.012 03:39:04 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:46.012 03:39:04 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w foobar 00:06:46.012 03:39:04 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:06:46.012 03:39:04 -- accel/accel.sh@12 -- # build_accel_config 00:06:46.012 03:39:04 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:46.012 03:39:04 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:46.012 03:39:04 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:46.012 03:39:04 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:46.012 03:39:04 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:46.012 03:39:04 -- accel/accel.sh@41 -- # local IFS=, 00:06:46.012 03:39:04 -- accel/accel.sh@42 -- # jq -r . 
00:06:46.012 Unsupported workload type: foobar 00:06:46.012 [2024-07-14 03:39:04.937561] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:06:46.012 accel_perf options: 00:06:46.012 [-h help message] 00:06:46.012 [-q queue depth per core] 00:06:46.012 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:46.012 [-T number of threads per core 00:06:46.012 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:46.012 [-t time in seconds] 00:06:46.012 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:46.012 [ dif_verify, , dif_generate, dif_generate_copy 00:06:46.012 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:46.012 [-l for compress/decompress workloads, name of uncompressed input file 00:06:46.012 [-S for crc32c workload, use this seed value (default 0) 00:06:46.012 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:46.012 [-f for fill workload, use this BYTE value (default 255) 00:06:46.012 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:46.012 [-y verify result if this switch is on] 00:06:46.012 [-a tasks to allocate per core (default: same value as -q)] 00:06:46.012 Can be used to spread operations across a wider range of memory. 00:06:46.012 03:39:04 -- common/autotest_common.sh@643 -- # es=1 00:06:46.012 03:39:04 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:46.012 03:39:04 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:06:46.012 03:39:04 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:46.012 00:06:46.013 real 0m0.022s 00:06:46.013 user 0m0.010s 00:06:46.013 sys 0m0.012s 00:06:46.013 03:39:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:46.013 03:39:04 -- common/autotest_common.sh@10 -- # set +x 00:06:46.013 ************************************ 00:06:46.013 END TEST accel_wrong_workload 00:06:46.013 ************************************ 00:06:46.271 Error: writing output failed: Broken pipe 00:06:46.271 03:39:04 -- accel/accel.sh@89 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:06:46.271 03:39:04 -- common/autotest_common.sh@1077 -- # '[' 10 -le 1 ']' 00:06:46.271 03:39:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:46.271 03:39:04 -- common/autotest_common.sh@10 -- # set +x 00:06:46.271 ************************************ 00:06:46.271 START TEST accel_negative_buffers 00:06:46.271 ************************************ 00:06:46.271 03:39:04 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:06:46.271 03:39:04 -- common/autotest_common.sh@640 -- # local es=0 00:06:46.271 03:39:04 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:06:46.271 03:39:04 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:06:46.271 03:39:04 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:46.271 03:39:04 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:06:46.271 03:39:04 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:46.271 03:39:04 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w xor -y -x -1 00:06:46.271 03:39:04 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w 
xor -y -x -1 00:06:46.271 03:39:04 -- accel/accel.sh@12 -- # build_accel_config 00:06:46.271 03:39:04 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:46.271 03:39:04 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:46.271 03:39:04 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:46.271 03:39:04 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:46.271 03:39:04 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:46.271 03:39:04 -- accel/accel.sh@41 -- # local IFS=, 00:06:46.271 03:39:04 -- accel/accel.sh@42 -- # jq -r . 00:06:46.271 -x option must be non-negative. 00:06:46.271 [2024-07-14 03:39:04.986830] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:06:46.271 accel_perf options: 00:06:46.271 [-h help message] 00:06:46.271 [-q queue depth per core] 00:06:46.271 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:46.271 [-T number of threads per core 00:06:46.271 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:46.271 [-t time in seconds] 00:06:46.271 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:46.271 [ dif_verify, , dif_generate, dif_generate_copy 00:06:46.271 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:46.271 [-l for compress/decompress workloads, name of uncompressed input file 00:06:46.271 [-S for crc32c workload, use this seed value (default 0) 00:06:46.271 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:46.271 [-f for fill workload, use this BYTE value (default 255) 00:06:46.271 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:46.271 [-y verify result if this switch is on] 00:06:46.271 [-a tasks to allocate per core (default: same value as -q)] 00:06:46.271 Can be used to spread operations across a wider range of memory. 
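The usage text above enumerates the accel_perf options these tests drive (-t, -q, -a, -o, -w, -S, -y and so on). For reference, a valid stand-alone invocation assembled from those same options could look like the sketch below; the working directory is the SPDK checkout used by this job, the option values are only examples, and a real run needs the hugepage setup the CI performs elsewhere:

  # Hypothetical manual run: 1-second crc32c workload, queue depth 32,
  # 32 tasks per core, 4 KiB transfers, seed 32, with result verification.
  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  ./build/examples/accel_perf -t 1 -q 32 -a 32 -o 4096 -w crc32c -S 32 -y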
00:06:46.271 03:39:04 -- common/autotest_common.sh@643 -- # es=1 00:06:46.271 03:39:04 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:46.271 03:39:04 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:06:46.271 03:39:04 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:46.271 00:06:46.271 real 0m0.024s 00:06:46.271 user 0m0.016s 00:06:46.271 sys 0m0.008s 00:06:46.271 03:39:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:46.271 03:39:04 -- common/autotest_common.sh@10 -- # set +x 00:06:46.271 ************************************ 00:06:46.271 END TEST accel_negative_buffers 00:06:46.271 ************************************ 00:06:46.271 Error: writing output failed: Broken pipe 00:06:46.271 03:39:05 -- accel/accel.sh@93 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:06:46.272 03:39:05 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:06:46.272 03:39:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:46.272 03:39:05 -- common/autotest_common.sh@10 -- # set +x 00:06:46.272 ************************************ 00:06:46.272 START TEST accel_crc32c 00:06:46.272 ************************************ 00:06:46.272 03:39:05 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w crc32c -S 32 -y 00:06:46.272 03:39:05 -- accel/accel.sh@16 -- # local accel_opc 00:06:46.272 03:39:05 -- accel/accel.sh@17 -- # local accel_module 00:06:46.272 03:39:05 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:46.272 03:39:05 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:06:46.272 03:39:05 -- accel/accel.sh@12 -- # build_accel_config 00:06:46.272 03:39:05 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:46.272 03:39:05 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:46.272 03:39:05 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:46.272 03:39:05 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:46.272 03:39:05 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:46.272 03:39:05 -- accel/accel.sh@41 -- # local IFS=, 00:06:46.272 03:39:05 -- accel/accel.sh@42 -- # jq -r . 00:06:46.272 [2024-07-14 03:39:05.030170] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:06:46.272 [2024-07-14 03:39:05.030239] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2268081 ] 00:06:46.272 EAL: No free 2048 kB hugepages reported on node 1 00:06:46.272 [2024-07-14 03:39:05.092632] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.272 [2024-07-14 03:39:05.183949] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.647 03:39:06 -- accel/accel.sh@18 -- # out=' 00:06:47.647 SPDK Configuration: 00:06:47.647 Core mask: 0x1 00:06:47.647 00:06:47.647 Accel Perf Configuration: 00:06:47.647 Workload Type: crc32c 00:06:47.647 CRC-32C seed: 32 00:06:47.647 Transfer size: 4096 bytes 00:06:47.647 Vector count 1 00:06:47.647 Module: software 00:06:47.647 Queue depth: 32 00:06:47.647 Allocate depth: 32 00:06:47.647 # threads/core: 1 00:06:47.647 Run time: 1 seconds 00:06:47.647 Verify: Yes 00:06:47.647 00:06:47.647 Running for 1 seconds... 
00:06:47.647 00:06:47.647 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:47.647 ------------------------------------------------------------------------------------ 00:06:47.647 0,0 404992/s 1582 MiB/s 0 0 00:06:47.647 ==================================================================================== 00:06:47.647 Total 404992/s 1582 MiB/s 0 0' 00:06:47.647 03:39:06 -- accel/accel.sh@20 -- # IFS=: 00:06:47.647 03:39:06 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:47.647 03:39:06 -- accel/accel.sh@20 -- # read -r var val 00:06:47.647 03:39:06 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:06:47.647 03:39:06 -- accel/accel.sh@12 -- # build_accel_config 00:06:47.647 03:39:06 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:47.647 03:39:06 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:47.647 03:39:06 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:47.647 03:39:06 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:47.647 03:39:06 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:47.647 03:39:06 -- accel/accel.sh@41 -- # local IFS=, 00:06:47.647 03:39:06 -- accel/accel.sh@42 -- # jq -r . 00:06:47.647 [2024-07-14 03:39:06.434973] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:06:47.647 [2024-07-14 03:39:06.435052] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2268314 ] 00:06:47.647 EAL: No free 2048 kB hugepages reported on node 1 00:06:47.647 [2024-07-14 03:39:06.497499] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.647 [2024-07-14 03:39:06.587554] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.907 03:39:06 -- accel/accel.sh@21 -- # val= 00:06:47.907 03:39:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.907 03:39:06 -- accel/accel.sh@20 -- # IFS=: 00:06:47.907 03:39:06 -- accel/accel.sh@20 -- # read -r var val 00:06:47.907 03:39:06 -- accel/accel.sh@21 -- # val= 00:06:47.907 03:39:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.907 03:39:06 -- accel/accel.sh@20 -- # IFS=: 00:06:47.907 03:39:06 -- accel/accel.sh@20 -- # read -r var val 00:06:47.907 03:39:06 -- accel/accel.sh@21 -- # val=0x1 00:06:47.907 03:39:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.907 03:39:06 -- accel/accel.sh@20 -- # IFS=: 00:06:47.907 03:39:06 -- accel/accel.sh@20 -- # read -r var val 00:06:47.907 03:39:06 -- accel/accel.sh@21 -- # val= 00:06:47.907 03:39:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.907 03:39:06 -- accel/accel.sh@20 -- # IFS=: 00:06:47.907 03:39:06 -- accel/accel.sh@20 -- # read -r var val 00:06:47.907 03:39:06 -- accel/accel.sh@21 -- # val= 00:06:47.907 03:39:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.907 03:39:06 -- accel/accel.sh@20 -- # IFS=: 00:06:47.907 03:39:06 -- accel/accel.sh@20 -- # read -r var val 00:06:47.907 03:39:06 -- accel/accel.sh@21 -- # val=crc32c 00:06:47.907 03:39:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.907 03:39:06 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:06:47.907 03:39:06 -- accel/accel.sh@20 -- # IFS=: 00:06:47.907 03:39:06 -- accel/accel.sh@20 -- # read -r var val 00:06:47.907 03:39:06 -- accel/accel.sh@21 -- # val=32 00:06:47.907 03:39:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.907 03:39:06 -- accel/accel.sh@20 -- # IFS=: 00:06:47.907 
03:39:06 -- accel/accel.sh@20 -- # read -r var val 00:06:47.907 03:39:06 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:47.907 03:39:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.907 03:39:06 -- accel/accel.sh@20 -- # IFS=: 00:06:47.907 03:39:06 -- accel/accel.sh@20 -- # read -r var val 00:06:47.907 03:39:06 -- accel/accel.sh@21 -- # val= 00:06:47.907 03:39:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.907 03:39:06 -- accel/accel.sh@20 -- # IFS=: 00:06:47.907 03:39:06 -- accel/accel.sh@20 -- # read -r var val 00:06:47.907 03:39:06 -- accel/accel.sh@21 -- # val=software 00:06:47.907 03:39:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.907 03:39:06 -- accel/accel.sh@23 -- # accel_module=software 00:06:47.907 03:39:06 -- accel/accel.sh@20 -- # IFS=: 00:06:47.907 03:39:06 -- accel/accel.sh@20 -- # read -r var val 00:06:47.907 03:39:06 -- accel/accel.sh@21 -- # val=32 00:06:47.907 03:39:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.907 03:39:06 -- accel/accel.sh@20 -- # IFS=: 00:06:47.907 03:39:06 -- accel/accel.sh@20 -- # read -r var val 00:06:47.907 03:39:06 -- accel/accel.sh@21 -- # val=32 00:06:47.907 03:39:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.907 03:39:06 -- accel/accel.sh@20 -- # IFS=: 00:06:47.907 03:39:06 -- accel/accel.sh@20 -- # read -r var val 00:06:47.907 03:39:06 -- accel/accel.sh@21 -- # val=1 00:06:47.907 03:39:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.907 03:39:06 -- accel/accel.sh@20 -- # IFS=: 00:06:47.907 03:39:06 -- accel/accel.sh@20 -- # read -r var val 00:06:47.907 03:39:06 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:47.907 03:39:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.907 03:39:06 -- accel/accel.sh@20 -- # IFS=: 00:06:47.907 03:39:06 -- accel/accel.sh@20 -- # read -r var val 00:06:47.907 03:39:06 -- accel/accel.sh@21 -- # val=Yes 00:06:47.907 03:39:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.907 03:39:06 -- accel/accel.sh@20 -- # IFS=: 00:06:47.907 03:39:06 -- accel/accel.sh@20 -- # read -r var val 00:06:47.907 03:39:06 -- accel/accel.sh@21 -- # val= 00:06:47.907 03:39:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.907 03:39:06 -- accel/accel.sh@20 -- # IFS=: 00:06:47.907 03:39:06 -- accel/accel.sh@20 -- # read -r var val 00:06:47.907 03:39:06 -- accel/accel.sh@21 -- # val= 00:06:47.907 03:39:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.907 03:39:06 -- accel/accel.sh@20 -- # IFS=: 00:06:47.907 03:39:06 -- accel/accel.sh@20 -- # read -r var val 00:06:49.287 03:39:07 -- accel/accel.sh@21 -- # val= 00:06:49.287 03:39:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.287 03:39:07 -- accel/accel.sh@20 -- # IFS=: 00:06:49.287 03:39:07 -- accel/accel.sh@20 -- # read -r var val 00:06:49.287 03:39:07 -- accel/accel.sh@21 -- # val= 00:06:49.287 03:39:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.287 03:39:07 -- accel/accel.sh@20 -- # IFS=: 00:06:49.287 03:39:07 -- accel/accel.sh@20 -- # read -r var val 00:06:49.287 03:39:07 -- accel/accel.sh@21 -- # val= 00:06:49.287 03:39:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.287 03:39:07 -- accel/accel.sh@20 -- # IFS=: 00:06:49.287 03:39:07 -- accel/accel.sh@20 -- # read -r var val 00:06:49.287 03:39:07 -- accel/accel.sh@21 -- # val= 00:06:49.287 03:39:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.287 03:39:07 -- accel/accel.sh@20 -- # IFS=: 00:06:49.287 03:39:07 -- accel/accel.sh@20 -- # read -r var val 00:06:49.287 03:39:07 -- accel/accel.sh@21 -- # val= 00:06:49.287 03:39:07 -- accel/accel.sh@22 -- # case "$var" in 
00:06:49.287 03:39:07 -- accel/accel.sh@20 -- # IFS=: 00:06:49.287 03:39:07 -- accel/accel.sh@20 -- # read -r var val 00:06:49.287 03:39:07 -- accel/accel.sh@21 -- # val= 00:06:49.287 03:39:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.287 03:39:07 -- accel/accel.sh@20 -- # IFS=: 00:06:49.287 03:39:07 -- accel/accel.sh@20 -- # read -r var val 00:06:49.287 03:39:07 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:49.287 03:39:07 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:06:49.287 03:39:07 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:49.287 00:06:49.287 real 0m2.788s 00:06:49.287 user 0m2.494s 00:06:49.287 sys 0m0.286s 00:06:49.287 03:39:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:49.287 03:39:07 -- common/autotest_common.sh@10 -- # set +x 00:06:49.287 ************************************ 00:06:49.287 END TEST accel_crc32c 00:06:49.287 ************************************ 00:06:49.287 03:39:07 -- accel/accel.sh@94 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:06:49.287 03:39:07 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:06:49.287 03:39:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:49.287 03:39:07 -- common/autotest_common.sh@10 -- # set +x 00:06:49.287 ************************************ 00:06:49.287 START TEST accel_crc32c_C2 00:06:49.287 ************************************ 00:06:49.287 03:39:07 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w crc32c -y -C 2 00:06:49.287 03:39:07 -- accel/accel.sh@16 -- # local accel_opc 00:06:49.287 03:39:07 -- accel/accel.sh@17 -- # local accel_module 00:06:49.287 03:39:07 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:49.287 03:39:07 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:49.287 03:39:07 -- accel/accel.sh@12 -- # build_accel_config 00:06:49.287 03:39:07 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:49.287 03:39:07 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:49.287 03:39:07 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:49.287 03:39:07 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:49.287 03:39:07 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:49.287 03:39:07 -- accel/accel.sh@41 -- # local IFS=, 00:06:49.287 03:39:07 -- accel/accel.sh@42 -- # jq -r . 00:06:49.287 [2024-07-14 03:39:07.846951] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:06:49.287 [2024-07-14 03:39:07.847039] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2268746 ] 00:06:49.287 EAL: No free 2048 kB hugepages reported on node 1 00:06:49.287 [2024-07-14 03:39:07.912310] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.287 [2024-07-14 03:39:08.002582] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.667 03:39:09 -- accel/accel.sh@18 -- # out=' 00:06:50.667 SPDK Configuration: 00:06:50.667 Core mask: 0x1 00:06:50.667 00:06:50.667 Accel Perf Configuration: 00:06:50.667 Workload Type: crc32c 00:06:50.667 CRC-32C seed: 0 00:06:50.667 Transfer size: 4096 bytes 00:06:50.667 Vector count 2 00:06:50.667 Module: software 00:06:50.667 Queue depth: 32 00:06:50.667 Allocate depth: 32 00:06:50.667 # threads/core: 1 00:06:50.667 Run time: 1 seconds 00:06:50.667 Verify: Yes 00:06:50.667 00:06:50.667 Running for 1 seconds... 00:06:50.667 00:06:50.667 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:50.667 ------------------------------------------------------------------------------------ 00:06:50.667 0,0 316032/s 2469 MiB/s 0 0 00:06:50.667 ==================================================================================== 00:06:50.667 Total 316032/s 1234 MiB/s 0 0' 00:06:50.667 03:39:09 -- accel/accel.sh@20 -- # IFS=: 00:06:50.667 03:39:09 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:50.667 03:39:09 -- accel/accel.sh@20 -- # read -r var val 00:06:50.667 03:39:09 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:50.667 03:39:09 -- accel/accel.sh@12 -- # build_accel_config 00:06:50.667 03:39:09 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:50.667 03:39:09 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:50.667 03:39:09 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:50.667 03:39:09 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:50.667 03:39:09 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:50.667 03:39:09 -- accel/accel.sh@41 -- # local IFS=, 00:06:50.667 03:39:09 -- accel/accel.sh@42 -- # jq -r . 00:06:50.667 [2024-07-14 03:39:09.236748] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:06:50.667 [2024-07-14 03:39:09.236831] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2269147 ] 00:06:50.667 EAL: No free 2048 kB hugepages reported on node 1 00:06:50.667 [2024-07-14 03:39:09.297649] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.667 [2024-07-14 03:39:09.386725] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.667 03:39:09 -- accel/accel.sh@21 -- # val= 00:06:50.667 03:39:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.667 03:39:09 -- accel/accel.sh@20 -- # IFS=: 00:06:50.667 03:39:09 -- accel/accel.sh@20 -- # read -r var val 00:06:50.667 03:39:09 -- accel/accel.sh@21 -- # val= 00:06:50.667 03:39:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.667 03:39:09 -- accel/accel.sh@20 -- # IFS=: 00:06:50.667 03:39:09 -- accel/accel.sh@20 -- # read -r var val 00:06:50.667 03:39:09 -- accel/accel.sh@21 -- # val=0x1 00:06:50.667 03:39:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.667 03:39:09 -- accel/accel.sh@20 -- # IFS=: 00:06:50.667 03:39:09 -- accel/accel.sh@20 -- # read -r var val 00:06:50.667 03:39:09 -- accel/accel.sh@21 -- # val= 00:06:50.667 03:39:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.667 03:39:09 -- accel/accel.sh@20 -- # IFS=: 00:06:50.667 03:39:09 -- accel/accel.sh@20 -- # read -r var val 00:06:50.667 03:39:09 -- accel/accel.sh@21 -- # val= 00:06:50.667 03:39:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.667 03:39:09 -- accel/accel.sh@20 -- # IFS=: 00:06:50.667 03:39:09 -- accel/accel.sh@20 -- # read -r var val 00:06:50.667 03:39:09 -- accel/accel.sh@21 -- # val=crc32c 00:06:50.667 03:39:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.667 03:39:09 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:06:50.667 03:39:09 -- accel/accel.sh@20 -- # IFS=: 00:06:50.667 03:39:09 -- accel/accel.sh@20 -- # read -r var val 00:06:50.667 03:39:09 -- accel/accel.sh@21 -- # val=0 00:06:50.667 03:39:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.667 03:39:09 -- accel/accel.sh@20 -- # IFS=: 00:06:50.667 03:39:09 -- accel/accel.sh@20 -- # read -r var val 00:06:50.667 03:39:09 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:50.667 03:39:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.667 03:39:09 -- accel/accel.sh@20 -- # IFS=: 00:06:50.667 03:39:09 -- accel/accel.sh@20 -- # read -r var val 00:06:50.667 03:39:09 -- accel/accel.sh@21 -- # val= 00:06:50.667 03:39:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.667 03:39:09 -- accel/accel.sh@20 -- # IFS=: 00:06:50.667 03:39:09 -- accel/accel.sh@20 -- # read -r var val 00:06:50.667 03:39:09 -- accel/accel.sh@21 -- # val=software 00:06:50.667 03:39:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.667 03:39:09 -- accel/accel.sh@23 -- # accel_module=software 00:06:50.667 03:39:09 -- accel/accel.sh@20 -- # IFS=: 00:06:50.667 03:39:09 -- accel/accel.sh@20 -- # read -r var val 00:06:50.667 03:39:09 -- accel/accel.sh@21 -- # val=32 00:06:50.667 03:39:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.667 03:39:09 -- accel/accel.sh@20 -- # IFS=: 00:06:50.667 03:39:09 -- accel/accel.sh@20 -- # read -r var val 00:06:50.667 03:39:09 -- accel/accel.sh@21 -- # val=32 00:06:50.667 03:39:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.667 03:39:09 -- accel/accel.sh@20 -- # IFS=: 00:06:50.667 03:39:09 -- accel/accel.sh@20 -- # read -r var val 00:06:50.667 03:39:09 -- 
accel/accel.sh@21 -- # val=1 00:06:50.667 03:39:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.667 03:39:09 -- accel/accel.sh@20 -- # IFS=: 00:06:50.667 03:39:09 -- accel/accel.sh@20 -- # read -r var val 00:06:50.667 03:39:09 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:50.667 03:39:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.667 03:39:09 -- accel/accel.sh@20 -- # IFS=: 00:06:50.667 03:39:09 -- accel/accel.sh@20 -- # read -r var val 00:06:50.667 03:39:09 -- accel/accel.sh@21 -- # val=Yes 00:06:50.667 03:39:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.667 03:39:09 -- accel/accel.sh@20 -- # IFS=: 00:06:50.667 03:39:09 -- accel/accel.sh@20 -- # read -r var val 00:06:50.667 03:39:09 -- accel/accel.sh@21 -- # val= 00:06:50.667 03:39:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.667 03:39:09 -- accel/accel.sh@20 -- # IFS=: 00:06:50.667 03:39:09 -- accel/accel.sh@20 -- # read -r var val 00:06:50.667 03:39:09 -- accel/accel.sh@21 -- # val= 00:06:50.667 03:39:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.667 03:39:09 -- accel/accel.sh@20 -- # IFS=: 00:06:50.667 03:39:09 -- accel/accel.sh@20 -- # read -r var val 00:06:52.046 03:39:10 -- accel/accel.sh@21 -- # val= 00:06:52.046 03:39:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.046 03:39:10 -- accel/accel.sh@20 -- # IFS=: 00:06:52.046 03:39:10 -- accel/accel.sh@20 -- # read -r var val 00:06:52.046 03:39:10 -- accel/accel.sh@21 -- # val= 00:06:52.046 03:39:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.046 03:39:10 -- accel/accel.sh@20 -- # IFS=: 00:06:52.046 03:39:10 -- accel/accel.sh@20 -- # read -r var val 00:06:52.046 03:39:10 -- accel/accel.sh@21 -- # val= 00:06:52.046 03:39:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.046 03:39:10 -- accel/accel.sh@20 -- # IFS=: 00:06:52.046 03:39:10 -- accel/accel.sh@20 -- # read -r var val 00:06:52.046 03:39:10 -- accel/accel.sh@21 -- # val= 00:06:52.046 03:39:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.046 03:39:10 -- accel/accel.sh@20 -- # IFS=: 00:06:52.046 03:39:10 -- accel/accel.sh@20 -- # read -r var val 00:06:52.046 03:39:10 -- accel/accel.sh@21 -- # val= 00:06:52.046 03:39:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.046 03:39:10 -- accel/accel.sh@20 -- # IFS=: 00:06:52.046 03:39:10 -- accel/accel.sh@20 -- # read -r var val 00:06:52.046 03:39:10 -- accel/accel.sh@21 -- # val= 00:06:52.046 03:39:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.046 03:39:10 -- accel/accel.sh@20 -- # IFS=: 00:06:52.046 03:39:10 -- accel/accel.sh@20 -- # read -r var val 00:06:52.046 03:39:10 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:52.046 03:39:10 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:06:52.046 03:39:10 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:52.046 00:06:52.046 real 0m2.791s 00:06:52.046 user 0m2.490s 00:06:52.046 sys 0m0.293s 00:06:52.046 03:39:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:52.046 03:39:10 -- common/autotest_common.sh@10 -- # set +x 00:06:52.046 ************************************ 00:06:52.046 END TEST accel_crc32c_C2 00:06:52.046 ************************************ 00:06:52.046 03:39:10 -- accel/accel.sh@95 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:06:52.046 03:39:10 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:06:52.046 03:39:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:52.046 03:39:10 -- common/autotest_common.sh@10 -- # set +x 00:06:52.046 ************************************ 00:06:52.046 START TEST accel_copy 
00:06:52.046 ************************************ 00:06:52.046 03:39:10 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy -y 00:06:52.046 03:39:10 -- accel/accel.sh@16 -- # local accel_opc 00:06:52.046 03:39:10 -- accel/accel.sh@17 -- # local accel_module 00:06:52.046 03:39:10 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy -y 00:06:52.046 03:39:10 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:52.046 03:39:10 -- accel/accel.sh@12 -- # build_accel_config 00:06:52.046 03:39:10 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:52.046 03:39:10 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:52.046 03:39:10 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:52.046 03:39:10 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:52.046 03:39:10 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:52.046 03:39:10 -- accel/accel.sh@41 -- # local IFS=, 00:06:52.046 03:39:10 -- accel/accel.sh@42 -- # jq -r . 00:06:52.046 [2024-07-14 03:39:10.656751] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:06:52.046 [2024-07-14 03:39:10.656832] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2269308 ] 00:06:52.046 EAL: No free 2048 kB hugepages reported on node 1 00:06:52.046 [2024-07-14 03:39:10.719370] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.046 [2024-07-14 03:39:10.810448] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.426 03:39:12 -- accel/accel.sh@18 -- # out=' 00:06:53.426 SPDK Configuration: 00:06:53.426 Core mask: 0x1 00:06:53.426 00:06:53.426 Accel Perf Configuration: 00:06:53.426 Workload Type: copy 00:06:53.426 Transfer size: 4096 bytes 00:06:53.426 Vector count 1 00:06:53.426 Module: software 00:06:53.426 Queue depth: 32 00:06:53.426 Allocate depth: 32 00:06:53.426 # threads/core: 1 00:06:53.426 Run time: 1 seconds 00:06:53.426 Verify: Yes 00:06:53.426 00:06:53.426 Running for 1 seconds... 00:06:53.426 00:06:53.426 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:53.426 ------------------------------------------------------------------------------------ 00:06:53.426 0,0 278752/s 1088 MiB/s 0 0 00:06:53.426 ==================================================================================== 00:06:53.426 Total 278752/s 1088 MiB/s 0 0' 00:06:53.426 03:39:12 -- accel/accel.sh@20 -- # IFS=: 00:06:53.426 03:39:12 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:06:53.426 03:39:12 -- accel/accel.sh@20 -- # read -r var val 00:06:53.426 03:39:12 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:53.426 03:39:12 -- accel/accel.sh@12 -- # build_accel_config 00:06:53.426 03:39:12 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:53.426 03:39:12 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:53.426 03:39:12 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:53.426 03:39:12 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:53.426 03:39:12 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:53.426 03:39:12 -- accel/accel.sh@41 -- # local IFS=, 00:06:53.426 03:39:12 -- accel/accel.sh@42 -- # jq -r . 00:06:53.426 [2024-07-14 03:39:12.054623] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:06:53.426 [2024-07-14 03:39:12.054704] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2269500 ] 00:06:53.426 EAL: No free 2048 kB hugepages reported on node 1 00:06:53.426 [2024-07-14 03:39:12.117487] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.426 [2024-07-14 03:39:12.208577] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.426 03:39:12 -- accel/accel.sh@21 -- # val= 00:06:53.426 03:39:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.426 03:39:12 -- accel/accel.sh@20 -- # IFS=: 00:06:53.426 03:39:12 -- accel/accel.sh@20 -- # read -r var val 00:06:53.426 03:39:12 -- accel/accel.sh@21 -- # val= 00:06:53.426 03:39:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.426 03:39:12 -- accel/accel.sh@20 -- # IFS=: 00:06:53.426 03:39:12 -- accel/accel.sh@20 -- # read -r var val 00:06:53.426 03:39:12 -- accel/accel.sh@21 -- # val=0x1 00:06:53.426 03:39:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.426 03:39:12 -- accel/accel.sh@20 -- # IFS=: 00:06:53.426 03:39:12 -- accel/accel.sh@20 -- # read -r var val 00:06:53.426 03:39:12 -- accel/accel.sh@21 -- # val= 00:06:53.426 03:39:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.426 03:39:12 -- accel/accel.sh@20 -- # IFS=: 00:06:53.426 03:39:12 -- accel/accel.sh@20 -- # read -r var val 00:06:53.426 03:39:12 -- accel/accel.sh@21 -- # val= 00:06:53.426 03:39:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.426 03:39:12 -- accel/accel.sh@20 -- # IFS=: 00:06:53.426 03:39:12 -- accel/accel.sh@20 -- # read -r var val 00:06:53.426 03:39:12 -- accel/accel.sh@21 -- # val=copy 00:06:53.426 03:39:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.426 03:39:12 -- accel/accel.sh@24 -- # accel_opc=copy 00:06:53.426 03:39:12 -- accel/accel.sh@20 -- # IFS=: 00:06:53.426 03:39:12 -- accel/accel.sh@20 -- # read -r var val 00:06:53.426 03:39:12 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:53.426 03:39:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.426 03:39:12 -- accel/accel.sh@20 -- # IFS=: 00:06:53.426 03:39:12 -- accel/accel.sh@20 -- # read -r var val 00:06:53.426 03:39:12 -- accel/accel.sh@21 -- # val= 00:06:53.426 03:39:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.426 03:39:12 -- accel/accel.sh@20 -- # IFS=: 00:06:53.426 03:39:12 -- accel/accel.sh@20 -- # read -r var val 00:06:53.426 03:39:12 -- accel/accel.sh@21 -- # val=software 00:06:53.426 03:39:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.426 03:39:12 -- accel/accel.sh@23 -- # accel_module=software 00:06:53.426 03:39:12 -- accel/accel.sh@20 -- # IFS=: 00:06:53.426 03:39:12 -- accel/accel.sh@20 -- # read -r var val 00:06:53.426 03:39:12 -- accel/accel.sh@21 -- # val=32 00:06:53.426 03:39:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.426 03:39:12 -- accel/accel.sh@20 -- # IFS=: 00:06:53.426 03:39:12 -- accel/accel.sh@20 -- # read -r var val 00:06:53.426 03:39:12 -- accel/accel.sh@21 -- # val=32 00:06:53.426 03:39:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.426 03:39:12 -- accel/accel.sh@20 -- # IFS=: 00:06:53.426 03:39:12 -- accel/accel.sh@20 -- # read -r var val 00:06:53.426 03:39:12 -- accel/accel.sh@21 -- # val=1 00:06:53.426 03:39:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.426 03:39:12 -- accel/accel.sh@20 -- # IFS=: 00:06:53.426 03:39:12 -- accel/accel.sh@20 -- # read -r var val 00:06:53.426 03:39:12 -- 
accel/accel.sh@21 -- # val='1 seconds' 00:06:53.426 03:39:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.426 03:39:12 -- accel/accel.sh@20 -- # IFS=: 00:06:53.426 03:39:12 -- accel/accel.sh@20 -- # read -r var val 00:06:53.426 03:39:12 -- accel/accel.sh@21 -- # val=Yes 00:06:53.426 03:39:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.426 03:39:12 -- accel/accel.sh@20 -- # IFS=: 00:06:53.426 03:39:12 -- accel/accel.sh@20 -- # read -r var val 00:06:53.427 03:39:12 -- accel/accel.sh@21 -- # val= 00:06:53.427 03:39:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.427 03:39:12 -- accel/accel.sh@20 -- # IFS=: 00:06:53.427 03:39:12 -- accel/accel.sh@20 -- # read -r var val 00:06:53.427 03:39:12 -- accel/accel.sh@21 -- # val= 00:06:53.427 03:39:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.427 03:39:12 -- accel/accel.sh@20 -- # IFS=: 00:06:53.427 03:39:12 -- accel/accel.sh@20 -- # read -r var val 00:06:54.810 03:39:13 -- accel/accel.sh@21 -- # val= 00:06:54.810 03:39:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.810 03:39:13 -- accel/accel.sh@20 -- # IFS=: 00:06:54.810 03:39:13 -- accel/accel.sh@20 -- # read -r var val 00:06:54.810 03:39:13 -- accel/accel.sh@21 -- # val= 00:06:54.810 03:39:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.810 03:39:13 -- accel/accel.sh@20 -- # IFS=: 00:06:54.810 03:39:13 -- accel/accel.sh@20 -- # read -r var val 00:06:54.810 03:39:13 -- accel/accel.sh@21 -- # val= 00:06:54.810 03:39:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.810 03:39:13 -- accel/accel.sh@20 -- # IFS=: 00:06:54.810 03:39:13 -- accel/accel.sh@20 -- # read -r var val 00:06:54.810 03:39:13 -- accel/accel.sh@21 -- # val= 00:06:54.810 03:39:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.810 03:39:13 -- accel/accel.sh@20 -- # IFS=: 00:06:54.810 03:39:13 -- accel/accel.sh@20 -- # read -r var val 00:06:54.810 03:39:13 -- accel/accel.sh@21 -- # val= 00:06:54.810 03:39:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.810 03:39:13 -- accel/accel.sh@20 -- # IFS=: 00:06:54.810 03:39:13 -- accel/accel.sh@20 -- # read -r var val 00:06:54.810 03:39:13 -- accel/accel.sh@21 -- # val= 00:06:54.810 03:39:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.810 03:39:13 -- accel/accel.sh@20 -- # IFS=: 00:06:54.810 03:39:13 -- accel/accel.sh@20 -- # read -r var val 00:06:54.810 03:39:13 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:54.810 03:39:13 -- accel/accel.sh@28 -- # [[ -n copy ]] 00:06:54.810 03:39:13 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:54.810 00:06:54.810 real 0m2.790s 00:06:54.810 user 0m2.493s 00:06:54.810 sys 0m0.289s 00:06:54.810 03:39:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:54.810 03:39:13 -- common/autotest_common.sh@10 -- # set +x 00:06:54.810 ************************************ 00:06:54.810 END TEST accel_copy 00:06:54.810 ************************************ 00:06:54.810 03:39:13 -- accel/accel.sh@96 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:54.810 03:39:13 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:06:54.810 03:39:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:54.810 03:39:13 -- common/autotest_common.sh@10 -- # set +x 00:06:54.810 ************************************ 00:06:54.810 START TEST accel_fill 00:06:54.811 ************************************ 00:06:54.811 03:39:13 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:54.811 03:39:13 -- accel/accel.sh@16 -- # local accel_opc 
00:06:54.811 03:39:13 -- accel/accel.sh@17 -- # local accel_module 00:06:54.811 03:39:13 -- accel/accel.sh@18 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:54.811 03:39:13 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:54.811 03:39:13 -- accel/accel.sh@12 -- # build_accel_config 00:06:54.811 03:39:13 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:54.811 03:39:13 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:54.811 03:39:13 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:54.811 03:39:13 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:54.811 03:39:13 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:54.811 03:39:13 -- accel/accel.sh@41 -- # local IFS=, 00:06:54.811 03:39:13 -- accel/accel.sh@42 -- # jq -r . 00:06:54.811 [2024-07-14 03:39:13.470935] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:06:54.811 [2024-07-14 03:39:13.471006] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2269731 ] 00:06:54.811 EAL: No free 2048 kB hugepages reported on node 1 00:06:54.811 [2024-07-14 03:39:13.534017] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.811 [2024-07-14 03:39:13.624729] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.189 03:39:14 -- accel/accel.sh@18 -- # out=' 00:06:56.189 SPDK Configuration: 00:06:56.189 Core mask: 0x1 00:06:56.189 00:06:56.189 Accel Perf Configuration: 00:06:56.189 Workload Type: fill 00:06:56.189 Fill pattern: 0x80 00:06:56.189 Transfer size: 4096 bytes 00:06:56.189 Vector count 1 00:06:56.189 Module: software 00:06:56.189 Queue depth: 64 00:06:56.189 Allocate depth: 64 00:06:56.189 # threads/core: 1 00:06:56.189 Run time: 1 seconds 00:06:56.189 Verify: Yes 00:06:56.189 00:06:56.189 Running for 1 seconds... 00:06:56.189 00:06:56.189 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:56.189 ------------------------------------------------------------------------------------ 00:06:56.189 0,0 404864/s 1581 MiB/s 0 0 00:06:56.189 ==================================================================================== 00:06:56.189 Total 404864/s 1581 MiB/s 0 0' 00:06:56.189 03:39:14 -- accel/accel.sh@20 -- # IFS=: 00:06:56.189 03:39:14 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:56.189 03:39:14 -- accel/accel.sh@20 -- # read -r var val 00:06:56.189 03:39:14 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:56.190 03:39:14 -- accel/accel.sh@12 -- # build_accel_config 00:06:56.190 03:39:14 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:56.190 03:39:14 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:56.190 03:39:14 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:56.190 03:39:14 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:56.190 03:39:14 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:56.190 03:39:14 -- accel/accel.sh@41 -- # local IFS=, 00:06:56.190 03:39:14 -- accel/accel.sh@42 -- # jq -r . 00:06:56.190 [2024-07-14 03:39:14.871038] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:06:56.190 [2024-07-14 03:39:14.871115] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2269869 ] 00:06:56.190 EAL: No free 2048 kB hugepages reported on node 1 00:06:56.190 [2024-07-14 03:39:14.931208] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.190 [2024-07-14 03:39:15.021690] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.190 03:39:15 -- accel/accel.sh@21 -- # val= 00:06:56.190 03:39:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.190 03:39:15 -- accel/accel.sh@20 -- # IFS=: 00:06:56.190 03:39:15 -- accel/accel.sh@20 -- # read -r var val 00:06:56.190 03:39:15 -- accel/accel.sh@21 -- # val= 00:06:56.190 03:39:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.190 03:39:15 -- accel/accel.sh@20 -- # IFS=: 00:06:56.190 03:39:15 -- accel/accel.sh@20 -- # read -r var val 00:06:56.190 03:39:15 -- accel/accel.sh@21 -- # val=0x1 00:06:56.190 03:39:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.190 03:39:15 -- accel/accel.sh@20 -- # IFS=: 00:06:56.190 03:39:15 -- accel/accel.sh@20 -- # read -r var val 00:06:56.190 03:39:15 -- accel/accel.sh@21 -- # val= 00:06:56.190 03:39:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.190 03:39:15 -- accel/accel.sh@20 -- # IFS=: 00:06:56.190 03:39:15 -- accel/accel.sh@20 -- # read -r var val 00:06:56.190 03:39:15 -- accel/accel.sh@21 -- # val= 00:06:56.190 03:39:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.190 03:39:15 -- accel/accel.sh@20 -- # IFS=: 00:06:56.190 03:39:15 -- accel/accel.sh@20 -- # read -r var val 00:06:56.190 03:39:15 -- accel/accel.sh@21 -- # val=fill 00:06:56.190 03:39:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.190 03:39:15 -- accel/accel.sh@24 -- # accel_opc=fill 00:06:56.190 03:39:15 -- accel/accel.sh@20 -- # IFS=: 00:06:56.190 03:39:15 -- accel/accel.sh@20 -- # read -r var val 00:06:56.190 03:39:15 -- accel/accel.sh@21 -- # val=0x80 00:06:56.190 03:39:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.190 03:39:15 -- accel/accel.sh@20 -- # IFS=: 00:06:56.190 03:39:15 -- accel/accel.sh@20 -- # read -r var val 00:06:56.190 03:39:15 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:56.190 03:39:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.190 03:39:15 -- accel/accel.sh@20 -- # IFS=: 00:06:56.190 03:39:15 -- accel/accel.sh@20 -- # read -r var val 00:06:56.190 03:39:15 -- accel/accel.sh@21 -- # val= 00:06:56.190 03:39:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.190 03:39:15 -- accel/accel.sh@20 -- # IFS=: 00:06:56.190 03:39:15 -- accel/accel.sh@20 -- # read -r var val 00:06:56.190 03:39:15 -- accel/accel.sh@21 -- # val=software 00:06:56.190 03:39:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.190 03:39:15 -- accel/accel.sh@23 -- # accel_module=software 00:06:56.190 03:39:15 -- accel/accel.sh@20 -- # IFS=: 00:06:56.190 03:39:15 -- accel/accel.sh@20 -- # read -r var val 00:06:56.190 03:39:15 -- accel/accel.sh@21 -- # val=64 00:06:56.190 03:39:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.190 03:39:15 -- accel/accel.sh@20 -- # IFS=: 00:06:56.190 03:39:15 -- accel/accel.sh@20 -- # read -r var val 00:06:56.190 03:39:15 -- accel/accel.sh@21 -- # val=64 00:06:56.190 03:39:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.190 03:39:15 -- accel/accel.sh@20 -- # IFS=: 00:06:56.190 03:39:15 -- accel/accel.sh@20 -- # read -r var val 00:06:56.190 03:39:15 -- 
accel/accel.sh@21 -- # val=1 00:06:56.190 03:39:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.190 03:39:15 -- accel/accel.sh@20 -- # IFS=: 00:06:56.190 03:39:15 -- accel/accel.sh@20 -- # read -r var val 00:06:56.190 03:39:15 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:56.190 03:39:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.190 03:39:15 -- accel/accel.sh@20 -- # IFS=: 00:06:56.190 03:39:15 -- accel/accel.sh@20 -- # read -r var val 00:06:56.190 03:39:15 -- accel/accel.sh@21 -- # val=Yes 00:06:56.190 03:39:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.190 03:39:15 -- accel/accel.sh@20 -- # IFS=: 00:06:56.190 03:39:15 -- accel/accel.sh@20 -- # read -r var val 00:06:56.190 03:39:15 -- accel/accel.sh@21 -- # val= 00:06:56.190 03:39:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.190 03:39:15 -- accel/accel.sh@20 -- # IFS=: 00:06:56.190 03:39:15 -- accel/accel.sh@20 -- # read -r var val 00:06:56.190 03:39:15 -- accel/accel.sh@21 -- # val= 00:06:56.190 03:39:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.190 03:39:15 -- accel/accel.sh@20 -- # IFS=: 00:06:56.190 03:39:15 -- accel/accel.sh@20 -- # read -r var val 00:06:57.569 03:39:16 -- accel/accel.sh@21 -- # val= 00:06:57.569 03:39:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.569 03:39:16 -- accel/accel.sh@20 -- # IFS=: 00:06:57.569 03:39:16 -- accel/accel.sh@20 -- # read -r var val 00:06:57.569 03:39:16 -- accel/accel.sh@21 -- # val= 00:06:57.569 03:39:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.569 03:39:16 -- accel/accel.sh@20 -- # IFS=: 00:06:57.569 03:39:16 -- accel/accel.sh@20 -- # read -r var val 00:06:57.569 03:39:16 -- accel/accel.sh@21 -- # val= 00:06:57.569 03:39:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.569 03:39:16 -- accel/accel.sh@20 -- # IFS=: 00:06:57.569 03:39:16 -- accel/accel.sh@20 -- # read -r var val 00:06:57.569 03:39:16 -- accel/accel.sh@21 -- # val= 00:06:57.569 03:39:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.569 03:39:16 -- accel/accel.sh@20 -- # IFS=: 00:06:57.569 03:39:16 -- accel/accel.sh@20 -- # read -r var val 00:06:57.569 03:39:16 -- accel/accel.sh@21 -- # val= 00:06:57.569 03:39:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.569 03:39:16 -- accel/accel.sh@20 -- # IFS=: 00:06:57.569 03:39:16 -- accel/accel.sh@20 -- # read -r var val 00:06:57.569 03:39:16 -- accel/accel.sh@21 -- # val= 00:06:57.569 03:39:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.569 03:39:16 -- accel/accel.sh@20 -- # IFS=: 00:06:57.569 03:39:16 -- accel/accel.sh@20 -- # read -r var val 00:06:57.569 03:39:16 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:57.569 03:39:16 -- accel/accel.sh@28 -- # [[ -n fill ]] 00:06:57.569 03:39:16 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:57.569 00:06:57.569 real 0m2.799s 00:06:57.569 user 0m2.510s 00:06:57.569 sys 0m0.282s 00:06:57.569 03:39:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:57.569 03:39:16 -- common/autotest_common.sh@10 -- # set +x 00:06:57.569 ************************************ 00:06:57.569 END TEST accel_fill 00:06:57.569 ************************************ 00:06:57.569 03:39:16 -- accel/accel.sh@97 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:06:57.569 03:39:16 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:06:57.569 03:39:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:57.569 03:39:16 -- common/autotest_common.sh@10 -- # set +x 00:06:57.569 ************************************ 00:06:57.569 START TEST 
accel_copy_crc32c 00:06:57.569 ************************************ 00:06:57.569 03:39:16 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy_crc32c -y 00:06:57.569 03:39:16 -- accel/accel.sh@16 -- # local accel_opc 00:06:57.569 03:39:16 -- accel/accel.sh@17 -- # local accel_module 00:06:57.569 03:39:16 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:57.569 03:39:16 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:06:57.569 03:39:16 -- accel/accel.sh@12 -- # build_accel_config 00:06:57.569 03:39:16 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:57.569 03:39:16 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:57.569 03:39:16 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:57.569 03:39:16 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:57.569 03:39:16 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:57.569 03:39:16 -- accel/accel.sh@41 -- # local IFS=, 00:06:57.569 03:39:16 -- accel/accel.sh@42 -- # jq -r . 00:06:57.569 [2024-07-14 03:39:16.292391] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:06:57.569 [2024-07-14 03:39:16.292470] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2270033 ] 00:06:57.569 EAL: No free 2048 kB hugepages reported on node 1 00:06:57.569 [2024-07-14 03:39:16.358059] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.569 [2024-07-14 03:39:16.448790] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.944 03:39:17 -- accel/accel.sh@18 -- # out=' 00:06:58.944 SPDK Configuration: 00:06:58.944 Core mask: 0x1 00:06:58.944 00:06:58.944 Accel Perf Configuration: 00:06:58.944 Workload Type: copy_crc32c 00:06:58.944 CRC-32C seed: 0 00:06:58.944 Vector size: 4096 bytes 00:06:58.944 Transfer size: 4096 bytes 00:06:58.944 Vector count 1 00:06:58.945 Module: software 00:06:58.945 Queue depth: 32 00:06:58.945 Allocate depth: 32 00:06:58.945 # threads/core: 1 00:06:58.945 Run time: 1 seconds 00:06:58.945 Verify: Yes 00:06:58.945 00:06:58.945 Running for 1 seconds... 00:06:58.945 00:06:58.945 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:58.945 ------------------------------------------------------------------------------------ 00:06:58.945 0,0 217792/s 850 MiB/s 0 0 00:06:58.945 ==================================================================================== 00:06:58.945 Total 217792/s 850 MiB/s 0 0' 00:06:58.945 03:39:17 -- accel/accel.sh@20 -- # IFS=: 00:06:58.945 03:39:17 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:58.945 03:39:17 -- accel/accel.sh@20 -- # read -r var val 00:06:58.945 03:39:17 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:06:58.945 03:39:17 -- accel/accel.sh@12 -- # build_accel_config 00:06:58.945 03:39:17 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:58.945 03:39:17 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:58.945 03:39:17 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:58.945 03:39:17 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:58.945 03:39:17 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:58.945 03:39:17 -- accel/accel.sh@41 -- # local IFS=, 00:06:58.945 03:39:17 -- accel/accel.sh@42 -- # jq -r . 
00:06:58.945 [2024-07-14 03:39:17.699384] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:06:58.945 [2024-07-14 03:39:17.699454] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2270185 ] 00:06:58.945 EAL: No free 2048 kB hugepages reported on node 1 00:06:58.945 [2024-07-14 03:39:17.761218] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.945 [2024-07-14 03:39:17.851356] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.204 03:39:17 -- accel/accel.sh@21 -- # val= 00:06:59.204 03:39:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.204 03:39:17 -- accel/accel.sh@20 -- # IFS=: 00:06:59.204 03:39:17 -- accel/accel.sh@20 -- # read -r var val 00:06:59.204 03:39:17 -- accel/accel.sh@21 -- # val= 00:06:59.204 03:39:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.204 03:39:17 -- accel/accel.sh@20 -- # IFS=: 00:06:59.204 03:39:17 -- accel/accel.sh@20 -- # read -r var val 00:06:59.204 03:39:17 -- accel/accel.sh@21 -- # val=0x1 00:06:59.204 03:39:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.204 03:39:17 -- accel/accel.sh@20 -- # IFS=: 00:06:59.204 03:39:17 -- accel/accel.sh@20 -- # read -r var val 00:06:59.204 03:39:17 -- accel/accel.sh@21 -- # val= 00:06:59.204 03:39:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.204 03:39:17 -- accel/accel.sh@20 -- # IFS=: 00:06:59.204 03:39:17 -- accel/accel.sh@20 -- # read -r var val 00:06:59.204 03:39:17 -- accel/accel.sh@21 -- # val= 00:06:59.204 03:39:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.204 03:39:17 -- accel/accel.sh@20 -- # IFS=: 00:06:59.204 03:39:17 -- accel/accel.sh@20 -- # read -r var val 00:06:59.204 03:39:17 -- accel/accel.sh@21 -- # val=copy_crc32c 00:06:59.204 03:39:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.204 03:39:17 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:06:59.204 03:39:17 -- accel/accel.sh@20 -- # IFS=: 00:06:59.204 03:39:17 -- accel/accel.sh@20 -- # read -r var val 00:06:59.204 03:39:17 -- accel/accel.sh@21 -- # val=0 00:06:59.204 03:39:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.204 03:39:17 -- accel/accel.sh@20 -- # IFS=: 00:06:59.204 03:39:17 -- accel/accel.sh@20 -- # read -r var val 00:06:59.204 03:39:17 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:59.204 03:39:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.204 03:39:17 -- accel/accel.sh@20 -- # IFS=: 00:06:59.204 03:39:17 -- accel/accel.sh@20 -- # read -r var val 00:06:59.204 03:39:17 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:59.204 03:39:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.204 03:39:17 -- accel/accel.sh@20 -- # IFS=: 00:06:59.204 03:39:17 -- accel/accel.sh@20 -- # read -r var val 00:06:59.204 03:39:17 -- accel/accel.sh@21 -- # val= 00:06:59.204 03:39:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.204 03:39:17 -- accel/accel.sh@20 -- # IFS=: 00:06:59.204 03:39:17 -- accel/accel.sh@20 -- # read -r var val 00:06:59.204 03:39:17 -- accel/accel.sh@21 -- # val=software 00:06:59.204 03:39:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.204 03:39:17 -- accel/accel.sh@23 -- # accel_module=software 00:06:59.204 03:39:17 -- accel/accel.sh@20 -- # IFS=: 00:06:59.204 03:39:17 -- accel/accel.sh@20 -- # read -r var val 00:06:59.204 03:39:17 -- accel/accel.sh@21 -- # val=32 00:06:59.204 03:39:17 -- accel/accel.sh@22 -- # case "$var" in 
00:06:59.204 03:39:17 -- accel/accel.sh@20 -- # IFS=: 00:06:59.204 03:39:17 -- accel/accel.sh@20 -- # read -r var val 00:06:59.204 03:39:17 -- accel/accel.sh@21 -- # val=32 00:06:59.204 03:39:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.204 03:39:17 -- accel/accel.sh@20 -- # IFS=: 00:06:59.204 03:39:17 -- accel/accel.sh@20 -- # read -r var val 00:06:59.204 03:39:17 -- accel/accel.sh@21 -- # val=1 00:06:59.204 03:39:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.204 03:39:17 -- accel/accel.sh@20 -- # IFS=: 00:06:59.204 03:39:17 -- accel/accel.sh@20 -- # read -r var val 00:06:59.204 03:39:17 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:59.204 03:39:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.204 03:39:17 -- accel/accel.sh@20 -- # IFS=: 00:06:59.204 03:39:17 -- accel/accel.sh@20 -- # read -r var val 00:06:59.204 03:39:17 -- accel/accel.sh@21 -- # val=Yes 00:06:59.204 03:39:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.204 03:39:17 -- accel/accel.sh@20 -- # IFS=: 00:06:59.204 03:39:17 -- accel/accel.sh@20 -- # read -r var val 00:06:59.204 03:39:17 -- accel/accel.sh@21 -- # val= 00:06:59.204 03:39:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.204 03:39:17 -- accel/accel.sh@20 -- # IFS=: 00:06:59.204 03:39:17 -- accel/accel.sh@20 -- # read -r var val 00:06:59.204 03:39:17 -- accel/accel.sh@21 -- # val= 00:06:59.204 03:39:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.204 03:39:17 -- accel/accel.sh@20 -- # IFS=: 00:06:59.204 03:39:17 -- accel/accel.sh@20 -- # read -r var val 00:07:00.143 03:39:19 -- accel/accel.sh@21 -- # val= 00:07:00.143 03:39:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.143 03:39:19 -- accel/accel.sh@20 -- # IFS=: 00:07:00.143 03:39:19 -- accel/accel.sh@20 -- # read -r var val 00:07:00.143 03:39:19 -- accel/accel.sh@21 -- # val= 00:07:00.143 03:39:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.143 03:39:19 -- accel/accel.sh@20 -- # IFS=: 00:07:00.143 03:39:19 -- accel/accel.sh@20 -- # read -r var val 00:07:00.143 03:39:19 -- accel/accel.sh@21 -- # val= 00:07:00.143 03:39:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.143 03:39:19 -- accel/accel.sh@20 -- # IFS=: 00:07:00.143 03:39:19 -- accel/accel.sh@20 -- # read -r var val 00:07:00.143 03:39:19 -- accel/accel.sh@21 -- # val= 00:07:00.143 03:39:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.143 03:39:19 -- accel/accel.sh@20 -- # IFS=: 00:07:00.143 03:39:19 -- accel/accel.sh@20 -- # read -r var val 00:07:00.143 03:39:19 -- accel/accel.sh@21 -- # val= 00:07:00.143 03:39:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.143 03:39:19 -- accel/accel.sh@20 -- # IFS=: 00:07:00.143 03:39:19 -- accel/accel.sh@20 -- # read -r var val 00:07:00.143 03:39:19 -- accel/accel.sh@21 -- # val= 00:07:00.143 03:39:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.143 03:39:19 -- accel/accel.sh@20 -- # IFS=: 00:07:00.143 03:39:19 -- accel/accel.sh@20 -- # read -r var val 00:07:00.143 03:39:19 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:00.143 03:39:19 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:07:00.143 03:39:19 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:00.143 00:07:00.143 real 0m2.795s 00:07:00.143 user 0m2.496s 00:07:00.143 sys 0m0.291s 00:07:00.143 03:39:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:00.143 03:39:19 -- common/autotest_common.sh@10 -- # set +x 00:07:00.143 ************************************ 00:07:00.143 END TEST accel_copy_crc32c 00:07:00.143 ************************************ 00:07:00.403 
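The bandwidth column in these reports follows directly from the transfer rate and transfer size; for the copy_crc32c run just above, 217792 transfers/s at 4096 bytes each works out to roughly 850 MiB/s, matching the table. Any row can be sanity-checked the same way:

  # Recompute a bandwidth figure from transfers/s and transfer size in bytes.
  echo $(( 217792 * 4096 / 1024 / 1024 ))   # -> 850 (MiB/s), as reported above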
03:39:19 -- accel/accel.sh@98 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:07:00.403 03:39:19 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:07:00.403 03:39:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:00.403 03:39:19 -- common/autotest_common.sh@10 -- # set +x 00:07:00.403 ************************************ 00:07:00.403 START TEST accel_copy_crc32c_C2 00:07:00.403 ************************************ 00:07:00.403 03:39:19 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:07:00.403 03:39:19 -- accel/accel.sh@16 -- # local accel_opc 00:07:00.403 03:39:19 -- accel/accel.sh@17 -- # local accel_module 00:07:00.403 03:39:19 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:07:00.403 03:39:19 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:07:00.403 03:39:19 -- accel/accel.sh@12 -- # build_accel_config 00:07:00.403 03:39:19 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:00.403 03:39:19 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:00.403 03:39:19 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:00.403 03:39:19 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:00.403 03:39:19 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:00.403 03:39:19 -- accel/accel.sh@41 -- # local IFS=, 00:07:00.403 03:39:19 -- accel/accel.sh@42 -- # jq -r . 00:07:00.403 [2024-07-14 03:39:19.110203] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:07:00.403 [2024-07-14 03:39:19.110282] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2270453 ] 00:07:00.403 EAL: No free 2048 kB hugepages reported on node 1 00:07:00.403 [2024-07-14 03:39:19.171416] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.403 [2024-07-14 03:39:19.262174] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.786 03:39:20 -- accel/accel.sh@18 -- # out=' 00:07:01.786 SPDK Configuration: 00:07:01.786 Core mask: 0x1 00:07:01.786 00:07:01.786 Accel Perf Configuration: 00:07:01.786 Workload Type: copy_crc32c 00:07:01.786 CRC-32C seed: 0 00:07:01.786 Vector size: 4096 bytes 00:07:01.786 Transfer size: 8192 bytes 00:07:01.786 Vector count 2 00:07:01.786 Module: software 00:07:01.786 Queue depth: 32 00:07:01.786 Allocate depth: 32 00:07:01.786 # threads/core: 1 00:07:01.786 Run time: 1 seconds 00:07:01.786 Verify: Yes 00:07:01.786 00:07:01.786 Running for 1 seconds... 
00:07:01.786 00:07:01.786 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:01.786 ------------------------------------------------------------------------------------ 00:07:01.786 0,0 152736/s 1193 MiB/s 0 0 00:07:01.786 ==================================================================================== 00:07:01.786 Total 152736/s 596 MiB/s 0 0' 00:07:01.786 03:39:20 -- accel/accel.sh@20 -- # IFS=: 00:07:01.786 03:39:20 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:07:01.786 03:39:20 -- accel/accel.sh@20 -- # read -r var val 00:07:01.786 03:39:20 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:07:01.786 03:39:20 -- accel/accel.sh@12 -- # build_accel_config 00:07:01.786 03:39:20 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:01.786 03:39:20 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:01.786 03:39:20 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:01.786 03:39:20 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:01.786 03:39:20 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:01.786 03:39:20 -- accel/accel.sh@41 -- # local IFS=, 00:07:01.786 03:39:20 -- accel/accel.sh@42 -- # jq -r . 00:07:01.786 [2024-07-14 03:39:20.512974] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:07:01.786 [2024-07-14 03:39:20.513052] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2270597 ] 00:07:01.786 EAL: No free 2048 kB hugepages reported on node 1 00:07:01.786 [2024-07-14 03:39:20.573684] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.786 [2024-07-14 03:39:20.664504] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.061 03:39:20 -- accel/accel.sh@21 -- # val= 00:07:02.061 03:39:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.061 03:39:20 -- accel/accel.sh@20 -- # IFS=: 00:07:02.061 03:39:20 -- accel/accel.sh@20 -- # read -r var val 00:07:02.061 03:39:20 -- accel/accel.sh@21 -- # val= 00:07:02.061 03:39:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.061 03:39:20 -- accel/accel.sh@20 -- # IFS=: 00:07:02.061 03:39:20 -- accel/accel.sh@20 -- # read -r var val 00:07:02.061 03:39:20 -- accel/accel.sh@21 -- # val=0x1 00:07:02.061 03:39:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.061 03:39:20 -- accel/accel.sh@20 -- # IFS=: 00:07:02.061 03:39:20 -- accel/accel.sh@20 -- # read -r var val 00:07:02.061 03:39:20 -- accel/accel.sh@21 -- # val= 00:07:02.061 03:39:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.061 03:39:20 -- accel/accel.sh@20 -- # IFS=: 00:07:02.061 03:39:20 -- accel/accel.sh@20 -- # read -r var val 00:07:02.061 03:39:20 -- accel/accel.sh@21 -- # val= 00:07:02.061 03:39:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.061 03:39:20 -- accel/accel.sh@20 -- # IFS=: 00:07:02.061 03:39:20 -- accel/accel.sh@20 -- # read -r var val 00:07:02.061 03:39:20 -- accel/accel.sh@21 -- # val=copy_crc32c 00:07:02.061 03:39:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.061 03:39:20 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:07:02.061 03:39:20 -- accel/accel.sh@20 -- # IFS=: 00:07:02.061 03:39:20 -- accel/accel.sh@20 -- # read -r var val 00:07:02.061 03:39:20 -- accel/accel.sh@21 -- # val=0 00:07:02.062 03:39:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.062 03:39:20 -- accel/accel.sh@20 -- # IFS=: 
00:07:02.062 03:39:20 -- accel/accel.sh@20 -- # read -r var val 00:07:02.062 03:39:20 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:02.062 03:39:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.062 03:39:20 -- accel/accel.sh@20 -- # IFS=: 00:07:02.062 03:39:20 -- accel/accel.sh@20 -- # read -r var val 00:07:02.062 03:39:20 -- accel/accel.sh@21 -- # val='8192 bytes' 00:07:02.062 03:39:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.062 03:39:20 -- accel/accel.sh@20 -- # IFS=: 00:07:02.062 03:39:20 -- accel/accel.sh@20 -- # read -r var val 00:07:02.062 03:39:20 -- accel/accel.sh@21 -- # val= 00:07:02.062 03:39:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.062 03:39:20 -- accel/accel.sh@20 -- # IFS=: 00:07:02.062 03:39:20 -- accel/accel.sh@20 -- # read -r var val 00:07:02.062 03:39:20 -- accel/accel.sh@21 -- # val=software 00:07:02.062 03:39:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.062 03:39:20 -- accel/accel.sh@23 -- # accel_module=software 00:07:02.062 03:39:20 -- accel/accel.sh@20 -- # IFS=: 00:07:02.062 03:39:20 -- accel/accel.sh@20 -- # read -r var val 00:07:02.062 03:39:20 -- accel/accel.sh@21 -- # val=32 00:07:02.062 03:39:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.062 03:39:20 -- accel/accel.sh@20 -- # IFS=: 00:07:02.062 03:39:20 -- accel/accel.sh@20 -- # read -r var val 00:07:02.062 03:39:20 -- accel/accel.sh@21 -- # val=32 00:07:02.062 03:39:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.062 03:39:20 -- accel/accel.sh@20 -- # IFS=: 00:07:02.062 03:39:20 -- accel/accel.sh@20 -- # read -r var val 00:07:02.062 03:39:20 -- accel/accel.sh@21 -- # val=1 00:07:02.062 03:39:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.062 03:39:20 -- accel/accel.sh@20 -- # IFS=: 00:07:02.062 03:39:20 -- accel/accel.sh@20 -- # read -r var val 00:07:02.062 03:39:20 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:02.062 03:39:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.062 03:39:20 -- accel/accel.sh@20 -- # IFS=: 00:07:02.062 03:39:20 -- accel/accel.sh@20 -- # read -r var val 00:07:02.062 03:39:20 -- accel/accel.sh@21 -- # val=Yes 00:07:02.062 03:39:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.062 03:39:20 -- accel/accel.sh@20 -- # IFS=: 00:07:02.062 03:39:20 -- accel/accel.sh@20 -- # read -r var val 00:07:02.062 03:39:20 -- accel/accel.sh@21 -- # val= 00:07:02.062 03:39:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.062 03:39:20 -- accel/accel.sh@20 -- # IFS=: 00:07:02.062 03:39:20 -- accel/accel.sh@20 -- # read -r var val 00:07:02.062 03:39:20 -- accel/accel.sh@21 -- # val= 00:07:02.062 03:39:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.062 03:39:20 -- accel/accel.sh@20 -- # IFS=: 00:07:02.062 03:39:20 -- accel/accel.sh@20 -- # read -r var val 00:07:03.000 03:39:21 -- accel/accel.sh@21 -- # val= 00:07:03.000 03:39:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.000 03:39:21 -- accel/accel.sh@20 -- # IFS=: 00:07:03.000 03:39:21 -- accel/accel.sh@20 -- # read -r var val 00:07:03.000 03:39:21 -- accel/accel.sh@21 -- # val= 00:07:03.000 03:39:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.000 03:39:21 -- accel/accel.sh@20 -- # IFS=: 00:07:03.000 03:39:21 -- accel/accel.sh@20 -- # read -r var val 00:07:03.000 03:39:21 -- accel/accel.sh@21 -- # val= 00:07:03.000 03:39:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.000 03:39:21 -- accel/accel.sh@20 -- # IFS=: 00:07:03.000 03:39:21 -- accel/accel.sh@20 -- # read -r var val 00:07:03.000 03:39:21 -- accel/accel.sh@21 -- # val= 00:07:03.000 03:39:21 -- 
accel/accel.sh@22 -- # case "$var" in 00:07:03.000 03:39:21 -- accel/accel.sh@20 -- # IFS=: 00:07:03.000 03:39:21 -- accel/accel.sh@20 -- # read -r var val 00:07:03.000 03:39:21 -- accel/accel.sh@21 -- # val= 00:07:03.000 03:39:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.000 03:39:21 -- accel/accel.sh@20 -- # IFS=: 00:07:03.000 03:39:21 -- accel/accel.sh@20 -- # read -r var val 00:07:03.000 03:39:21 -- accel/accel.sh@21 -- # val= 00:07:03.000 03:39:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.000 03:39:21 -- accel/accel.sh@20 -- # IFS=: 00:07:03.000 03:39:21 -- accel/accel.sh@20 -- # read -r var val 00:07:03.000 03:39:21 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:03.000 03:39:21 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:07:03.000 03:39:21 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:03.000 00:07:03.000 real 0m2.807s 00:07:03.000 user 0m2.522s 00:07:03.000 sys 0m0.278s 00:07:03.000 03:39:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:03.000 03:39:21 -- common/autotest_common.sh@10 -- # set +x 00:07:03.000 ************************************ 00:07:03.000 END TEST accel_copy_crc32c_C2 00:07:03.000 ************************************ 00:07:03.000 03:39:21 -- accel/accel.sh@99 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:07:03.000 03:39:21 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:07:03.000 03:39:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:03.000 03:39:21 -- common/autotest_common.sh@10 -- # set +x 00:07:03.000 ************************************ 00:07:03.000 START TEST accel_dualcast 00:07:03.000 ************************************ 00:07:03.000 03:39:21 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dualcast -y 00:07:03.000 03:39:21 -- accel/accel.sh@16 -- # local accel_opc 00:07:03.000 03:39:21 -- accel/accel.sh@17 -- # local accel_module 00:07:03.000 03:39:21 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dualcast -y 00:07:03.000 03:39:21 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:07:03.000 03:39:21 -- accel/accel.sh@12 -- # build_accel_config 00:07:03.000 03:39:21 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:03.000 03:39:21 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:03.000 03:39:21 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:03.000 03:39:21 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:03.000 03:39:21 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:03.000 03:39:21 -- accel/accel.sh@41 -- # local IFS=, 00:07:03.000 03:39:21 -- accel/accel.sh@42 -- # jq -r . 00:07:03.000 [2024-07-14 03:39:21.938963] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:07:03.000 [2024-07-14 03:39:21.939037] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2270752 ] 00:07:03.260 EAL: No free 2048 kB hugepages reported on node 1 00:07:03.260 [2024-07-14 03:39:22.002735] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.260 [2024-07-14 03:39:22.092543] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.634 03:39:23 -- accel/accel.sh@18 -- # out=' 00:07:04.634 SPDK Configuration: 00:07:04.634 Core mask: 0x1 00:07:04.634 00:07:04.634 Accel Perf Configuration: 00:07:04.634 Workload Type: dualcast 00:07:04.634 Transfer size: 4096 bytes 00:07:04.634 Vector count 1 00:07:04.634 Module: software 00:07:04.634 Queue depth: 32 00:07:04.634 Allocate depth: 32 00:07:04.634 # threads/core: 1 00:07:04.634 Run time: 1 seconds 00:07:04.634 Verify: Yes 00:07:04.634 00:07:04.634 Running for 1 seconds... 00:07:04.634 00:07:04.634 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:04.634 ------------------------------------------------------------------------------------ 00:07:04.634 0,0 295936/s 1156 MiB/s 0 0 00:07:04.634 ==================================================================================== 00:07:04.634 Total 295936/s 1156 MiB/s 0 0' 00:07:04.634 03:39:23 -- accel/accel.sh@20 -- # IFS=: 00:07:04.634 03:39:23 -- accel/accel.sh@20 -- # read -r var val 00:07:04.634 03:39:23 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:07:04.634 03:39:23 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:07:04.634 03:39:23 -- accel/accel.sh@12 -- # build_accel_config 00:07:04.634 03:39:23 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:04.634 03:39:23 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:04.634 03:39:23 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:04.634 03:39:23 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:04.634 03:39:23 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:04.634 03:39:23 -- accel/accel.sh@41 -- # local IFS=, 00:07:04.634 03:39:23 -- accel/accel.sh@42 -- # jq -r . 00:07:04.634 [2024-07-14 03:39:23.336317] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:07:04.635 [2024-07-14 03:39:23.336396] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2270903 ] 00:07:04.635 EAL: No free 2048 kB hugepages reported on node 1 00:07:04.635 [2024-07-14 03:39:23.398935] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.635 [2024-07-14 03:39:23.488360] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.635 03:39:23 -- accel/accel.sh@21 -- # val= 00:07:04.635 03:39:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.635 03:39:23 -- accel/accel.sh@20 -- # IFS=: 00:07:04.635 03:39:23 -- accel/accel.sh@20 -- # read -r var val 00:07:04.635 03:39:23 -- accel/accel.sh@21 -- # val= 00:07:04.635 03:39:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.635 03:39:23 -- accel/accel.sh@20 -- # IFS=: 00:07:04.635 03:39:23 -- accel/accel.sh@20 -- # read -r var val 00:07:04.635 03:39:23 -- accel/accel.sh@21 -- # val=0x1 00:07:04.635 03:39:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.635 03:39:23 -- accel/accel.sh@20 -- # IFS=: 00:07:04.635 03:39:23 -- accel/accel.sh@20 -- # read -r var val 00:07:04.635 03:39:23 -- accel/accel.sh@21 -- # val= 00:07:04.635 03:39:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.635 03:39:23 -- accel/accel.sh@20 -- # IFS=: 00:07:04.635 03:39:23 -- accel/accel.sh@20 -- # read -r var val 00:07:04.635 03:39:23 -- accel/accel.sh@21 -- # val= 00:07:04.635 03:39:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.635 03:39:23 -- accel/accel.sh@20 -- # IFS=: 00:07:04.635 03:39:23 -- accel/accel.sh@20 -- # read -r var val 00:07:04.635 03:39:23 -- accel/accel.sh@21 -- # val=dualcast 00:07:04.635 03:39:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.635 03:39:23 -- accel/accel.sh@24 -- # accel_opc=dualcast 00:07:04.635 03:39:23 -- accel/accel.sh@20 -- # IFS=: 00:07:04.635 03:39:23 -- accel/accel.sh@20 -- # read -r var val 00:07:04.635 03:39:23 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:04.635 03:39:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.635 03:39:23 -- accel/accel.sh@20 -- # IFS=: 00:07:04.635 03:39:23 -- accel/accel.sh@20 -- # read -r var val 00:07:04.635 03:39:23 -- accel/accel.sh@21 -- # val= 00:07:04.635 03:39:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.635 03:39:23 -- accel/accel.sh@20 -- # IFS=: 00:07:04.635 03:39:23 -- accel/accel.sh@20 -- # read -r var val 00:07:04.635 03:39:23 -- accel/accel.sh@21 -- # val=software 00:07:04.635 03:39:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.635 03:39:23 -- accel/accel.sh@23 -- # accel_module=software 00:07:04.635 03:39:23 -- accel/accel.sh@20 -- # IFS=: 00:07:04.635 03:39:23 -- accel/accel.sh@20 -- # read -r var val 00:07:04.635 03:39:23 -- accel/accel.sh@21 -- # val=32 00:07:04.635 03:39:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.635 03:39:23 -- accel/accel.sh@20 -- # IFS=: 00:07:04.635 03:39:23 -- accel/accel.sh@20 -- # read -r var val 00:07:04.635 03:39:23 -- accel/accel.sh@21 -- # val=32 00:07:04.635 03:39:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.635 03:39:23 -- accel/accel.sh@20 -- # IFS=: 00:07:04.635 03:39:23 -- accel/accel.sh@20 -- # read -r var val 00:07:04.635 03:39:23 -- accel/accel.sh@21 -- # val=1 00:07:04.635 03:39:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.635 03:39:23 -- accel/accel.sh@20 -- # IFS=: 00:07:04.635 03:39:23 -- accel/accel.sh@20 -- # read -r var val 00:07:04.635 03:39:23 
-- accel/accel.sh@21 -- # val='1 seconds' 00:07:04.635 03:39:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.635 03:39:23 -- accel/accel.sh@20 -- # IFS=: 00:07:04.635 03:39:23 -- accel/accel.sh@20 -- # read -r var val 00:07:04.635 03:39:23 -- accel/accel.sh@21 -- # val=Yes 00:07:04.635 03:39:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.635 03:39:23 -- accel/accel.sh@20 -- # IFS=: 00:07:04.635 03:39:23 -- accel/accel.sh@20 -- # read -r var val 00:07:04.635 03:39:23 -- accel/accel.sh@21 -- # val= 00:07:04.635 03:39:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.635 03:39:23 -- accel/accel.sh@20 -- # IFS=: 00:07:04.635 03:39:23 -- accel/accel.sh@20 -- # read -r var val 00:07:04.635 03:39:23 -- accel/accel.sh@21 -- # val= 00:07:04.635 03:39:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.635 03:39:23 -- accel/accel.sh@20 -- # IFS=: 00:07:04.635 03:39:23 -- accel/accel.sh@20 -- # read -r var val 00:07:06.013 03:39:24 -- accel/accel.sh@21 -- # val= 00:07:06.013 03:39:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.013 03:39:24 -- accel/accel.sh@20 -- # IFS=: 00:07:06.013 03:39:24 -- accel/accel.sh@20 -- # read -r var val 00:07:06.013 03:39:24 -- accel/accel.sh@21 -- # val= 00:07:06.013 03:39:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.013 03:39:24 -- accel/accel.sh@20 -- # IFS=: 00:07:06.013 03:39:24 -- accel/accel.sh@20 -- # read -r var val 00:07:06.013 03:39:24 -- accel/accel.sh@21 -- # val= 00:07:06.013 03:39:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.013 03:39:24 -- accel/accel.sh@20 -- # IFS=: 00:07:06.013 03:39:24 -- accel/accel.sh@20 -- # read -r var val 00:07:06.013 03:39:24 -- accel/accel.sh@21 -- # val= 00:07:06.013 03:39:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.013 03:39:24 -- accel/accel.sh@20 -- # IFS=: 00:07:06.013 03:39:24 -- accel/accel.sh@20 -- # read -r var val 00:07:06.013 03:39:24 -- accel/accel.sh@21 -- # val= 00:07:06.013 03:39:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.013 03:39:24 -- accel/accel.sh@20 -- # IFS=: 00:07:06.013 03:39:24 -- accel/accel.sh@20 -- # read -r var val 00:07:06.013 03:39:24 -- accel/accel.sh@21 -- # val= 00:07:06.013 03:39:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.013 03:39:24 -- accel/accel.sh@20 -- # IFS=: 00:07:06.013 03:39:24 -- accel/accel.sh@20 -- # read -r var val 00:07:06.013 03:39:24 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:06.013 03:39:24 -- accel/accel.sh@28 -- # [[ -n dualcast ]] 00:07:06.013 03:39:24 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:06.013 00:07:06.013 real 0m2.783s 00:07:06.013 user 0m2.502s 00:07:06.013 sys 0m0.273s 00:07:06.013 03:39:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:06.013 03:39:24 -- common/autotest_common.sh@10 -- # set +x 00:07:06.013 ************************************ 00:07:06.013 END TEST accel_dualcast 00:07:06.013 ************************************ 00:07:06.013 03:39:24 -- accel/accel.sh@100 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:07:06.013 03:39:24 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:07:06.013 03:39:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:06.013 03:39:24 -- common/autotest_common.sh@10 -- # set +x 00:07:06.013 ************************************ 00:07:06.013 START TEST accel_compare 00:07:06.013 ************************************ 00:07:06.013 03:39:24 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w compare -y 00:07:06.013 03:39:24 -- accel/accel.sh@16 -- # local accel_opc 00:07:06.013 03:39:24 
-- accel/accel.sh@17 -- # local accel_module 00:07:06.013 03:39:24 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compare -y 00:07:06.013 03:39:24 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:07:06.013 03:39:24 -- accel/accel.sh@12 -- # build_accel_config 00:07:06.013 03:39:24 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:06.013 03:39:24 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:06.013 03:39:24 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:06.013 03:39:24 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:06.013 03:39:24 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:06.013 03:39:24 -- accel/accel.sh@41 -- # local IFS=, 00:07:06.013 03:39:24 -- accel/accel.sh@42 -- # jq -r . 00:07:06.013 [2024-07-14 03:39:24.746224] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:07:06.013 [2024-07-14 03:39:24.746299] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2271179 ] 00:07:06.013 EAL: No free 2048 kB hugepages reported on node 1 00:07:06.013 [2024-07-14 03:39:24.808226] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.013 [2024-07-14 03:39:24.898721] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.391 03:39:26 -- accel/accel.sh@18 -- # out=' 00:07:07.391 SPDK Configuration: 00:07:07.391 Core mask: 0x1 00:07:07.391 00:07:07.391 Accel Perf Configuration: 00:07:07.391 Workload Type: compare 00:07:07.391 Transfer size: 4096 bytes 00:07:07.391 Vector count 1 00:07:07.391 Module: software 00:07:07.391 Queue depth: 32 00:07:07.391 Allocate depth: 32 00:07:07.391 # threads/core: 1 00:07:07.391 Run time: 1 seconds 00:07:07.391 Verify: Yes 00:07:07.391 00:07:07.391 Running for 1 seconds... 00:07:07.391 00:07:07.391 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:07.391 ------------------------------------------------------------------------------------ 00:07:07.391 0,0 397408/s 1552 MiB/s 0 0 00:07:07.392 ==================================================================================== 00:07:07.392 Total 397408/s 1552 MiB/s 0 0' 00:07:07.392 03:39:26 -- accel/accel.sh@20 -- # IFS=: 00:07:07.392 03:39:26 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:07:07.392 03:39:26 -- accel/accel.sh@20 -- # read -r var val 00:07:07.392 03:39:26 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:07:07.392 03:39:26 -- accel/accel.sh@12 -- # build_accel_config 00:07:07.392 03:39:26 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:07.392 03:39:26 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:07.392 03:39:26 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:07.392 03:39:26 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:07.392 03:39:26 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:07.392 03:39:26 -- accel/accel.sh@41 -- # local IFS=, 00:07:07.392 03:39:26 -- accel/accel.sh@42 -- # jq -r . 00:07:07.392 [2024-07-14 03:39:26.149099] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:07:07.392 [2024-07-14 03:39:26.149190] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2271320 ] 00:07:07.392 EAL: No free 2048 kB hugepages reported on node 1 00:07:07.392 [2024-07-14 03:39:26.210040] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.392 [2024-07-14 03:39:26.300404] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.652 03:39:26 -- accel/accel.sh@21 -- # val= 00:07:07.652 03:39:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.652 03:39:26 -- accel/accel.sh@20 -- # IFS=: 00:07:07.652 03:39:26 -- accel/accel.sh@20 -- # read -r var val 00:07:07.652 03:39:26 -- accel/accel.sh@21 -- # val= 00:07:07.652 03:39:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.652 03:39:26 -- accel/accel.sh@20 -- # IFS=: 00:07:07.652 03:39:26 -- accel/accel.sh@20 -- # read -r var val 00:07:07.652 03:39:26 -- accel/accel.sh@21 -- # val=0x1 00:07:07.652 03:39:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.652 03:39:26 -- accel/accel.sh@20 -- # IFS=: 00:07:07.652 03:39:26 -- accel/accel.sh@20 -- # read -r var val 00:07:07.652 03:39:26 -- accel/accel.sh@21 -- # val= 00:07:07.652 03:39:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.652 03:39:26 -- accel/accel.sh@20 -- # IFS=: 00:07:07.652 03:39:26 -- accel/accel.sh@20 -- # read -r var val 00:07:07.652 03:39:26 -- accel/accel.sh@21 -- # val= 00:07:07.652 03:39:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.652 03:39:26 -- accel/accel.sh@20 -- # IFS=: 00:07:07.652 03:39:26 -- accel/accel.sh@20 -- # read -r var val 00:07:07.652 03:39:26 -- accel/accel.sh@21 -- # val=compare 00:07:07.652 03:39:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.652 03:39:26 -- accel/accel.sh@24 -- # accel_opc=compare 00:07:07.652 03:39:26 -- accel/accel.sh@20 -- # IFS=: 00:07:07.652 03:39:26 -- accel/accel.sh@20 -- # read -r var val 00:07:07.652 03:39:26 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:07.652 03:39:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.652 03:39:26 -- accel/accel.sh@20 -- # IFS=: 00:07:07.652 03:39:26 -- accel/accel.sh@20 -- # read -r var val 00:07:07.652 03:39:26 -- accel/accel.sh@21 -- # val= 00:07:07.652 03:39:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.652 03:39:26 -- accel/accel.sh@20 -- # IFS=: 00:07:07.652 03:39:26 -- accel/accel.sh@20 -- # read -r var val 00:07:07.652 03:39:26 -- accel/accel.sh@21 -- # val=software 00:07:07.652 03:39:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.652 03:39:26 -- accel/accel.sh@23 -- # accel_module=software 00:07:07.652 03:39:26 -- accel/accel.sh@20 -- # IFS=: 00:07:07.652 03:39:26 -- accel/accel.sh@20 -- # read -r var val 00:07:07.652 03:39:26 -- accel/accel.sh@21 -- # val=32 00:07:07.652 03:39:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.652 03:39:26 -- accel/accel.sh@20 -- # IFS=: 00:07:07.652 03:39:26 -- accel/accel.sh@20 -- # read -r var val 00:07:07.652 03:39:26 -- accel/accel.sh@21 -- # val=32 00:07:07.652 03:39:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.652 03:39:26 -- accel/accel.sh@20 -- # IFS=: 00:07:07.652 03:39:26 -- accel/accel.sh@20 -- # read -r var val 00:07:07.652 03:39:26 -- accel/accel.sh@21 -- # val=1 00:07:07.652 03:39:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.652 03:39:26 -- accel/accel.sh@20 -- # IFS=: 00:07:07.652 03:39:26 -- accel/accel.sh@20 -- # read -r var val 00:07:07.652 03:39:26 -- 
accel/accel.sh@21 -- # val='1 seconds' 00:07:07.652 03:39:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.652 03:39:26 -- accel/accel.sh@20 -- # IFS=: 00:07:07.652 03:39:26 -- accel/accel.sh@20 -- # read -r var val 00:07:07.652 03:39:26 -- accel/accel.sh@21 -- # val=Yes 00:07:07.652 03:39:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.652 03:39:26 -- accel/accel.sh@20 -- # IFS=: 00:07:07.652 03:39:26 -- accel/accel.sh@20 -- # read -r var val 00:07:07.652 03:39:26 -- accel/accel.sh@21 -- # val= 00:07:07.652 03:39:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.652 03:39:26 -- accel/accel.sh@20 -- # IFS=: 00:07:07.652 03:39:26 -- accel/accel.sh@20 -- # read -r var val 00:07:07.652 03:39:26 -- accel/accel.sh@21 -- # val= 00:07:07.652 03:39:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.652 03:39:26 -- accel/accel.sh@20 -- # IFS=: 00:07:07.652 03:39:26 -- accel/accel.sh@20 -- # read -r var val 00:07:09.029 03:39:27 -- accel/accel.sh@21 -- # val= 00:07:09.029 03:39:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.029 03:39:27 -- accel/accel.sh@20 -- # IFS=: 00:07:09.029 03:39:27 -- accel/accel.sh@20 -- # read -r var val 00:07:09.029 03:39:27 -- accel/accel.sh@21 -- # val= 00:07:09.029 03:39:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.029 03:39:27 -- accel/accel.sh@20 -- # IFS=: 00:07:09.029 03:39:27 -- accel/accel.sh@20 -- # read -r var val 00:07:09.029 03:39:27 -- accel/accel.sh@21 -- # val= 00:07:09.029 03:39:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.029 03:39:27 -- accel/accel.sh@20 -- # IFS=: 00:07:09.029 03:39:27 -- accel/accel.sh@20 -- # read -r var val 00:07:09.029 03:39:27 -- accel/accel.sh@21 -- # val= 00:07:09.029 03:39:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.029 03:39:27 -- accel/accel.sh@20 -- # IFS=: 00:07:09.029 03:39:27 -- accel/accel.sh@20 -- # read -r var val 00:07:09.029 03:39:27 -- accel/accel.sh@21 -- # val= 00:07:09.029 03:39:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.029 03:39:27 -- accel/accel.sh@20 -- # IFS=: 00:07:09.029 03:39:27 -- accel/accel.sh@20 -- # read -r var val 00:07:09.029 03:39:27 -- accel/accel.sh@21 -- # val= 00:07:09.029 03:39:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.029 03:39:27 -- accel/accel.sh@20 -- # IFS=: 00:07:09.029 03:39:27 -- accel/accel.sh@20 -- # read -r var val 00:07:09.029 03:39:27 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:09.029 03:39:27 -- accel/accel.sh@28 -- # [[ -n compare ]] 00:07:09.029 03:39:27 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:09.029 00:07:09.029 real 0m2.807s 00:07:09.029 user 0m2.522s 00:07:09.029 sys 0m0.277s 00:07:09.029 03:39:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:09.030 03:39:27 -- common/autotest_common.sh@10 -- # set +x 00:07:09.030 ************************************ 00:07:09.030 END TEST accel_compare 00:07:09.030 ************************************ 00:07:09.030 03:39:27 -- accel/accel.sh@101 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:07:09.030 03:39:27 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:07:09.030 03:39:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:09.030 03:39:27 -- common/autotest_common.sh@10 -- # set +x 00:07:09.030 ************************************ 00:07:09.030 START TEST accel_xor 00:07:09.030 ************************************ 00:07:09.030 03:39:27 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w xor -y 00:07:09.030 03:39:27 -- accel/accel.sh@16 -- # local accel_opc 00:07:09.030 03:39:27 -- accel/accel.sh@17 
-- # local accel_module 00:07:09.030 03:39:27 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y 00:07:09.030 03:39:27 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:07:09.030 03:39:27 -- accel/accel.sh@12 -- # build_accel_config 00:07:09.030 03:39:27 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:09.030 03:39:27 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:09.030 03:39:27 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:09.030 03:39:27 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:09.030 03:39:27 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:09.030 03:39:27 -- accel/accel.sh@41 -- # local IFS=, 00:07:09.030 03:39:27 -- accel/accel.sh@42 -- # jq -r . 00:07:09.030 [2024-07-14 03:39:27.575717] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:07:09.030 [2024-07-14 03:39:27.575804] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2271479 ] 00:07:09.030 EAL: No free 2048 kB hugepages reported on node 1 00:07:09.030 [2024-07-14 03:39:27.637506] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.030 [2024-07-14 03:39:27.728362] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.407 03:39:28 -- accel/accel.sh@18 -- # out=' 00:07:10.407 SPDK Configuration: 00:07:10.407 Core mask: 0x1 00:07:10.407 00:07:10.407 Accel Perf Configuration: 00:07:10.407 Workload Type: xor 00:07:10.407 Source buffers: 2 00:07:10.407 Transfer size: 4096 bytes 00:07:10.407 Vector count 1 00:07:10.407 Module: software 00:07:10.407 Queue depth: 32 00:07:10.407 Allocate depth: 32 00:07:10.407 # threads/core: 1 00:07:10.407 Run time: 1 seconds 00:07:10.407 Verify: Yes 00:07:10.407 00:07:10.407 Running for 1 seconds... 00:07:10.407 00:07:10.407 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:10.407 ------------------------------------------------------------------------------------ 00:07:10.407 0,0 192960/s 753 MiB/s 0 0 00:07:10.407 ==================================================================================== 00:07:10.407 Total 192960/s 753 MiB/s 0 0' 00:07:10.407 03:39:28 -- accel/accel.sh@20 -- # IFS=: 00:07:10.407 03:39:28 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:07:10.407 03:39:28 -- accel/accel.sh@20 -- # read -r var val 00:07:10.407 03:39:28 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:07:10.407 03:39:28 -- accel/accel.sh@12 -- # build_accel_config 00:07:10.407 03:39:28 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:10.407 03:39:28 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:10.407 03:39:28 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:10.407 03:39:28 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:10.407 03:39:28 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:10.407 03:39:28 -- accel/accel.sh@41 -- # local IFS=, 00:07:10.407 03:39:28 -- accel/accel.sh@42 -- # jq -r . 00:07:10.407 [2024-07-14 03:39:28.973503] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:07:10.407 [2024-07-14 03:39:28.973584] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2271621 ] 00:07:10.407 EAL: No free 2048 kB hugepages reported on node 1 00:07:10.407 [2024-07-14 03:39:29.034807] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.407 [2024-07-14 03:39:29.127802] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.407 03:39:29 -- accel/accel.sh@21 -- # val= 00:07:10.407 03:39:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.407 03:39:29 -- accel/accel.sh@20 -- # IFS=: 00:07:10.407 03:39:29 -- accel/accel.sh@20 -- # read -r var val 00:07:10.407 03:39:29 -- accel/accel.sh@21 -- # val= 00:07:10.407 03:39:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.407 03:39:29 -- accel/accel.sh@20 -- # IFS=: 00:07:10.407 03:39:29 -- accel/accel.sh@20 -- # read -r var val 00:07:10.407 03:39:29 -- accel/accel.sh@21 -- # val=0x1 00:07:10.407 03:39:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.407 03:39:29 -- accel/accel.sh@20 -- # IFS=: 00:07:10.407 03:39:29 -- accel/accel.sh@20 -- # read -r var val 00:07:10.407 03:39:29 -- accel/accel.sh@21 -- # val= 00:07:10.407 03:39:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.407 03:39:29 -- accel/accel.sh@20 -- # IFS=: 00:07:10.407 03:39:29 -- accel/accel.sh@20 -- # read -r var val 00:07:10.407 03:39:29 -- accel/accel.sh@21 -- # val= 00:07:10.407 03:39:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.407 03:39:29 -- accel/accel.sh@20 -- # IFS=: 00:07:10.407 03:39:29 -- accel/accel.sh@20 -- # read -r var val 00:07:10.407 03:39:29 -- accel/accel.sh@21 -- # val=xor 00:07:10.407 03:39:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.407 03:39:29 -- accel/accel.sh@24 -- # accel_opc=xor 00:07:10.407 03:39:29 -- accel/accel.sh@20 -- # IFS=: 00:07:10.407 03:39:29 -- accel/accel.sh@20 -- # read -r var val 00:07:10.407 03:39:29 -- accel/accel.sh@21 -- # val=2 00:07:10.407 03:39:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.407 03:39:29 -- accel/accel.sh@20 -- # IFS=: 00:07:10.407 03:39:29 -- accel/accel.sh@20 -- # read -r var val 00:07:10.407 03:39:29 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:10.407 03:39:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.407 03:39:29 -- accel/accel.sh@20 -- # IFS=: 00:07:10.407 03:39:29 -- accel/accel.sh@20 -- # read -r var val 00:07:10.407 03:39:29 -- accel/accel.sh@21 -- # val= 00:07:10.407 03:39:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.407 03:39:29 -- accel/accel.sh@20 -- # IFS=: 00:07:10.407 03:39:29 -- accel/accel.sh@20 -- # read -r var val 00:07:10.407 03:39:29 -- accel/accel.sh@21 -- # val=software 00:07:10.407 03:39:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.407 03:39:29 -- accel/accel.sh@23 -- # accel_module=software 00:07:10.407 03:39:29 -- accel/accel.sh@20 -- # IFS=: 00:07:10.407 03:39:29 -- accel/accel.sh@20 -- # read -r var val 00:07:10.407 03:39:29 -- accel/accel.sh@21 -- # val=32 00:07:10.407 03:39:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.407 03:39:29 -- accel/accel.sh@20 -- # IFS=: 00:07:10.407 03:39:29 -- accel/accel.sh@20 -- # read -r var val 00:07:10.407 03:39:29 -- accel/accel.sh@21 -- # val=32 00:07:10.407 03:39:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.407 03:39:29 -- accel/accel.sh@20 -- # IFS=: 00:07:10.407 03:39:29 -- accel/accel.sh@20 -- # read -r var val 00:07:10.407 03:39:29 -- 
accel/accel.sh@21 -- # val=1 00:07:10.407 03:39:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.407 03:39:29 -- accel/accel.sh@20 -- # IFS=: 00:07:10.407 03:39:29 -- accel/accel.sh@20 -- # read -r var val 00:07:10.407 03:39:29 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:10.407 03:39:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.407 03:39:29 -- accel/accel.sh@20 -- # IFS=: 00:07:10.407 03:39:29 -- accel/accel.sh@20 -- # read -r var val 00:07:10.407 03:39:29 -- accel/accel.sh@21 -- # val=Yes 00:07:10.407 03:39:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.407 03:39:29 -- accel/accel.sh@20 -- # IFS=: 00:07:10.407 03:39:29 -- accel/accel.sh@20 -- # read -r var val 00:07:10.407 03:39:29 -- accel/accel.sh@21 -- # val= 00:07:10.407 03:39:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.407 03:39:29 -- accel/accel.sh@20 -- # IFS=: 00:07:10.407 03:39:29 -- accel/accel.sh@20 -- # read -r var val 00:07:10.407 03:39:29 -- accel/accel.sh@21 -- # val= 00:07:10.407 03:39:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.407 03:39:29 -- accel/accel.sh@20 -- # IFS=: 00:07:10.407 03:39:29 -- accel/accel.sh@20 -- # read -r var val 00:07:11.787 03:39:30 -- accel/accel.sh@21 -- # val= 00:07:11.787 03:39:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.787 03:39:30 -- accel/accel.sh@20 -- # IFS=: 00:07:11.787 03:39:30 -- accel/accel.sh@20 -- # read -r var val 00:07:11.787 03:39:30 -- accel/accel.sh@21 -- # val= 00:07:11.787 03:39:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.787 03:39:30 -- accel/accel.sh@20 -- # IFS=: 00:07:11.787 03:39:30 -- accel/accel.sh@20 -- # read -r var val 00:07:11.787 03:39:30 -- accel/accel.sh@21 -- # val= 00:07:11.787 03:39:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.787 03:39:30 -- accel/accel.sh@20 -- # IFS=: 00:07:11.787 03:39:30 -- accel/accel.sh@20 -- # read -r var val 00:07:11.787 03:39:30 -- accel/accel.sh@21 -- # val= 00:07:11.787 03:39:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.787 03:39:30 -- accel/accel.sh@20 -- # IFS=: 00:07:11.787 03:39:30 -- accel/accel.sh@20 -- # read -r var val 00:07:11.787 03:39:30 -- accel/accel.sh@21 -- # val= 00:07:11.787 03:39:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.787 03:39:30 -- accel/accel.sh@20 -- # IFS=: 00:07:11.787 03:39:30 -- accel/accel.sh@20 -- # read -r var val 00:07:11.787 03:39:30 -- accel/accel.sh@21 -- # val= 00:07:11.787 03:39:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.787 03:39:30 -- accel/accel.sh@20 -- # IFS=: 00:07:11.787 03:39:30 -- accel/accel.sh@20 -- # read -r var val 00:07:11.787 03:39:30 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:11.787 03:39:30 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:07:11.787 03:39:30 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:11.787 00:07:11.787 real 0m2.796s 00:07:11.787 user 0m2.508s 00:07:11.787 sys 0m0.279s 00:07:11.787 03:39:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:11.787 03:39:30 -- common/autotest_common.sh@10 -- # set +x 00:07:11.787 ************************************ 00:07:11.787 END TEST accel_xor 00:07:11.787 ************************************ 00:07:11.787 03:39:30 -- accel/accel.sh@102 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:07:11.787 03:39:30 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:07:11.787 03:39:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:11.787 03:39:30 -- common/autotest_common.sh@10 -- # set +x 00:07:11.787 ************************************ 00:07:11.787 START TEST accel_xor 
00:07:11.787 ************************************ 00:07:11.787 03:39:30 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w xor -y -x 3 00:07:11.787 03:39:30 -- accel/accel.sh@16 -- # local accel_opc 00:07:11.787 03:39:30 -- accel/accel.sh@17 -- # local accel_module 00:07:11.787 03:39:30 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y -x 3 00:07:11.787 03:39:30 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:07:11.787 03:39:30 -- accel/accel.sh@12 -- # build_accel_config 00:07:11.787 03:39:30 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:11.787 03:39:30 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:11.787 03:39:30 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:11.787 03:39:30 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:11.787 03:39:30 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:11.787 03:39:30 -- accel/accel.sh@41 -- # local IFS=, 00:07:11.787 03:39:30 -- accel/accel.sh@42 -- # jq -r . 00:07:11.787 [2024-07-14 03:39:30.393020] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:07:11.787 [2024-07-14 03:39:30.393092] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2271906 ] 00:07:11.787 EAL: No free 2048 kB hugepages reported on node 1 00:07:11.787 [2024-07-14 03:39:30.457840] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.787 [2024-07-14 03:39:30.548303] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.169 03:39:31 -- accel/accel.sh@18 -- # out=' 00:07:13.169 SPDK Configuration: 00:07:13.169 Core mask: 0x1 00:07:13.169 00:07:13.169 Accel Perf Configuration: 00:07:13.169 Workload Type: xor 00:07:13.169 Source buffers: 3 00:07:13.169 Transfer size: 4096 bytes 00:07:13.169 Vector count 1 00:07:13.169 Module: software 00:07:13.169 Queue depth: 32 00:07:13.169 Allocate depth: 32 00:07:13.169 # threads/core: 1 00:07:13.169 Run time: 1 seconds 00:07:13.169 Verify: Yes 00:07:13.169 00:07:13.169 Running for 1 seconds... 00:07:13.169 00:07:13.169 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:13.169 ------------------------------------------------------------------------------------ 00:07:13.169 0,0 184768/s 721 MiB/s 0 0 00:07:13.169 ==================================================================================== 00:07:13.169 Total 184768/s 721 MiB/s 0 0' 00:07:13.169 03:39:31 -- accel/accel.sh@20 -- # IFS=: 00:07:13.169 03:39:31 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:07:13.169 03:39:31 -- accel/accel.sh@20 -- # read -r var val 00:07:13.169 03:39:31 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:07:13.169 03:39:31 -- accel/accel.sh@12 -- # build_accel_config 00:07:13.169 03:39:31 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:13.169 03:39:31 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:13.169 03:39:31 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:13.169 03:39:31 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:13.169 03:39:31 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:13.169 03:39:31 -- accel/accel.sh@41 -- # local IFS=, 00:07:13.169 03:39:31 -- accel/accel.sh@42 -- # jq -r . 00:07:13.169 [2024-07-14 03:39:31.800806] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:07:13.169 [2024-07-14 03:39:31.800956] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2272047 ] 00:07:13.169 EAL: No free 2048 kB hugepages reported on node 1 00:07:13.169 [2024-07-14 03:39:31.862313] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.169 [2024-07-14 03:39:31.952371] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.169 03:39:32 -- accel/accel.sh@21 -- # val= 00:07:13.169 03:39:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.169 03:39:32 -- accel/accel.sh@20 -- # IFS=: 00:07:13.169 03:39:32 -- accel/accel.sh@20 -- # read -r var val 00:07:13.169 03:39:32 -- accel/accel.sh@21 -- # val= 00:07:13.169 03:39:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.169 03:39:32 -- accel/accel.sh@20 -- # IFS=: 00:07:13.169 03:39:32 -- accel/accel.sh@20 -- # read -r var val 00:07:13.169 03:39:32 -- accel/accel.sh@21 -- # val=0x1 00:07:13.169 03:39:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.169 03:39:32 -- accel/accel.sh@20 -- # IFS=: 00:07:13.169 03:39:32 -- accel/accel.sh@20 -- # read -r var val 00:07:13.169 03:39:32 -- accel/accel.sh@21 -- # val= 00:07:13.169 03:39:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.169 03:39:32 -- accel/accel.sh@20 -- # IFS=: 00:07:13.169 03:39:32 -- accel/accel.sh@20 -- # read -r var val 00:07:13.169 03:39:32 -- accel/accel.sh@21 -- # val= 00:07:13.169 03:39:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.169 03:39:32 -- accel/accel.sh@20 -- # IFS=: 00:07:13.169 03:39:32 -- accel/accel.sh@20 -- # read -r var val 00:07:13.169 03:39:32 -- accel/accel.sh@21 -- # val=xor 00:07:13.169 03:39:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.169 03:39:32 -- accel/accel.sh@24 -- # accel_opc=xor 00:07:13.169 03:39:32 -- accel/accel.sh@20 -- # IFS=: 00:07:13.169 03:39:32 -- accel/accel.sh@20 -- # read -r var val 00:07:13.169 03:39:32 -- accel/accel.sh@21 -- # val=3 00:07:13.169 03:39:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.169 03:39:32 -- accel/accel.sh@20 -- # IFS=: 00:07:13.169 03:39:32 -- accel/accel.sh@20 -- # read -r var val 00:07:13.169 03:39:32 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:13.169 03:39:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.169 03:39:32 -- accel/accel.sh@20 -- # IFS=: 00:07:13.169 03:39:32 -- accel/accel.sh@20 -- # read -r var val 00:07:13.169 03:39:32 -- accel/accel.sh@21 -- # val= 00:07:13.169 03:39:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.169 03:39:32 -- accel/accel.sh@20 -- # IFS=: 00:07:13.169 03:39:32 -- accel/accel.sh@20 -- # read -r var val 00:07:13.169 03:39:32 -- accel/accel.sh@21 -- # val=software 00:07:13.169 03:39:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.169 03:39:32 -- accel/accel.sh@23 -- # accel_module=software 00:07:13.169 03:39:32 -- accel/accel.sh@20 -- # IFS=: 00:07:13.169 03:39:32 -- accel/accel.sh@20 -- # read -r var val 00:07:13.169 03:39:32 -- accel/accel.sh@21 -- # val=32 00:07:13.169 03:39:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.169 03:39:32 -- accel/accel.sh@20 -- # IFS=: 00:07:13.169 03:39:32 -- accel/accel.sh@20 -- # read -r var val 00:07:13.169 03:39:32 -- accel/accel.sh@21 -- # val=32 00:07:13.169 03:39:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.169 03:39:32 -- accel/accel.sh@20 -- # IFS=: 00:07:13.169 03:39:32 -- accel/accel.sh@20 -- # read -r var val 00:07:13.169 03:39:32 -- 
accel/accel.sh@21 -- # val=1 00:07:13.169 03:39:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.169 03:39:32 -- accel/accel.sh@20 -- # IFS=: 00:07:13.169 03:39:32 -- accel/accel.sh@20 -- # read -r var val 00:07:13.169 03:39:32 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:13.169 03:39:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.169 03:39:32 -- accel/accel.sh@20 -- # IFS=: 00:07:13.169 03:39:32 -- accel/accel.sh@20 -- # read -r var val 00:07:13.169 03:39:32 -- accel/accel.sh@21 -- # val=Yes 00:07:13.169 03:39:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.169 03:39:32 -- accel/accel.sh@20 -- # IFS=: 00:07:13.169 03:39:32 -- accel/accel.sh@20 -- # read -r var val 00:07:13.169 03:39:32 -- accel/accel.sh@21 -- # val= 00:07:13.169 03:39:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.169 03:39:32 -- accel/accel.sh@20 -- # IFS=: 00:07:13.169 03:39:32 -- accel/accel.sh@20 -- # read -r var val 00:07:13.169 03:39:32 -- accel/accel.sh@21 -- # val= 00:07:13.169 03:39:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.169 03:39:32 -- accel/accel.sh@20 -- # IFS=: 00:07:13.169 03:39:32 -- accel/accel.sh@20 -- # read -r var val 00:07:14.548 03:39:33 -- accel/accel.sh@21 -- # val= 00:07:14.548 03:39:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.548 03:39:33 -- accel/accel.sh@20 -- # IFS=: 00:07:14.548 03:39:33 -- accel/accel.sh@20 -- # read -r var val 00:07:14.548 03:39:33 -- accel/accel.sh@21 -- # val= 00:07:14.548 03:39:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.548 03:39:33 -- accel/accel.sh@20 -- # IFS=: 00:07:14.548 03:39:33 -- accel/accel.sh@20 -- # read -r var val 00:07:14.548 03:39:33 -- accel/accel.sh@21 -- # val= 00:07:14.548 03:39:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.548 03:39:33 -- accel/accel.sh@20 -- # IFS=: 00:07:14.548 03:39:33 -- accel/accel.sh@20 -- # read -r var val 00:07:14.548 03:39:33 -- accel/accel.sh@21 -- # val= 00:07:14.548 03:39:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.548 03:39:33 -- accel/accel.sh@20 -- # IFS=: 00:07:14.548 03:39:33 -- accel/accel.sh@20 -- # read -r var val 00:07:14.548 03:39:33 -- accel/accel.sh@21 -- # val= 00:07:14.548 03:39:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.548 03:39:33 -- accel/accel.sh@20 -- # IFS=: 00:07:14.548 03:39:33 -- accel/accel.sh@20 -- # read -r var val 00:07:14.548 03:39:33 -- accel/accel.sh@21 -- # val= 00:07:14.548 03:39:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.548 03:39:33 -- accel/accel.sh@20 -- # IFS=: 00:07:14.548 03:39:33 -- accel/accel.sh@20 -- # read -r var val 00:07:14.548 03:39:33 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:14.548 03:39:33 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:07:14.548 03:39:33 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:14.548 00:07:14.548 real 0m2.815s 00:07:14.548 user 0m2.515s 00:07:14.548 sys 0m0.292s 00:07:14.548 03:39:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:14.548 03:39:33 -- common/autotest_common.sh@10 -- # set +x 00:07:14.548 ************************************ 00:07:14.548 END TEST accel_xor 00:07:14.548 ************************************ 00:07:14.548 03:39:33 -- accel/accel.sh@103 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:07:14.548 03:39:33 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:07:14.548 03:39:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:14.548 03:39:33 -- common/autotest_common.sh@10 -- # set +x 00:07:14.548 ************************************ 00:07:14.548 START TEST 
accel_dif_verify 00:07:14.548 ************************************ 00:07:14.548 03:39:33 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_verify 00:07:14.548 03:39:33 -- accel/accel.sh@16 -- # local accel_opc 00:07:14.548 03:39:33 -- accel/accel.sh@17 -- # local accel_module 00:07:14.548 03:39:33 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_verify 00:07:14.548 03:39:33 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:07:14.548 03:39:33 -- accel/accel.sh@12 -- # build_accel_config 00:07:14.548 03:39:33 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:14.548 03:39:33 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:14.548 03:39:33 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:14.548 03:39:33 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:14.548 03:39:33 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:14.548 03:39:33 -- accel/accel.sh@41 -- # local IFS=, 00:07:14.548 03:39:33 -- accel/accel.sh@42 -- # jq -r . 00:07:14.548 [2024-07-14 03:39:33.238675] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:07:14.548 [2024-07-14 03:39:33.238764] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2272201 ] 00:07:14.548 EAL: No free 2048 kB hugepages reported on node 1 00:07:14.548 [2024-07-14 03:39:33.303105] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:14.548 [2024-07-14 03:39:33.391585] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.926 03:39:34 -- accel/accel.sh@18 -- # out=' 00:07:15.926 SPDK Configuration: 00:07:15.926 Core mask: 0x1 00:07:15.926 00:07:15.926 Accel Perf Configuration: 00:07:15.926 Workload Type: dif_verify 00:07:15.926 Vector size: 4096 bytes 00:07:15.926 Transfer size: 4096 bytes 00:07:15.926 Block size: 512 bytes 00:07:15.926 Metadata size: 8 bytes 00:07:15.926 Vector count 1 00:07:15.926 Module: software 00:07:15.926 Queue depth: 32 00:07:15.926 Allocate depth: 32 00:07:15.926 # threads/core: 1 00:07:15.926 Run time: 1 seconds 00:07:15.926 Verify: No 00:07:15.926 00:07:15.926 Running for 1 seconds... 00:07:15.926 00:07:15.926 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:15.926 ------------------------------------------------------------------------------------ 00:07:15.926 0,0 82048/s 325 MiB/s 0 0 00:07:15.926 ==================================================================================== 00:07:15.926 Total 82048/s 320 MiB/s 0 0' 00:07:15.926 03:39:34 -- accel/accel.sh@20 -- # IFS=: 00:07:15.926 03:39:34 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:07:15.926 03:39:34 -- accel/accel.sh@20 -- # read -r var val 00:07:15.926 03:39:34 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:07:15.926 03:39:34 -- accel/accel.sh@12 -- # build_accel_config 00:07:15.926 03:39:34 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:15.926 03:39:34 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:15.926 03:39:34 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:15.926 03:39:34 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:15.926 03:39:34 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:15.926 03:39:34 -- accel/accel.sh@41 -- # local IFS=, 00:07:15.926 03:39:34 -- accel/accel.sh@42 -- # jq -r . 
00:07:15.926 [2024-07-14 03:39:34.644702] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:07:15.926 [2024-07-14 03:39:34.644771] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2272347 ] 00:07:15.926 EAL: No free 2048 kB hugepages reported on node 1 00:07:15.926 [2024-07-14 03:39:34.707651] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:15.926 [2024-07-14 03:39:34.801546] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.926 03:39:34 -- accel/accel.sh@21 -- # val= 00:07:15.926 03:39:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.926 03:39:34 -- accel/accel.sh@20 -- # IFS=: 00:07:15.926 03:39:34 -- accel/accel.sh@20 -- # read -r var val 00:07:15.926 03:39:34 -- accel/accel.sh@21 -- # val= 00:07:15.926 03:39:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.926 03:39:34 -- accel/accel.sh@20 -- # IFS=: 00:07:15.926 03:39:34 -- accel/accel.sh@20 -- # read -r var val 00:07:15.926 03:39:34 -- accel/accel.sh@21 -- # val=0x1 00:07:15.926 03:39:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.926 03:39:34 -- accel/accel.sh@20 -- # IFS=: 00:07:15.926 03:39:34 -- accel/accel.sh@20 -- # read -r var val 00:07:15.926 03:39:34 -- accel/accel.sh@21 -- # val= 00:07:15.926 03:39:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.926 03:39:34 -- accel/accel.sh@20 -- # IFS=: 00:07:15.926 03:39:34 -- accel/accel.sh@20 -- # read -r var val 00:07:15.926 03:39:34 -- accel/accel.sh@21 -- # val= 00:07:15.926 03:39:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.926 03:39:34 -- accel/accel.sh@20 -- # IFS=: 00:07:15.926 03:39:34 -- accel/accel.sh@20 -- # read -r var val 00:07:15.926 03:39:34 -- accel/accel.sh@21 -- # val=dif_verify 00:07:15.926 03:39:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.926 03:39:34 -- accel/accel.sh@24 -- # accel_opc=dif_verify 00:07:15.926 03:39:34 -- accel/accel.sh@20 -- # IFS=: 00:07:15.926 03:39:34 -- accel/accel.sh@20 -- # read -r var val 00:07:15.926 03:39:34 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:15.926 03:39:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.926 03:39:34 -- accel/accel.sh@20 -- # IFS=: 00:07:15.926 03:39:34 -- accel/accel.sh@20 -- # read -r var val 00:07:15.926 03:39:34 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:15.926 03:39:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.926 03:39:34 -- accel/accel.sh@20 -- # IFS=: 00:07:15.926 03:39:34 -- accel/accel.sh@20 -- # read -r var val 00:07:15.926 03:39:34 -- accel/accel.sh@21 -- # val='512 bytes' 00:07:15.926 03:39:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.926 03:39:34 -- accel/accel.sh@20 -- # IFS=: 00:07:15.926 03:39:34 -- accel/accel.sh@20 -- # read -r var val 00:07:15.926 03:39:34 -- accel/accel.sh@21 -- # val='8 bytes' 00:07:15.926 03:39:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.926 03:39:34 -- accel/accel.sh@20 -- # IFS=: 00:07:15.926 03:39:34 -- accel/accel.sh@20 -- # read -r var val 00:07:15.926 03:39:34 -- accel/accel.sh@21 -- # val= 00:07:15.926 03:39:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.926 03:39:34 -- accel/accel.sh@20 -- # IFS=: 00:07:15.926 03:39:34 -- accel/accel.sh@20 -- # read -r var val 00:07:15.926 03:39:34 -- accel/accel.sh@21 -- # val=software 00:07:15.926 03:39:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.926 03:39:34 -- accel/accel.sh@23 -- # 
accel_module=software 00:07:16.185 03:39:34 -- accel/accel.sh@20 -- # IFS=: 00:07:16.185 03:39:34 -- accel/accel.sh@20 -- # read -r var val 00:07:16.185 03:39:34 -- accel/accel.sh@21 -- # val=32 00:07:16.185 03:39:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.185 03:39:34 -- accel/accel.sh@20 -- # IFS=: 00:07:16.185 03:39:34 -- accel/accel.sh@20 -- # read -r var val 00:07:16.185 03:39:34 -- accel/accel.sh@21 -- # val=32 00:07:16.185 03:39:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.185 03:39:34 -- accel/accel.sh@20 -- # IFS=: 00:07:16.185 03:39:34 -- accel/accel.sh@20 -- # read -r var val 00:07:16.185 03:39:34 -- accel/accel.sh@21 -- # val=1 00:07:16.185 03:39:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.185 03:39:34 -- accel/accel.sh@20 -- # IFS=: 00:07:16.185 03:39:34 -- accel/accel.sh@20 -- # read -r var val 00:07:16.185 03:39:34 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:16.185 03:39:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.185 03:39:34 -- accel/accel.sh@20 -- # IFS=: 00:07:16.185 03:39:34 -- accel/accel.sh@20 -- # read -r var val 00:07:16.185 03:39:34 -- accel/accel.sh@21 -- # val=No 00:07:16.185 03:39:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.185 03:39:34 -- accel/accel.sh@20 -- # IFS=: 00:07:16.185 03:39:34 -- accel/accel.sh@20 -- # read -r var val 00:07:16.185 03:39:34 -- accel/accel.sh@21 -- # val= 00:07:16.185 03:39:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.185 03:39:34 -- accel/accel.sh@20 -- # IFS=: 00:07:16.185 03:39:34 -- accel/accel.sh@20 -- # read -r var val 00:07:16.185 03:39:34 -- accel/accel.sh@21 -- # val= 00:07:16.185 03:39:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.185 03:39:34 -- accel/accel.sh@20 -- # IFS=: 00:07:16.185 03:39:34 -- accel/accel.sh@20 -- # read -r var val 00:07:17.124 03:39:36 -- accel/accel.sh@21 -- # val= 00:07:17.124 03:39:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.124 03:39:36 -- accel/accel.sh@20 -- # IFS=: 00:07:17.124 03:39:36 -- accel/accel.sh@20 -- # read -r var val 00:07:17.124 03:39:36 -- accel/accel.sh@21 -- # val= 00:07:17.124 03:39:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.124 03:39:36 -- accel/accel.sh@20 -- # IFS=: 00:07:17.124 03:39:36 -- accel/accel.sh@20 -- # read -r var val 00:07:17.124 03:39:36 -- accel/accel.sh@21 -- # val= 00:07:17.124 03:39:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.124 03:39:36 -- accel/accel.sh@20 -- # IFS=: 00:07:17.124 03:39:36 -- accel/accel.sh@20 -- # read -r var val 00:07:17.124 03:39:36 -- accel/accel.sh@21 -- # val= 00:07:17.124 03:39:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.124 03:39:36 -- accel/accel.sh@20 -- # IFS=: 00:07:17.124 03:39:36 -- accel/accel.sh@20 -- # read -r var val 00:07:17.124 03:39:36 -- accel/accel.sh@21 -- # val= 00:07:17.124 03:39:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.124 03:39:36 -- accel/accel.sh@20 -- # IFS=: 00:07:17.124 03:39:36 -- accel/accel.sh@20 -- # read -r var val 00:07:17.124 03:39:36 -- accel/accel.sh@21 -- # val= 00:07:17.124 03:39:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.124 03:39:36 -- accel/accel.sh@20 -- # IFS=: 00:07:17.124 03:39:36 -- accel/accel.sh@20 -- # read -r var val 00:07:17.124 03:39:36 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:17.124 03:39:36 -- accel/accel.sh@28 -- # [[ -n dif_verify ]] 00:07:17.124 03:39:36 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:17.124 00:07:17.124 real 0m2.802s 00:07:17.124 user 0m2.503s 00:07:17.124 sys 0m0.294s 00:07:17.124 03:39:36 -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:07:17.124 03:39:36 -- common/autotest_common.sh@10 -- # set +x 00:07:17.124 ************************************ 00:07:17.124 END TEST accel_dif_verify 00:07:17.124 ************************************ 00:07:17.124 03:39:36 -- accel/accel.sh@104 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:07:17.124 03:39:36 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:07:17.124 03:39:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:17.124 03:39:36 -- common/autotest_common.sh@10 -- # set +x 00:07:17.124 ************************************ 00:07:17.124 START TEST accel_dif_generate 00:07:17.124 ************************************ 00:07:17.124 03:39:36 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_generate 00:07:17.124 03:39:36 -- accel/accel.sh@16 -- # local accel_opc 00:07:17.124 03:39:36 -- accel/accel.sh@17 -- # local accel_module 00:07:17.124 03:39:36 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate 00:07:17.124 03:39:36 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:07:17.124 03:39:36 -- accel/accel.sh@12 -- # build_accel_config 00:07:17.124 03:39:36 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:17.124 03:39:36 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:17.124 03:39:36 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:17.124 03:39:36 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:17.124 03:39:36 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:17.124 03:39:36 -- accel/accel.sh@41 -- # local IFS=, 00:07:17.124 03:39:36 -- accel/accel.sh@42 -- # jq -r . 00:07:17.124 [2024-07-14 03:39:36.059816] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:07:17.124 [2024-07-14 03:39:36.059930] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2272629 ] 00:07:17.382 EAL: No free 2048 kB hugepages reported on node 1 00:07:17.382 [2024-07-14 03:39:36.122425] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:17.382 [2024-07-14 03:39:36.213157] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.785 03:39:37 -- accel/accel.sh@18 -- # out=' 00:07:18.785 SPDK Configuration: 00:07:18.785 Core mask: 0x1 00:07:18.785 00:07:18.785 Accel Perf Configuration: 00:07:18.785 Workload Type: dif_generate 00:07:18.785 Vector size: 4096 bytes 00:07:18.785 Transfer size: 4096 bytes 00:07:18.785 Block size: 512 bytes 00:07:18.785 Metadata size: 8 bytes 00:07:18.785 Vector count 1 00:07:18.785 Module: software 00:07:18.785 Queue depth: 32 00:07:18.785 Allocate depth: 32 00:07:18.785 # threads/core: 1 00:07:18.785 Run time: 1 seconds 00:07:18.785 Verify: No 00:07:18.785 00:07:18.785 Running for 1 seconds... 
00:07:18.785 00:07:18.785 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:18.785 ------------------------------------------------------------------------------------ 00:07:18.785 0,0 96192/s 381 MiB/s 0 0 00:07:18.785 ==================================================================================== 00:07:18.785 Total 96192/s 375 MiB/s 0 0' 00:07:18.785 03:39:37 -- accel/accel.sh@20 -- # IFS=: 00:07:18.785 03:39:37 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:07:18.785 03:39:37 -- accel/accel.sh@20 -- # read -r var val 00:07:18.785 03:39:37 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:07:18.785 03:39:37 -- accel/accel.sh@12 -- # build_accel_config 00:07:18.785 03:39:37 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:18.785 03:39:37 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:18.785 03:39:37 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:18.785 03:39:37 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:18.785 03:39:37 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:18.785 03:39:37 -- accel/accel.sh@41 -- # local IFS=, 00:07:18.785 03:39:37 -- accel/accel.sh@42 -- # jq -r . 00:07:18.785 [2024-07-14 03:39:37.463608] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:07:18.785 [2024-07-14 03:39:37.463678] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2272778 ] 00:07:18.785 EAL: No free 2048 kB hugepages reported on node 1 00:07:18.785 [2024-07-14 03:39:37.523685] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:18.785 [2024-07-14 03:39:37.614428] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.785 03:39:37 -- accel/accel.sh@21 -- # val= 00:07:18.785 03:39:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.785 03:39:37 -- accel/accel.sh@20 -- # IFS=: 00:07:18.785 03:39:37 -- accel/accel.sh@20 -- # read -r var val 00:07:18.785 03:39:37 -- accel/accel.sh@21 -- # val= 00:07:18.785 03:39:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.785 03:39:37 -- accel/accel.sh@20 -- # IFS=: 00:07:18.785 03:39:37 -- accel/accel.sh@20 -- # read -r var val 00:07:18.785 03:39:37 -- accel/accel.sh@21 -- # val=0x1 00:07:18.785 03:39:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.785 03:39:37 -- accel/accel.sh@20 -- # IFS=: 00:07:18.785 03:39:37 -- accel/accel.sh@20 -- # read -r var val 00:07:18.785 03:39:37 -- accel/accel.sh@21 -- # val= 00:07:18.785 03:39:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.785 03:39:37 -- accel/accel.sh@20 -- # IFS=: 00:07:18.785 03:39:37 -- accel/accel.sh@20 -- # read -r var val 00:07:18.785 03:39:37 -- accel/accel.sh@21 -- # val= 00:07:18.785 03:39:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.785 03:39:37 -- accel/accel.sh@20 -- # IFS=: 00:07:18.785 03:39:37 -- accel/accel.sh@20 -- # read -r var val 00:07:18.785 03:39:37 -- accel/accel.sh@21 -- # val=dif_generate 00:07:18.785 03:39:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.785 03:39:37 -- accel/accel.sh@24 -- # accel_opc=dif_generate 00:07:18.785 03:39:37 -- accel/accel.sh@20 -- # IFS=: 00:07:18.785 03:39:37 -- accel/accel.sh@20 -- # read -r var val 00:07:18.785 03:39:37 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:18.785 03:39:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.785 03:39:37 -- accel/accel.sh@20 -- # IFS=: 
00:07:18.785 03:39:37 -- accel/accel.sh@20 -- # read -r var val 00:07:18.785 03:39:37 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:18.785 03:39:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.785 03:39:37 -- accel/accel.sh@20 -- # IFS=: 00:07:18.785 03:39:37 -- accel/accel.sh@20 -- # read -r var val 00:07:18.785 03:39:37 -- accel/accel.sh@21 -- # val='512 bytes' 00:07:18.785 03:39:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.785 03:39:37 -- accel/accel.sh@20 -- # IFS=: 00:07:18.785 03:39:37 -- accel/accel.sh@20 -- # read -r var val 00:07:18.785 03:39:37 -- accel/accel.sh@21 -- # val='8 bytes' 00:07:18.785 03:39:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.785 03:39:37 -- accel/accel.sh@20 -- # IFS=: 00:07:18.785 03:39:37 -- accel/accel.sh@20 -- # read -r var val 00:07:18.785 03:39:37 -- accel/accel.sh@21 -- # val= 00:07:18.785 03:39:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.785 03:39:37 -- accel/accel.sh@20 -- # IFS=: 00:07:18.785 03:39:37 -- accel/accel.sh@20 -- # read -r var val 00:07:18.785 03:39:37 -- accel/accel.sh@21 -- # val=software 00:07:18.785 03:39:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.785 03:39:37 -- accel/accel.sh@23 -- # accel_module=software 00:07:18.785 03:39:37 -- accel/accel.sh@20 -- # IFS=: 00:07:18.785 03:39:37 -- accel/accel.sh@20 -- # read -r var val 00:07:18.785 03:39:37 -- accel/accel.sh@21 -- # val=32 00:07:18.785 03:39:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.785 03:39:37 -- accel/accel.sh@20 -- # IFS=: 00:07:18.785 03:39:37 -- accel/accel.sh@20 -- # read -r var val 00:07:18.785 03:39:37 -- accel/accel.sh@21 -- # val=32 00:07:18.785 03:39:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.785 03:39:37 -- accel/accel.sh@20 -- # IFS=: 00:07:18.785 03:39:37 -- accel/accel.sh@20 -- # read -r var val 00:07:18.785 03:39:37 -- accel/accel.sh@21 -- # val=1 00:07:18.785 03:39:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.785 03:39:37 -- accel/accel.sh@20 -- # IFS=: 00:07:18.785 03:39:37 -- accel/accel.sh@20 -- # read -r var val 00:07:18.785 03:39:37 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:18.785 03:39:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.785 03:39:37 -- accel/accel.sh@20 -- # IFS=: 00:07:18.785 03:39:37 -- accel/accel.sh@20 -- # read -r var val 00:07:18.785 03:39:37 -- accel/accel.sh@21 -- # val=No 00:07:18.785 03:39:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.785 03:39:37 -- accel/accel.sh@20 -- # IFS=: 00:07:18.785 03:39:37 -- accel/accel.sh@20 -- # read -r var val 00:07:18.785 03:39:37 -- accel/accel.sh@21 -- # val= 00:07:18.785 03:39:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.785 03:39:37 -- accel/accel.sh@20 -- # IFS=: 00:07:18.785 03:39:37 -- accel/accel.sh@20 -- # read -r var val 00:07:18.785 03:39:37 -- accel/accel.sh@21 -- # val= 00:07:18.785 03:39:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.785 03:39:37 -- accel/accel.sh@20 -- # IFS=: 00:07:18.785 03:39:37 -- accel/accel.sh@20 -- # read -r var val 00:07:20.167 03:39:38 -- accel/accel.sh@21 -- # val= 00:07:20.167 03:39:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.167 03:39:38 -- accel/accel.sh@20 -- # IFS=: 00:07:20.167 03:39:38 -- accel/accel.sh@20 -- # read -r var val 00:07:20.167 03:39:38 -- accel/accel.sh@21 -- # val= 00:07:20.167 03:39:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.167 03:39:38 -- accel/accel.sh@20 -- # IFS=: 00:07:20.167 03:39:38 -- accel/accel.sh@20 -- # read -r var val 00:07:20.167 03:39:38 -- accel/accel.sh@21 -- # val= 00:07:20.167 03:39:38 -- 
accel/accel.sh@22 -- # case "$var" in 00:07:20.167 03:39:38 -- accel/accel.sh@20 -- # IFS=: 00:07:20.167 03:39:38 -- accel/accel.sh@20 -- # read -r var val 00:07:20.167 03:39:38 -- accel/accel.sh@21 -- # val= 00:07:20.167 03:39:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.167 03:39:38 -- accel/accel.sh@20 -- # IFS=: 00:07:20.167 03:39:38 -- accel/accel.sh@20 -- # read -r var val 00:07:20.167 03:39:38 -- accel/accel.sh@21 -- # val= 00:07:20.167 03:39:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.167 03:39:38 -- accel/accel.sh@20 -- # IFS=: 00:07:20.167 03:39:38 -- accel/accel.sh@20 -- # read -r var val 00:07:20.167 03:39:38 -- accel/accel.sh@21 -- # val= 00:07:20.167 03:39:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.167 03:39:38 -- accel/accel.sh@20 -- # IFS=: 00:07:20.167 03:39:38 -- accel/accel.sh@20 -- # read -r var val 00:07:20.167 03:39:38 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:20.167 03:39:38 -- accel/accel.sh@28 -- # [[ -n dif_generate ]] 00:07:20.167 03:39:38 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:20.167 00:07:20.167 real 0m2.810s 00:07:20.167 user 0m2.525s 00:07:20.167 sys 0m0.279s 00:07:20.167 03:39:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:20.167 03:39:38 -- common/autotest_common.sh@10 -- # set +x 00:07:20.167 ************************************ 00:07:20.167 END TEST accel_dif_generate 00:07:20.167 ************************************ 00:07:20.167 03:39:38 -- accel/accel.sh@105 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:07:20.167 03:39:38 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:07:20.167 03:39:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:20.167 03:39:38 -- common/autotest_common.sh@10 -- # set +x 00:07:20.167 ************************************ 00:07:20.167 START TEST accel_dif_generate_copy 00:07:20.167 ************************************ 00:07:20.167 03:39:38 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_generate_copy 00:07:20.167 03:39:38 -- accel/accel.sh@16 -- # local accel_opc 00:07:20.167 03:39:38 -- accel/accel.sh@17 -- # local accel_module 00:07:20.167 03:39:38 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate_copy 00:07:20.167 03:39:38 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:07:20.167 03:39:38 -- accel/accel.sh@12 -- # build_accel_config 00:07:20.167 03:39:38 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:20.167 03:39:38 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:20.167 03:39:38 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:20.167 03:39:38 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:20.167 03:39:38 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:20.167 03:39:38 -- accel/accel.sh@41 -- # local IFS=, 00:07:20.167 03:39:38 -- accel/accel.sh@42 -- # jq -r . 00:07:20.167 [2024-07-14 03:39:38.898681] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
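As the xtrace for dif_verify and dif_generate shows, every accel_test case follows the same two-pass pattern: run accel_perf once and capture its report, run it again, then walk the report's "name: value" lines to confirm which opcode and module actually executed. A rough reconstruction of that parse loop (a sketch based on the traced accel.sh lines, not the verbatim source; the case patterns are assumptions taken from the report format):

    while IFS=: read -r var val; do
        val=${val# }                              # drop the space left after the ":" split
        case "$var" in
            *"Workload Type"*) accel_opc=$val ;;  # e.g. "dif_generate"
            *Module*) accel_module=$val ;;        # e.g. "software"
        esac
    done <<< "$out"
    [[ -n $accel_module ]]                        # a module was reported
    [[ -n $accel_opc ]]                           # the requested opcode ran
    [[ $accel_module == software ]]               # and it is the expected module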
00:07:20.167 [2024-07-14 03:39:38.898770] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2272931 ] 00:07:20.167 EAL: No free 2048 kB hugepages reported on node 1 00:07:20.167 [2024-07-14 03:39:38.962664] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:20.167 [2024-07-14 03:39:39.050976] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.547 03:39:40 -- accel/accel.sh@18 -- # out=' 00:07:21.547 SPDK Configuration: 00:07:21.547 Core mask: 0x1 00:07:21.547 00:07:21.547 Accel Perf Configuration: 00:07:21.547 Workload Type: dif_generate_copy 00:07:21.547 Vector size: 4096 bytes 00:07:21.547 Transfer size: 4096 bytes 00:07:21.547 Vector count 1 00:07:21.547 Module: software 00:07:21.547 Queue depth: 32 00:07:21.547 Allocate depth: 32 00:07:21.547 # threads/core: 1 00:07:21.547 Run time: 1 seconds 00:07:21.547 Verify: No 00:07:21.547 00:07:21.547 Running for 1 seconds... 00:07:21.547 00:07:21.547 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:21.547 ------------------------------------------------------------------------------------ 00:07:21.547 0,0 77504/s 307 MiB/s 0 0 00:07:21.547 ==================================================================================== 00:07:21.547 Total 77504/s 302 MiB/s 0 0' 00:07:21.547 03:39:40 -- accel/accel.sh@20 -- # IFS=: 00:07:21.547 03:39:40 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:07:21.547 03:39:40 -- accel/accel.sh@20 -- # read -r var val 00:07:21.547 03:39:40 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:07:21.547 03:39:40 -- accel/accel.sh@12 -- # build_accel_config 00:07:21.547 03:39:40 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:21.547 03:39:40 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:21.547 03:39:40 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:21.547 03:39:40 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:21.547 03:39:40 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:21.547 03:39:40 -- accel/accel.sh@41 -- # local IFS=, 00:07:21.547 03:39:40 -- accel/accel.sh@42 -- # jq -r . 00:07:21.547 [2024-07-14 03:39:40.302239] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
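At this point each of the three DIF workloads has produced a report against the software module; they differ only in the -w argument, and the dif_generate_copy report drops the separate block/metadata size lines. A quick manual sweep of the trio, under the same assumptions as the sketch above:

    cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    for wl in dif_verify dif_generate dif_generate_copy; do
        ./build/examples/accel_perf -t 1 -w "$wl"   # one second per workload
    done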
00:07:21.547 [2024-07-14 03:39:40.302312] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2273084 ] 00:07:21.547 EAL: No free 2048 kB hugepages reported on node 1 00:07:21.547 [2024-07-14 03:39:40.363171] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.547 [2024-07-14 03:39:40.453670] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.807 03:39:40 -- accel/accel.sh@21 -- # val= 00:07:21.807 03:39:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.807 03:39:40 -- accel/accel.sh@20 -- # IFS=: 00:07:21.807 03:39:40 -- accel/accel.sh@20 -- # read -r var val 00:07:21.807 03:39:40 -- accel/accel.sh@21 -- # val= 00:07:21.807 03:39:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.807 03:39:40 -- accel/accel.sh@20 -- # IFS=: 00:07:21.807 03:39:40 -- accel/accel.sh@20 -- # read -r var val 00:07:21.807 03:39:40 -- accel/accel.sh@21 -- # val=0x1 00:07:21.807 03:39:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.807 03:39:40 -- accel/accel.sh@20 -- # IFS=: 00:07:21.807 03:39:40 -- accel/accel.sh@20 -- # read -r var val 00:07:21.807 03:39:40 -- accel/accel.sh@21 -- # val= 00:07:21.807 03:39:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.807 03:39:40 -- accel/accel.sh@20 -- # IFS=: 00:07:21.807 03:39:40 -- accel/accel.sh@20 -- # read -r var val 00:07:21.807 03:39:40 -- accel/accel.sh@21 -- # val= 00:07:21.807 03:39:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.807 03:39:40 -- accel/accel.sh@20 -- # IFS=: 00:07:21.807 03:39:40 -- accel/accel.sh@20 -- # read -r var val 00:07:21.807 03:39:40 -- accel/accel.sh@21 -- # val=dif_generate_copy 00:07:21.807 03:39:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.807 03:39:40 -- accel/accel.sh@24 -- # accel_opc=dif_generate_copy 00:07:21.807 03:39:40 -- accel/accel.sh@20 -- # IFS=: 00:07:21.807 03:39:40 -- accel/accel.sh@20 -- # read -r var val 00:07:21.807 03:39:40 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:21.807 03:39:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.807 03:39:40 -- accel/accel.sh@20 -- # IFS=: 00:07:21.807 03:39:40 -- accel/accel.sh@20 -- # read -r var val 00:07:21.807 03:39:40 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:21.807 03:39:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.807 03:39:40 -- accel/accel.sh@20 -- # IFS=: 00:07:21.807 03:39:40 -- accel/accel.sh@20 -- # read -r var val 00:07:21.807 03:39:40 -- accel/accel.sh@21 -- # val= 00:07:21.807 03:39:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.807 03:39:40 -- accel/accel.sh@20 -- # IFS=: 00:07:21.807 03:39:40 -- accel/accel.sh@20 -- # read -r var val 00:07:21.807 03:39:40 -- accel/accel.sh@21 -- # val=software 00:07:21.807 03:39:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.807 03:39:40 -- accel/accel.sh@23 -- # accel_module=software 00:07:21.807 03:39:40 -- accel/accel.sh@20 -- # IFS=: 00:07:21.807 03:39:40 -- accel/accel.sh@20 -- # read -r var val 00:07:21.807 03:39:40 -- accel/accel.sh@21 -- # val=32 00:07:21.807 03:39:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.807 03:39:40 -- accel/accel.sh@20 -- # IFS=: 00:07:21.807 03:39:40 -- accel/accel.sh@20 -- # read -r var val 00:07:21.807 03:39:40 -- accel/accel.sh@21 -- # val=32 00:07:21.807 03:39:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.807 03:39:40 -- accel/accel.sh@20 -- # IFS=: 00:07:21.807 03:39:40 -- accel/accel.sh@20 -- # read -r 
var val 00:07:21.807 03:39:40 -- accel/accel.sh@21 -- # val=1 00:07:21.807 03:39:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.807 03:39:40 -- accel/accel.sh@20 -- # IFS=: 00:07:21.807 03:39:40 -- accel/accel.sh@20 -- # read -r var val 00:07:21.807 03:39:40 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:21.807 03:39:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.807 03:39:40 -- accel/accel.sh@20 -- # IFS=: 00:07:21.807 03:39:40 -- accel/accel.sh@20 -- # read -r var val 00:07:21.807 03:39:40 -- accel/accel.sh@21 -- # val=No 00:07:21.807 03:39:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.807 03:39:40 -- accel/accel.sh@20 -- # IFS=: 00:07:21.807 03:39:40 -- accel/accel.sh@20 -- # read -r var val 00:07:21.807 03:39:40 -- accel/accel.sh@21 -- # val= 00:07:21.807 03:39:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.807 03:39:40 -- accel/accel.sh@20 -- # IFS=: 00:07:21.807 03:39:40 -- accel/accel.sh@20 -- # read -r var val 00:07:21.807 03:39:40 -- accel/accel.sh@21 -- # val= 00:07:21.807 03:39:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.807 03:39:40 -- accel/accel.sh@20 -- # IFS=: 00:07:21.807 03:39:40 -- accel/accel.sh@20 -- # read -r var val 00:07:22.746 03:39:41 -- accel/accel.sh@21 -- # val= 00:07:22.746 03:39:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.746 03:39:41 -- accel/accel.sh@20 -- # IFS=: 00:07:22.746 03:39:41 -- accel/accel.sh@20 -- # read -r var val 00:07:22.746 03:39:41 -- accel/accel.sh@21 -- # val= 00:07:22.746 03:39:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.746 03:39:41 -- accel/accel.sh@20 -- # IFS=: 00:07:22.746 03:39:41 -- accel/accel.sh@20 -- # read -r var val 00:07:22.746 03:39:41 -- accel/accel.sh@21 -- # val= 00:07:22.746 03:39:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.746 03:39:41 -- accel/accel.sh@20 -- # IFS=: 00:07:22.746 03:39:41 -- accel/accel.sh@20 -- # read -r var val 00:07:22.746 03:39:41 -- accel/accel.sh@21 -- # val= 00:07:22.746 03:39:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.746 03:39:41 -- accel/accel.sh@20 -- # IFS=: 00:07:22.746 03:39:41 -- accel/accel.sh@20 -- # read -r var val 00:07:22.746 03:39:41 -- accel/accel.sh@21 -- # val= 00:07:22.746 03:39:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.746 03:39:41 -- accel/accel.sh@20 -- # IFS=: 00:07:22.746 03:39:41 -- accel/accel.sh@20 -- # read -r var val 00:07:22.746 03:39:41 -- accel/accel.sh@21 -- # val= 00:07:22.746 03:39:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.746 03:39:41 -- accel/accel.sh@20 -- # IFS=: 00:07:22.746 03:39:41 -- accel/accel.sh@20 -- # read -r var val 00:07:22.746 03:39:41 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:22.746 03:39:41 -- accel/accel.sh@28 -- # [[ -n dif_generate_copy ]] 00:07:22.746 03:39:41 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:22.746 00:07:22.746 real 0m2.803s 00:07:22.746 user 0m2.514s 00:07:22.746 sys 0m0.280s 00:07:22.746 03:39:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:22.746 03:39:41 -- common/autotest_common.sh@10 -- # set +x 00:07:22.746 ************************************ 00:07:22.746 END TEST accel_dif_generate_copy 00:07:22.746 ************************************ 00:07:23.006 03:39:41 -- accel/accel.sh@107 -- # [[ y == y ]] 00:07:23.006 03:39:41 -- accel/accel.sh@108 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:23.006 03:39:41 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:07:23.006 03:39:41 -- 
common/autotest_common.sh@1083 -- # xtrace_disable 00:07:23.006 03:39:41 -- common/autotest_common.sh@10 -- # set +x 00:07:23.006 ************************************ 00:07:23.006 START TEST accel_comp 00:07:23.006 ************************************ 00:07:23.006 03:39:41 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:23.006 03:39:41 -- accel/accel.sh@16 -- # local accel_opc 00:07:23.006 03:39:41 -- accel/accel.sh@17 -- # local accel_module 00:07:23.007 03:39:41 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:23.007 03:39:41 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:23.007 03:39:41 -- accel/accel.sh@12 -- # build_accel_config 00:07:23.007 03:39:41 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:23.007 03:39:41 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:23.007 03:39:41 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:23.007 03:39:41 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:23.007 03:39:41 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:23.007 03:39:41 -- accel/accel.sh@41 -- # local IFS=, 00:07:23.007 03:39:41 -- accel/accel.sh@42 -- # jq -r . 00:07:23.007 [2024-07-14 03:39:41.724343] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:07:23.007 [2024-07-14 03:39:41.724416] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2273355 ] 00:07:23.007 EAL: No free 2048 kB hugepages reported on node 1 00:07:23.007 [2024-07-14 03:39:41.786084] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.007 [2024-07-14 03:39:41.874348] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.387 03:39:43 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:24.387 00:07:24.387 SPDK Configuration: 00:07:24.387 Core mask: 0x1 00:07:24.387 00:07:24.387 Accel Perf Configuration: 00:07:24.387 Workload Type: compress 00:07:24.387 Transfer size: 4096 bytes 00:07:24.387 Vector count 1 00:07:24.387 Module: software 00:07:24.387 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:24.387 Queue depth: 32 00:07:24.387 Allocate depth: 32 00:07:24.387 # threads/core: 1 00:07:24.387 Run time: 1 seconds 00:07:24.387 Verify: No 00:07:24.387 00:07:24.387 Running for 1 seconds... 
00:07:24.387 00:07:24.387 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:24.387 ------------------------------------------------------------------------------------ 00:07:24.387 0,0 32480/s 135 MiB/s 0 0 00:07:24.387 ==================================================================================== 00:07:24.387 Total 32480/s 126 MiB/s 0 0' 00:07:24.387 03:39:43 -- accel/accel.sh@20 -- # IFS=: 00:07:24.387 03:39:43 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:24.387 03:39:43 -- accel/accel.sh@20 -- # read -r var val 00:07:24.387 03:39:43 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:24.387 03:39:43 -- accel/accel.sh@12 -- # build_accel_config 00:07:24.387 03:39:43 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:24.387 03:39:43 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:24.387 03:39:43 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:24.387 03:39:43 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:24.387 03:39:43 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:24.387 03:39:43 -- accel/accel.sh@41 -- # local IFS=, 00:07:24.387 03:39:43 -- accel/accel.sh@42 -- # jq -r . 00:07:24.387 [2024-07-14 03:39:43.125932] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:07:24.387 [2024-07-14 03:39:43.126006] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2273498 ] 00:07:24.387 EAL: No free 2048 kB hugepages reported on node 1 00:07:24.387 [2024-07-14 03:39:43.189703] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.387 [2024-07-14 03:39:43.279998] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.647 03:39:43 -- accel/accel.sh@21 -- # val= 00:07:24.647 03:39:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.647 03:39:43 -- accel/accel.sh@20 -- # IFS=: 00:07:24.647 03:39:43 -- accel/accel.sh@20 -- # read -r var val 00:07:24.647 03:39:43 -- accel/accel.sh@21 -- # val= 00:07:24.647 03:39:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.647 03:39:43 -- accel/accel.sh@20 -- # IFS=: 00:07:24.647 03:39:43 -- accel/accel.sh@20 -- # read -r var val 00:07:24.647 03:39:43 -- accel/accel.sh@21 -- # val= 00:07:24.647 03:39:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.647 03:39:43 -- accel/accel.sh@20 -- # IFS=: 00:07:24.647 03:39:43 -- accel/accel.sh@20 -- # read -r var val 00:07:24.647 03:39:43 -- accel/accel.sh@21 -- # val=0x1 00:07:24.647 03:39:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.647 03:39:43 -- accel/accel.sh@20 -- # IFS=: 00:07:24.647 03:39:43 -- accel/accel.sh@20 -- # read -r var val 00:07:24.647 03:39:43 -- accel/accel.sh@21 -- # val= 00:07:24.647 03:39:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.647 03:39:43 -- accel/accel.sh@20 -- # IFS=: 00:07:24.647 03:39:43 -- accel/accel.sh@20 -- # read -r var val 00:07:24.647 03:39:43 -- accel/accel.sh@21 -- # val= 00:07:24.647 03:39:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.647 03:39:43 -- accel/accel.sh@20 -- # IFS=: 00:07:24.647 03:39:43 -- accel/accel.sh@20 -- # read -r var val 00:07:24.647 03:39:43 -- accel/accel.sh@21 -- # val=compress 00:07:24.647 03:39:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.647 
03:39:43 -- accel/accel.sh@24 -- # accel_opc=compress 00:07:24.647 03:39:43 -- accel/accel.sh@20 -- # IFS=: 00:07:24.647 03:39:43 -- accel/accel.sh@20 -- # read -r var val 00:07:24.647 03:39:43 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:24.647 03:39:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.647 03:39:43 -- accel/accel.sh@20 -- # IFS=: 00:07:24.647 03:39:43 -- accel/accel.sh@20 -- # read -r var val 00:07:24.647 03:39:43 -- accel/accel.sh@21 -- # val= 00:07:24.647 03:39:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.647 03:39:43 -- accel/accel.sh@20 -- # IFS=: 00:07:24.647 03:39:43 -- accel/accel.sh@20 -- # read -r var val 00:07:24.647 03:39:43 -- accel/accel.sh@21 -- # val=software 00:07:24.647 03:39:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.647 03:39:43 -- accel/accel.sh@23 -- # accel_module=software 00:07:24.647 03:39:43 -- accel/accel.sh@20 -- # IFS=: 00:07:24.647 03:39:43 -- accel/accel.sh@20 -- # read -r var val 00:07:24.647 03:39:43 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:24.647 03:39:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.647 03:39:43 -- accel/accel.sh@20 -- # IFS=: 00:07:24.647 03:39:43 -- accel/accel.sh@20 -- # read -r var val 00:07:24.647 03:39:43 -- accel/accel.sh@21 -- # val=32 00:07:24.647 03:39:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.647 03:39:43 -- accel/accel.sh@20 -- # IFS=: 00:07:24.647 03:39:43 -- accel/accel.sh@20 -- # read -r var val 00:07:24.647 03:39:43 -- accel/accel.sh@21 -- # val=32 00:07:24.647 03:39:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.647 03:39:43 -- accel/accel.sh@20 -- # IFS=: 00:07:24.647 03:39:43 -- accel/accel.sh@20 -- # read -r var val 00:07:24.647 03:39:43 -- accel/accel.sh@21 -- # val=1 00:07:24.647 03:39:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.647 03:39:43 -- accel/accel.sh@20 -- # IFS=: 00:07:24.647 03:39:43 -- accel/accel.sh@20 -- # read -r var val 00:07:24.647 03:39:43 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:24.647 03:39:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.647 03:39:43 -- accel/accel.sh@20 -- # IFS=: 00:07:24.647 03:39:43 -- accel/accel.sh@20 -- # read -r var val 00:07:24.647 03:39:43 -- accel/accel.sh@21 -- # val=No 00:07:24.647 03:39:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.647 03:39:43 -- accel/accel.sh@20 -- # IFS=: 00:07:24.647 03:39:43 -- accel/accel.sh@20 -- # read -r var val 00:07:24.647 03:39:43 -- accel/accel.sh@21 -- # val= 00:07:24.647 03:39:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.647 03:39:43 -- accel/accel.sh@20 -- # IFS=: 00:07:24.647 03:39:43 -- accel/accel.sh@20 -- # read -r var val 00:07:24.647 03:39:43 -- accel/accel.sh@21 -- # val= 00:07:24.647 03:39:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.647 03:39:43 -- accel/accel.sh@20 -- # IFS=: 00:07:24.647 03:39:43 -- accel/accel.sh@20 -- # read -r var val 00:07:25.586 03:39:44 -- accel/accel.sh@21 -- # val= 00:07:25.586 03:39:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.586 03:39:44 -- accel/accel.sh@20 -- # IFS=: 00:07:25.586 03:39:44 -- accel/accel.sh@20 -- # read -r var val 00:07:25.586 03:39:44 -- accel/accel.sh@21 -- # val= 00:07:25.586 03:39:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.586 03:39:44 -- accel/accel.sh@20 -- # IFS=: 00:07:25.586 03:39:44 -- accel/accel.sh@20 -- # read -r var val 00:07:25.586 03:39:44 -- accel/accel.sh@21 -- # val= 00:07:25.586 03:39:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.586 03:39:44 -- accel/accel.sh@20 -- # 
IFS=: 00:07:25.586 03:39:44 -- accel/accel.sh@20 -- # read -r var val 00:07:25.586 03:39:44 -- accel/accel.sh@21 -- # val= 00:07:25.586 03:39:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.586 03:39:44 -- accel/accel.sh@20 -- # IFS=: 00:07:25.586 03:39:44 -- accel/accel.sh@20 -- # read -r var val 00:07:25.586 03:39:44 -- accel/accel.sh@21 -- # val= 00:07:25.586 03:39:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.586 03:39:44 -- accel/accel.sh@20 -- # IFS=: 00:07:25.586 03:39:44 -- accel/accel.sh@20 -- # read -r var val 00:07:25.586 03:39:44 -- accel/accel.sh@21 -- # val= 00:07:25.586 03:39:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.586 03:39:44 -- accel/accel.sh@20 -- # IFS=: 00:07:25.586 03:39:44 -- accel/accel.sh@20 -- # read -r var val 00:07:25.586 03:39:44 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:25.586 03:39:44 -- accel/accel.sh@28 -- # [[ -n compress ]] 00:07:25.586 03:39:44 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:25.586 00:07:25.586 real 0m2.814s 00:07:25.586 user 0m2.506s 00:07:25.586 sys 0m0.302s 00:07:25.586 03:39:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:25.586 03:39:44 -- common/autotest_common.sh@10 -- # set +x 00:07:25.586 ************************************ 00:07:25.586 END TEST accel_comp 00:07:25.586 ************************************ 00:07:25.846 03:39:44 -- accel/accel.sh@109 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:25.846 03:39:44 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:07:25.846 03:39:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:25.846 03:39:44 -- common/autotest_common.sh@10 -- # set +x 00:07:25.846 ************************************ 00:07:25.846 START TEST accel_decomp 00:07:25.846 ************************************ 00:07:25.846 03:39:44 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:25.846 03:39:44 -- accel/accel.sh@16 -- # local accel_opc 00:07:25.846 03:39:44 -- accel/accel.sh@17 -- # local accel_module 00:07:25.846 03:39:44 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:25.846 03:39:44 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:25.846 03:39:44 -- accel/accel.sh@12 -- # build_accel_config 00:07:25.846 03:39:44 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:25.846 03:39:44 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:25.846 03:39:44 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:25.846 03:39:44 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:25.846 03:39:44 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:25.847 03:39:44 -- accel/accel.sh@41 -- # local IFS=, 00:07:25.847 03:39:44 -- accel/accel.sh@42 -- # jq -r . 00:07:25.847 [2024-07-14 03:39:44.558554] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
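accel_comp is the first case to feed real input data: the -l flag points accel_perf at the bundled test/accel/bib file, which the report echoes back as "File Name", and the workload becomes compress. A standalone sketch of that run (the decompress case now starting reuses the same file):

    # compress the bundled bib test data for one second with the software module
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf \
        -t 1 -w compress \
        -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib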
00:07:25.847 [2024-07-14 03:39:44.558633] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2273658 ] 00:07:25.847 EAL: No free 2048 kB hugepages reported on node 1 00:07:25.847 [2024-07-14 03:39:44.620691] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.847 [2024-07-14 03:39:44.711570] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.225 03:39:45 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:27.225 00:07:27.225 SPDK Configuration: 00:07:27.225 Core mask: 0x1 00:07:27.225 00:07:27.225 Accel Perf Configuration: 00:07:27.225 Workload Type: decompress 00:07:27.225 Transfer size: 4096 bytes 00:07:27.225 Vector count 1 00:07:27.225 Module: software 00:07:27.225 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:27.225 Queue depth: 32 00:07:27.225 Allocate depth: 32 00:07:27.225 # threads/core: 1 00:07:27.226 Run time: 1 seconds 00:07:27.226 Verify: Yes 00:07:27.226 00:07:27.226 Running for 1 seconds... 00:07:27.226 00:07:27.226 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:27.226 ------------------------------------------------------------------------------------ 00:07:27.226 0,0 55648/s 102 MiB/s 0 0 00:07:27.226 ==================================================================================== 00:07:27.226 Total 55648/s 217 MiB/s 0 0' 00:07:27.226 03:39:45 -- accel/accel.sh@20 -- # IFS=: 00:07:27.226 03:39:45 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:27.226 03:39:45 -- accel/accel.sh@20 -- # read -r var val 00:07:27.226 03:39:45 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:27.226 03:39:45 -- accel/accel.sh@12 -- # build_accel_config 00:07:27.226 03:39:45 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:27.226 03:39:45 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:27.226 03:39:45 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:27.226 03:39:45 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:27.226 03:39:45 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:27.226 03:39:45 -- accel/accel.sh@41 -- # local IFS=, 00:07:27.226 03:39:45 -- accel/accel.sh@42 -- # jq -r . 00:07:27.226 [2024-07-14 03:39:45.962176] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
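The decompress run adds -y on top of the same -l input, which lines up with the report flipping from "Verify: No" in the earlier cases to "Verify: Yes" here. Standalone sketch:

    # decompress workload over the bib test data, with verification enabled
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf \
        -t 1 -w decompress -y \
        -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib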
00:07:27.226 [2024-07-14 03:39:45.962258] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2273807 ] 00:07:27.226 EAL: No free 2048 kB hugepages reported on node 1 00:07:27.226 [2024-07-14 03:39:46.023424] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.226 [2024-07-14 03:39:46.112714] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.483 03:39:46 -- accel/accel.sh@21 -- # val= 00:07:27.483 03:39:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.483 03:39:46 -- accel/accel.sh@20 -- # IFS=: 00:07:27.483 03:39:46 -- accel/accel.sh@20 -- # read -r var val 00:07:27.483 03:39:46 -- accel/accel.sh@21 -- # val= 00:07:27.483 03:39:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.483 03:39:46 -- accel/accel.sh@20 -- # IFS=: 00:07:27.483 03:39:46 -- accel/accel.sh@20 -- # read -r var val 00:07:27.483 03:39:46 -- accel/accel.sh@21 -- # val= 00:07:27.483 03:39:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.483 03:39:46 -- accel/accel.sh@20 -- # IFS=: 00:07:27.483 03:39:46 -- accel/accel.sh@20 -- # read -r var val 00:07:27.483 03:39:46 -- accel/accel.sh@21 -- # val=0x1 00:07:27.483 03:39:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.483 03:39:46 -- accel/accel.sh@20 -- # IFS=: 00:07:27.483 03:39:46 -- accel/accel.sh@20 -- # read -r var val 00:07:27.483 03:39:46 -- accel/accel.sh@21 -- # val= 00:07:27.483 03:39:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.483 03:39:46 -- accel/accel.sh@20 -- # IFS=: 00:07:27.483 03:39:46 -- accel/accel.sh@20 -- # read -r var val 00:07:27.483 03:39:46 -- accel/accel.sh@21 -- # val= 00:07:27.483 03:39:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.483 03:39:46 -- accel/accel.sh@20 -- # IFS=: 00:07:27.483 03:39:46 -- accel/accel.sh@20 -- # read -r var val 00:07:27.483 03:39:46 -- accel/accel.sh@21 -- # val=decompress 00:07:27.483 03:39:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.483 03:39:46 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:27.483 03:39:46 -- accel/accel.sh@20 -- # IFS=: 00:07:27.483 03:39:46 -- accel/accel.sh@20 -- # read -r var val 00:07:27.483 03:39:46 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:27.483 03:39:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.483 03:39:46 -- accel/accel.sh@20 -- # IFS=: 00:07:27.483 03:39:46 -- accel/accel.sh@20 -- # read -r var val 00:07:27.483 03:39:46 -- accel/accel.sh@21 -- # val= 00:07:27.483 03:39:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.483 03:39:46 -- accel/accel.sh@20 -- # IFS=: 00:07:27.483 03:39:46 -- accel/accel.sh@20 -- # read -r var val 00:07:27.483 03:39:46 -- accel/accel.sh@21 -- # val=software 00:07:27.483 03:39:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.483 03:39:46 -- accel/accel.sh@23 -- # accel_module=software 00:07:27.483 03:39:46 -- accel/accel.sh@20 -- # IFS=: 00:07:27.483 03:39:46 -- accel/accel.sh@20 -- # read -r var val 00:07:27.483 03:39:46 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:27.483 03:39:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.483 03:39:46 -- accel/accel.sh@20 -- # IFS=: 00:07:27.483 03:39:46 -- accel/accel.sh@20 -- # read -r var val 00:07:27.483 03:39:46 -- accel/accel.sh@21 -- # val=32 00:07:27.483 03:39:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.483 03:39:46 -- accel/accel.sh@20 -- # IFS=: 00:07:27.483 03:39:46 
-- accel/accel.sh@20 -- # read -r var val 00:07:27.483 03:39:46 -- accel/accel.sh@21 -- # val=32 00:07:27.483 03:39:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.483 03:39:46 -- accel/accel.sh@20 -- # IFS=: 00:07:27.483 03:39:46 -- accel/accel.sh@20 -- # read -r var val 00:07:27.483 03:39:46 -- accel/accel.sh@21 -- # val=1 00:07:27.483 03:39:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.483 03:39:46 -- accel/accel.sh@20 -- # IFS=: 00:07:27.483 03:39:46 -- accel/accel.sh@20 -- # read -r var val 00:07:27.483 03:39:46 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:27.483 03:39:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.483 03:39:46 -- accel/accel.sh@20 -- # IFS=: 00:07:27.483 03:39:46 -- accel/accel.sh@20 -- # read -r var val 00:07:27.483 03:39:46 -- accel/accel.sh@21 -- # val=Yes 00:07:27.483 03:39:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.483 03:39:46 -- accel/accel.sh@20 -- # IFS=: 00:07:27.483 03:39:46 -- accel/accel.sh@20 -- # read -r var val 00:07:27.483 03:39:46 -- accel/accel.sh@21 -- # val= 00:07:27.483 03:39:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.483 03:39:46 -- accel/accel.sh@20 -- # IFS=: 00:07:27.484 03:39:46 -- accel/accel.sh@20 -- # read -r var val 00:07:27.484 03:39:46 -- accel/accel.sh@21 -- # val= 00:07:27.484 03:39:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.484 03:39:46 -- accel/accel.sh@20 -- # IFS=: 00:07:27.484 03:39:46 -- accel/accel.sh@20 -- # read -r var val 00:07:28.421 03:39:47 -- accel/accel.sh@21 -- # val= 00:07:28.421 03:39:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.421 03:39:47 -- accel/accel.sh@20 -- # IFS=: 00:07:28.421 03:39:47 -- accel/accel.sh@20 -- # read -r var val 00:07:28.421 03:39:47 -- accel/accel.sh@21 -- # val= 00:07:28.421 03:39:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.421 03:39:47 -- accel/accel.sh@20 -- # IFS=: 00:07:28.421 03:39:47 -- accel/accel.sh@20 -- # read -r var val 00:07:28.421 03:39:47 -- accel/accel.sh@21 -- # val= 00:07:28.421 03:39:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.421 03:39:47 -- accel/accel.sh@20 -- # IFS=: 00:07:28.421 03:39:47 -- accel/accel.sh@20 -- # read -r var val 00:07:28.421 03:39:47 -- accel/accel.sh@21 -- # val= 00:07:28.421 03:39:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.421 03:39:47 -- accel/accel.sh@20 -- # IFS=: 00:07:28.421 03:39:47 -- accel/accel.sh@20 -- # read -r var val 00:07:28.421 03:39:47 -- accel/accel.sh@21 -- # val= 00:07:28.421 03:39:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.421 03:39:47 -- accel/accel.sh@20 -- # IFS=: 00:07:28.421 03:39:47 -- accel/accel.sh@20 -- # read -r var val 00:07:28.421 03:39:47 -- accel/accel.sh@21 -- # val= 00:07:28.421 03:39:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.421 03:39:47 -- accel/accel.sh@20 -- # IFS=: 00:07:28.421 03:39:47 -- accel/accel.sh@20 -- # read -r var val 00:07:28.421 03:39:47 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:28.421 03:39:47 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:28.421 03:39:47 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:28.421 00:07:28.421 real 0m2.799s 00:07:28.421 user 0m2.509s 00:07:28.421 sys 0m0.284s 00:07:28.421 03:39:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:28.421 03:39:47 -- common/autotest_common.sh@10 -- # set +x 00:07:28.421 ************************************ 00:07:28.421 END TEST accel_decomp 00:07:28.421 ************************************ 00:07:28.680 03:39:47 -- accel/accel.sh@110 -- # run_test accel_decmop_full accel_test -t 1 -w 
decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:28.680 03:39:47 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:07:28.680 03:39:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:28.680 03:39:47 -- common/autotest_common.sh@10 -- # set +x 00:07:28.680 ************************************ 00:07:28.680 START TEST accel_decmop_full 00:07:28.680 ************************************ 00:07:28.680 03:39:47 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:28.680 03:39:47 -- accel/accel.sh@16 -- # local accel_opc 00:07:28.680 03:39:47 -- accel/accel.sh@17 -- # local accel_module 00:07:28.680 03:39:47 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:28.680 03:39:47 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:28.680 03:39:47 -- accel/accel.sh@12 -- # build_accel_config 00:07:28.680 03:39:47 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:28.680 03:39:47 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:28.680 03:39:47 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:28.680 03:39:47 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:28.680 03:39:47 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:28.680 03:39:47 -- accel/accel.sh@41 -- # local IFS=, 00:07:28.680 03:39:47 -- accel/accel.sh@42 -- # jq -r . 00:07:28.680 [2024-07-14 03:39:47.379545] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:07:28.680 [2024-07-14 03:39:47.379635] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2274072 ] 00:07:28.680 EAL: No free 2048 kB hugepages reported on node 1 00:07:28.680 [2024-07-14 03:39:47.440620] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:28.680 [2024-07-14 03:39:47.531376] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.059 03:39:48 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:30.059 00:07:30.059 SPDK Configuration: 00:07:30.059 Core mask: 0x1 00:07:30.059 00:07:30.059 Accel Perf Configuration: 00:07:30.059 Workload Type: decompress 00:07:30.059 Transfer size: 111250 bytes 00:07:30.059 Vector count 1 00:07:30.059 Module: software 00:07:30.059 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:30.059 Queue depth: 32 00:07:30.059 Allocate depth: 32 00:07:30.059 # threads/core: 1 00:07:30.059 Run time: 1 seconds 00:07:30.059 Verify: Yes 00:07:30.059 00:07:30.059 Running for 1 seconds... 
00:07:30.059 00:07:30.059 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:30.059 ------------------------------------------------------------------------------------ 00:07:30.059 0,0 3808/s 157 MiB/s 0 0 00:07:30.059 ==================================================================================== 00:07:30.059 Total 3808/s 404 MiB/s 0 0' 00:07:30.059 03:39:48 -- accel/accel.sh@20 -- # IFS=: 00:07:30.059 03:39:48 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:30.059 03:39:48 -- accel/accel.sh@20 -- # read -r var val 00:07:30.059 03:39:48 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:30.059 03:39:48 -- accel/accel.sh@12 -- # build_accel_config 00:07:30.059 03:39:48 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:30.059 03:39:48 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:30.059 03:39:48 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:30.059 03:39:48 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:30.059 03:39:48 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:30.059 03:39:48 -- accel/accel.sh@41 -- # local IFS=, 00:07:30.059 03:39:48 -- accel/accel.sh@42 -- # jq -r . 00:07:30.059 [2024-07-14 03:39:48.780607] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:07:30.059 [2024-07-14 03:39:48.780689] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2274224 ] 00:07:30.059 EAL: No free 2048 kB hugepages reported on node 1 00:07:30.059 [2024-07-14 03:39:48.840696] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:30.059 [2024-07-14 03:39:48.930829] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.059 03:39:48 -- accel/accel.sh@21 -- # val= 00:07:30.059 03:39:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.059 03:39:48 -- accel/accel.sh@20 -- # IFS=: 00:07:30.059 03:39:48 -- accel/accel.sh@20 -- # read -r var val 00:07:30.059 03:39:48 -- accel/accel.sh@21 -- # val= 00:07:30.059 03:39:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.059 03:39:48 -- accel/accel.sh@20 -- # IFS=: 00:07:30.059 03:39:48 -- accel/accel.sh@20 -- # read -r var val 00:07:30.059 03:39:48 -- accel/accel.sh@21 -- # val= 00:07:30.059 03:39:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.059 03:39:48 -- accel/accel.sh@20 -- # IFS=: 00:07:30.059 03:39:48 -- accel/accel.sh@20 -- # read -r var val 00:07:30.059 03:39:48 -- accel/accel.sh@21 -- # val=0x1 00:07:30.059 03:39:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.059 03:39:48 -- accel/accel.sh@20 -- # IFS=: 00:07:30.059 03:39:48 -- accel/accel.sh@20 -- # read -r var val 00:07:30.059 03:39:48 -- accel/accel.sh@21 -- # val= 00:07:30.059 03:39:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.059 03:39:48 -- accel/accel.sh@20 -- # IFS=: 00:07:30.059 03:39:48 -- accel/accel.sh@20 -- # read -r var val 00:07:30.059 03:39:48 -- accel/accel.sh@21 -- # val= 00:07:30.059 03:39:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.059 03:39:48 -- accel/accel.sh@20 -- # IFS=: 00:07:30.059 03:39:48 -- accel/accel.sh@20 -- # read -r var val 00:07:30.059 03:39:48 -- accel/accel.sh@21 -- # val=decompress 00:07:30.059 03:39:48 -- accel/accel.sh@22 -- # case "$var" 
in 00:07:30.059 03:39:48 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:30.059 03:39:48 -- accel/accel.sh@20 -- # IFS=: 00:07:30.059 03:39:48 -- accel/accel.sh@20 -- # read -r var val 00:07:30.059 03:39:48 -- accel/accel.sh@21 -- # val='111250 bytes' 00:07:30.059 03:39:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.060 03:39:48 -- accel/accel.sh@20 -- # IFS=: 00:07:30.060 03:39:48 -- accel/accel.sh@20 -- # read -r var val 00:07:30.060 03:39:48 -- accel/accel.sh@21 -- # val= 00:07:30.060 03:39:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.060 03:39:48 -- accel/accel.sh@20 -- # IFS=: 00:07:30.319 03:39:48 -- accel/accel.sh@20 -- # read -r var val 00:07:30.319 03:39:48 -- accel/accel.sh@21 -- # val=software 00:07:30.319 03:39:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.319 03:39:48 -- accel/accel.sh@23 -- # accel_module=software 00:07:30.319 03:39:48 -- accel/accel.sh@20 -- # IFS=: 00:07:30.319 03:39:48 -- accel/accel.sh@20 -- # read -r var val 00:07:30.319 03:39:48 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:30.319 03:39:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.319 03:39:48 -- accel/accel.sh@20 -- # IFS=: 00:07:30.319 03:39:48 -- accel/accel.sh@20 -- # read -r var val 00:07:30.319 03:39:48 -- accel/accel.sh@21 -- # val=32 00:07:30.319 03:39:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.319 03:39:48 -- accel/accel.sh@20 -- # IFS=: 00:07:30.319 03:39:48 -- accel/accel.sh@20 -- # read -r var val 00:07:30.319 03:39:48 -- accel/accel.sh@21 -- # val=32 00:07:30.319 03:39:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.319 03:39:49 -- accel/accel.sh@20 -- # IFS=: 00:07:30.319 03:39:49 -- accel/accel.sh@20 -- # read -r var val 00:07:30.319 03:39:49 -- accel/accel.sh@21 -- # val=1 00:07:30.319 03:39:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.319 03:39:49 -- accel/accel.sh@20 -- # IFS=: 00:07:30.319 03:39:49 -- accel/accel.sh@20 -- # read -r var val 00:07:30.319 03:39:49 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:30.319 03:39:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.319 03:39:49 -- accel/accel.sh@20 -- # IFS=: 00:07:30.319 03:39:49 -- accel/accel.sh@20 -- # read -r var val 00:07:30.319 03:39:49 -- accel/accel.sh@21 -- # val=Yes 00:07:30.319 03:39:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.319 03:39:49 -- accel/accel.sh@20 -- # IFS=: 00:07:30.319 03:39:49 -- accel/accel.sh@20 -- # read -r var val 00:07:30.319 03:39:49 -- accel/accel.sh@21 -- # val= 00:07:30.319 03:39:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.319 03:39:49 -- accel/accel.sh@20 -- # IFS=: 00:07:30.319 03:39:49 -- accel/accel.sh@20 -- # read -r var val 00:07:30.319 03:39:49 -- accel/accel.sh@21 -- # val= 00:07:30.319 03:39:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.319 03:39:49 -- accel/accel.sh@20 -- # IFS=: 00:07:30.319 03:39:49 -- accel/accel.sh@20 -- # read -r var val 00:07:31.257 03:39:50 -- accel/accel.sh@21 -- # val= 00:07:31.257 03:39:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.257 03:39:50 -- accel/accel.sh@20 -- # IFS=: 00:07:31.257 03:39:50 -- accel/accel.sh@20 -- # read -r var val 00:07:31.257 03:39:50 -- accel/accel.sh@21 -- # val= 00:07:31.257 03:39:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.257 03:39:50 -- accel/accel.sh@20 -- # IFS=: 00:07:31.257 03:39:50 -- accel/accel.sh@20 -- # read -r var val 00:07:31.257 03:39:50 -- accel/accel.sh@21 -- # val= 00:07:31.257 03:39:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.257 03:39:50 -- 
accel/accel.sh@20 -- # IFS=: 00:07:31.257 03:39:50 -- accel/accel.sh@20 -- # read -r var val 00:07:31.257 03:39:50 -- accel/accel.sh@21 -- # val= 00:07:31.257 03:39:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.257 03:39:50 -- accel/accel.sh@20 -- # IFS=: 00:07:31.257 03:39:50 -- accel/accel.sh@20 -- # read -r var val 00:07:31.257 03:39:50 -- accel/accel.sh@21 -- # val= 00:07:31.257 03:39:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.257 03:39:50 -- accel/accel.sh@20 -- # IFS=: 00:07:31.257 03:39:50 -- accel/accel.sh@20 -- # read -r var val 00:07:31.257 03:39:50 -- accel/accel.sh@21 -- # val= 00:07:31.257 03:39:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.257 03:39:50 -- accel/accel.sh@20 -- # IFS=: 00:07:31.257 03:39:50 -- accel/accel.sh@20 -- # read -r var val 00:07:31.257 03:39:50 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:31.257 03:39:50 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:31.257 03:39:50 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:31.257 00:07:31.257 real 0m2.818s 00:07:31.257 user 0m2.530s 00:07:31.257 sys 0m0.282s 00:07:31.257 03:39:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:31.257 03:39:50 -- common/autotest_common.sh@10 -- # set +x 00:07:31.257 ************************************ 00:07:31.257 END TEST accel_decmop_full 00:07:31.257 ************************************ 00:07:31.517 03:39:50 -- accel/accel.sh@111 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:31.517 03:39:50 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:07:31.517 03:39:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:31.517 03:39:50 -- common/autotest_common.sh@10 -- # set +x 00:07:31.517 ************************************ 00:07:31.517 START TEST accel_decomp_mcore 00:07:31.517 ************************************ 00:07:31.517 03:39:50 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:31.517 03:39:50 -- accel/accel.sh@16 -- # local accel_opc 00:07:31.517 03:39:50 -- accel/accel.sh@17 -- # local accel_module 00:07:31.517 03:39:50 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:31.517 03:39:50 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:31.517 03:39:50 -- accel/accel.sh@12 -- # build_accel_config 00:07:31.517 03:39:50 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:31.517 03:39:50 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:31.517 03:39:50 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:31.517 03:39:50 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:31.517 03:39:50 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:31.517 03:39:50 -- accel/accel.sh@41 -- # local IFS=, 00:07:31.517 03:39:50 -- accel/accel.sh@42 -- # jq -r . 00:07:31.517 [2024-07-14 03:39:50.224216] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
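Each of these decompress cases is driven by the accel_perf command line echoed in the xtrace above. A minimal standalone sketch of that invocation follows; the empty '{}' config standing in for build_accel_config's output and the flag meanings in the comments are assumptions read off the logged command line, not taken from accel_perf itself.

    # Sketch only: reproduce one accel_perf decompress run the way the trace shows it.
    # Assumptions: an empty JSON object is accepted where build_accel_config would
    # normally emit the config; flag meanings below are inferred from the log.
    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # -t 1: run for 1 second      -w decompress: workload type
    # -l <file>: compressed input -y: verify output
    # -o 0: transfer size follows the input (the "111250 bytes" line above)
    # -c <(echo '{}'): bash expands the process substitution to a /dev/fd/NN path,
    #                  which is how the /dev/fd/62 in the logged command line arises
    "$SPDK_DIR/build/examples/accel_perf" -c <(echo '{}') -t 1 -w decompress \
        -l "$SPDK_DIR/test/accel/bib" -y -o 0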
00:07:31.517 [2024-07-14 03:39:50.224291] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2274379 ] 00:07:31.517 EAL: No free 2048 kB hugepages reported on node 1 00:07:31.517 [2024-07-14 03:39:50.287052] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:31.517 [2024-07-14 03:39:50.382043] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:31.517 [2024-07-14 03:39:50.382109] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:31.517 [2024-07-14 03:39:50.382131] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:31.517 [2024-07-14 03:39:50.382133] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.897 03:39:51 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:32.897 00:07:32.897 SPDK Configuration: 00:07:32.897 Core mask: 0xf 00:07:32.897 00:07:32.897 Accel Perf Configuration: 00:07:32.897 Workload Type: decompress 00:07:32.897 Transfer size: 4096 bytes 00:07:32.897 Vector count 1 00:07:32.897 Module: software 00:07:32.897 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:32.897 Queue depth: 32 00:07:32.897 Allocate depth: 32 00:07:32.897 # threads/core: 1 00:07:32.897 Run time: 1 seconds 00:07:32.897 Verify: Yes 00:07:32.897 00:07:32.897 Running for 1 seconds... 00:07:32.897 00:07:32.897 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:32.897 ------------------------------------------------------------------------------------ 00:07:32.897 0,0 57120/s 105 MiB/s 0 0 00:07:32.897 3,0 57728/s 106 MiB/s 0 0 00:07:32.897 2,0 57632/s 106 MiB/s 0 0 00:07:32.897 1,0 57248/s 105 MiB/s 0 0 00:07:32.897 ==================================================================================== 00:07:32.897 Total 229728/s 897 MiB/s 0 0' 00:07:32.897 03:39:51 -- accel/accel.sh@20 -- # IFS=: 00:07:32.897 03:39:51 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:32.897 03:39:51 -- accel/accel.sh@20 -- # read -r var val 00:07:32.897 03:39:51 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:32.897 03:39:51 -- accel/accel.sh@12 -- # build_accel_config 00:07:32.897 03:39:51 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:32.897 03:39:51 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:32.897 03:39:51 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:32.897 03:39:51 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:32.897 03:39:51 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:32.897 03:39:51 -- accel/accel.sh@41 -- # local IFS=, 00:07:32.897 03:39:51 -- accel/accel.sh@42 -- # jq -r . 00:07:32.897 [2024-07-14 03:39:51.638976] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:07:32.897 [2024-07-14 03:39:51.639054] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2274526 ] 00:07:32.897 EAL: No free 2048 kB hugepages reported on node 1 00:07:32.897 [2024-07-14 03:39:51.701950] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:32.897 [2024-07-14 03:39:51.793895] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:32.897 [2024-07-14 03:39:51.793949] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:32.897 [2024-07-14 03:39:51.794033] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:32.897 [2024-07-14 03:39:51.794035] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.156 03:39:51 -- accel/accel.sh@21 -- # val= 00:07:33.156 03:39:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.156 03:39:51 -- accel/accel.sh@20 -- # IFS=: 00:07:33.156 03:39:51 -- accel/accel.sh@20 -- # read -r var val 00:07:33.156 03:39:51 -- accel/accel.sh@21 -- # val= 00:07:33.156 03:39:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.156 03:39:51 -- accel/accel.sh@20 -- # IFS=: 00:07:33.156 03:39:51 -- accel/accel.sh@20 -- # read -r var val 00:07:33.156 03:39:51 -- accel/accel.sh@21 -- # val= 00:07:33.156 03:39:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.156 03:39:51 -- accel/accel.sh@20 -- # IFS=: 00:07:33.156 03:39:51 -- accel/accel.sh@20 -- # read -r var val 00:07:33.156 03:39:51 -- accel/accel.sh@21 -- # val=0xf 00:07:33.156 03:39:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.156 03:39:51 -- accel/accel.sh@20 -- # IFS=: 00:07:33.156 03:39:51 -- accel/accel.sh@20 -- # read -r var val 00:07:33.156 03:39:51 -- accel/accel.sh@21 -- # val= 00:07:33.156 03:39:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.156 03:39:51 -- accel/accel.sh@20 -- # IFS=: 00:07:33.156 03:39:51 -- accel/accel.sh@20 -- # read -r var val 00:07:33.156 03:39:51 -- accel/accel.sh@21 -- # val= 00:07:33.156 03:39:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.156 03:39:51 -- accel/accel.sh@20 -- # IFS=: 00:07:33.156 03:39:51 -- accel/accel.sh@20 -- # read -r var val 00:07:33.156 03:39:51 -- accel/accel.sh@21 -- # val=decompress 00:07:33.156 03:39:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.156 03:39:51 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:33.156 03:39:51 -- accel/accel.sh@20 -- # IFS=: 00:07:33.156 03:39:51 -- accel/accel.sh@20 -- # read -r var val 00:07:33.156 03:39:51 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:33.156 03:39:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.156 03:39:51 -- accel/accel.sh@20 -- # IFS=: 00:07:33.156 03:39:51 -- accel/accel.sh@20 -- # read -r var val 00:07:33.156 03:39:51 -- accel/accel.sh@21 -- # val= 00:07:33.156 03:39:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.156 03:39:51 -- accel/accel.sh@20 -- # IFS=: 00:07:33.156 03:39:51 -- accel/accel.sh@20 -- # read -r var val 00:07:33.156 03:39:51 -- accel/accel.sh@21 -- # val=software 00:07:33.156 03:39:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.156 03:39:51 -- accel/accel.sh@23 -- # accel_module=software 00:07:33.156 03:39:51 -- accel/accel.sh@20 -- # IFS=: 00:07:33.156 03:39:51 -- accel/accel.sh@20 -- # read -r var val 00:07:33.156 03:39:51 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:33.156 03:39:51 -- accel/accel.sh@22 -- # case 
"$var" in 00:07:33.156 03:39:51 -- accel/accel.sh@20 -- # IFS=: 00:07:33.156 03:39:51 -- accel/accel.sh@20 -- # read -r var val 00:07:33.156 03:39:51 -- accel/accel.sh@21 -- # val=32 00:07:33.156 03:39:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.156 03:39:51 -- accel/accel.sh@20 -- # IFS=: 00:07:33.156 03:39:51 -- accel/accel.sh@20 -- # read -r var val 00:07:33.156 03:39:51 -- accel/accel.sh@21 -- # val=32 00:07:33.156 03:39:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.156 03:39:51 -- accel/accel.sh@20 -- # IFS=: 00:07:33.156 03:39:51 -- accel/accel.sh@20 -- # read -r var val 00:07:33.156 03:39:51 -- accel/accel.sh@21 -- # val=1 00:07:33.156 03:39:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.156 03:39:51 -- accel/accel.sh@20 -- # IFS=: 00:07:33.156 03:39:51 -- accel/accel.sh@20 -- # read -r var val 00:07:33.156 03:39:51 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:33.156 03:39:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.156 03:39:51 -- accel/accel.sh@20 -- # IFS=: 00:07:33.156 03:39:51 -- accel/accel.sh@20 -- # read -r var val 00:07:33.156 03:39:51 -- accel/accel.sh@21 -- # val=Yes 00:07:33.156 03:39:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.156 03:39:51 -- accel/accel.sh@20 -- # IFS=: 00:07:33.156 03:39:51 -- accel/accel.sh@20 -- # read -r var val 00:07:33.156 03:39:51 -- accel/accel.sh@21 -- # val= 00:07:33.156 03:39:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.156 03:39:51 -- accel/accel.sh@20 -- # IFS=: 00:07:33.156 03:39:51 -- accel/accel.sh@20 -- # read -r var val 00:07:33.156 03:39:51 -- accel/accel.sh@21 -- # val= 00:07:33.156 03:39:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.156 03:39:51 -- accel/accel.sh@20 -- # IFS=: 00:07:33.156 03:39:51 -- accel/accel.sh@20 -- # read -r var val 00:07:34.117 03:39:53 -- accel/accel.sh@21 -- # val= 00:07:34.117 03:39:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.117 03:39:53 -- accel/accel.sh@20 -- # IFS=: 00:07:34.117 03:39:53 -- accel/accel.sh@20 -- # read -r var val 00:07:34.117 03:39:53 -- accel/accel.sh@21 -- # val= 00:07:34.117 03:39:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.117 03:39:53 -- accel/accel.sh@20 -- # IFS=: 00:07:34.117 03:39:53 -- accel/accel.sh@20 -- # read -r var val 00:07:34.117 03:39:53 -- accel/accel.sh@21 -- # val= 00:07:34.117 03:39:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.117 03:39:53 -- accel/accel.sh@20 -- # IFS=: 00:07:34.117 03:39:53 -- accel/accel.sh@20 -- # read -r var val 00:07:34.117 03:39:53 -- accel/accel.sh@21 -- # val= 00:07:34.117 03:39:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.117 03:39:53 -- accel/accel.sh@20 -- # IFS=: 00:07:34.117 03:39:53 -- accel/accel.sh@20 -- # read -r var val 00:07:34.117 03:39:53 -- accel/accel.sh@21 -- # val= 00:07:34.117 03:39:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.117 03:39:53 -- accel/accel.sh@20 -- # IFS=: 00:07:34.117 03:39:53 -- accel/accel.sh@20 -- # read -r var val 00:07:34.117 03:39:53 -- accel/accel.sh@21 -- # val= 00:07:34.117 03:39:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.117 03:39:53 -- accel/accel.sh@20 -- # IFS=: 00:07:34.117 03:39:53 -- accel/accel.sh@20 -- # read -r var val 00:07:34.117 03:39:53 -- accel/accel.sh@21 -- # val= 00:07:34.117 03:39:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.117 03:39:53 -- accel/accel.sh@20 -- # IFS=: 00:07:34.117 03:39:53 -- accel/accel.sh@20 -- # read -r var val 00:07:34.117 03:39:53 -- accel/accel.sh@21 -- # val= 00:07:34.117 03:39:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.117 
03:39:53 -- accel/accel.sh@20 -- # IFS=: 00:07:34.117 03:39:53 -- accel/accel.sh@20 -- # read -r var val 00:07:34.117 03:39:53 -- accel/accel.sh@21 -- # val= 00:07:34.117 03:39:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.117 03:39:53 -- accel/accel.sh@20 -- # IFS=: 00:07:34.117 03:39:53 -- accel/accel.sh@20 -- # read -r var val 00:07:34.117 03:39:53 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:34.117 03:39:53 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:34.117 03:39:53 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:34.117 00:07:34.117 real 0m2.826s 00:07:34.117 user 0m9.411s 00:07:34.117 sys 0m0.296s 00:07:34.117 03:39:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:34.117 03:39:53 -- common/autotest_common.sh@10 -- # set +x 00:07:34.117 ************************************ 00:07:34.117 END TEST accel_decomp_mcore 00:07:34.117 ************************************ 00:07:34.429 03:39:53 -- accel/accel.sh@112 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:34.429 03:39:53 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:07:34.429 03:39:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:34.429 03:39:53 -- common/autotest_common.sh@10 -- # set +x 00:07:34.429 ************************************ 00:07:34.429 START TEST accel_decomp_full_mcore 00:07:34.429 ************************************ 00:07:34.429 03:39:53 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:34.429 03:39:53 -- accel/accel.sh@16 -- # local accel_opc 00:07:34.429 03:39:53 -- accel/accel.sh@17 -- # local accel_module 00:07:34.429 03:39:53 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:34.429 03:39:53 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:34.429 03:39:53 -- accel/accel.sh@12 -- # build_accel_config 00:07:34.429 03:39:53 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:34.429 03:39:53 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:34.429 03:39:53 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:34.429 03:39:53 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:34.429 03:39:53 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:34.429 03:39:53 -- accel/accel.sh@41 -- # local IFS=, 00:07:34.429 03:39:53 -- accel/accel.sh@42 -- # jq -r . 00:07:34.429 [2024-07-14 03:39:53.071481] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
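The -m 0xf argument used by the mcore runs is a hexadecimal core bitmask: bit i selects core i, which is why the log reports four reactors on cores 0 through 3 and why user time (0m9.411s) comes out well above real time (0m2.826s), consistent with four cores busy at once. A small decoding loop, with the mask value as the only input:

    # Decode an SPDK-style core mask; 0xf selects cores 0, 1, 2 and 3.
    mask=0xf
    for ((i = 0; i < 64; i++)); do
        if (( (mask >> i) & 1 )); then
            echo "core $i selected"
        fi
    done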
00:07:34.429 [2024-07-14 03:39:53.071563] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2274803 ] 00:07:34.429 EAL: No free 2048 kB hugepages reported on node 1 00:07:34.429 [2024-07-14 03:39:53.134819] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:34.429 [2024-07-14 03:39:53.233274] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:34.429 [2024-07-14 03:39:53.233330] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:34.429 [2024-07-14 03:39:53.233450] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:34.429 [2024-07-14 03:39:53.233453] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.806 03:39:54 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:35.806 00:07:35.806 SPDK Configuration: 00:07:35.806 Core mask: 0xf 00:07:35.806 00:07:35.806 Accel Perf Configuration: 00:07:35.806 Workload Type: decompress 00:07:35.806 Transfer size: 111250 bytes 00:07:35.806 Vector count 1 00:07:35.806 Module: software 00:07:35.806 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:35.806 Queue depth: 32 00:07:35.806 Allocate depth: 32 00:07:35.806 # threads/core: 1 00:07:35.806 Run time: 1 seconds 00:07:35.806 Verify: Yes 00:07:35.806 00:07:35.806 Running for 1 seconds... 00:07:35.806 00:07:35.806 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:35.806 ------------------------------------------------------------------------------------ 00:07:35.806 0,0 4288/s 177 MiB/s 0 0 00:07:35.806 3,0 4288/s 177 MiB/s 0 0 00:07:35.806 2,0 4288/s 177 MiB/s 0 0 00:07:35.806 1,0 4288/s 177 MiB/s 0 0 00:07:35.806 ==================================================================================== 00:07:35.806 Total 17152/s 1819 MiB/s 0 0' 00:07:35.806 03:39:54 -- accel/accel.sh@20 -- # IFS=: 00:07:35.806 03:39:54 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:35.806 03:39:54 -- accel/accel.sh@20 -- # read -r var val 00:07:35.806 03:39:54 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:35.806 03:39:54 -- accel/accel.sh@12 -- # build_accel_config 00:07:35.806 03:39:54 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:35.806 03:39:54 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:35.806 03:39:54 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:35.806 03:39:54 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:35.806 03:39:54 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:35.806 03:39:54 -- accel/accel.sh@41 -- # local IFS=, 00:07:35.806 03:39:54 -- accel/accel.sh@42 -- # jq -r . 00:07:35.806 [2024-07-14 03:39:54.482855] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:07:35.806 [2024-07-14 03:39:54.482965] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2274955 ] 00:07:35.806 EAL: No free 2048 kB hugepages reported on node 1 00:07:35.806 [2024-07-14 03:39:54.545309] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:35.806 [2024-07-14 03:39:54.638800] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:35.806 [2024-07-14 03:39:54.638852] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:35.806 [2024-07-14 03:39:54.638919] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:35.806 [2024-07-14 03:39:54.638922] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.806 03:39:54 -- accel/accel.sh@21 -- # val= 00:07:35.806 03:39:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.806 03:39:54 -- accel/accel.sh@20 -- # IFS=: 00:07:35.806 03:39:54 -- accel/accel.sh@20 -- # read -r var val 00:07:35.806 03:39:54 -- accel/accel.sh@21 -- # val= 00:07:35.806 03:39:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.806 03:39:54 -- accel/accel.sh@20 -- # IFS=: 00:07:35.806 03:39:54 -- accel/accel.sh@20 -- # read -r var val 00:07:35.806 03:39:54 -- accel/accel.sh@21 -- # val= 00:07:35.806 03:39:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.806 03:39:54 -- accel/accel.sh@20 -- # IFS=: 00:07:35.806 03:39:54 -- accel/accel.sh@20 -- # read -r var val 00:07:35.806 03:39:54 -- accel/accel.sh@21 -- # val=0xf 00:07:35.806 03:39:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.806 03:39:54 -- accel/accel.sh@20 -- # IFS=: 00:07:35.806 03:39:54 -- accel/accel.sh@20 -- # read -r var val 00:07:35.806 03:39:54 -- accel/accel.sh@21 -- # val= 00:07:35.806 03:39:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.806 03:39:54 -- accel/accel.sh@20 -- # IFS=: 00:07:35.806 03:39:54 -- accel/accel.sh@20 -- # read -r var val 00:07:35.806 03:39:54 -- accel/accel.sh@21 -- # val= 00:07:35.806 03:39:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.806 03:39:54 -- accel/accel.sh@20 -- # IFS=: 00:07:35.806 03:39:54 -- accel/accel.sh@20 -- # read -r var val 00:07:35.806 03:39:54 -- accel/accel.sh@21 -- # val=decompress 00:07:35.806 03:39:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.806 03:39:54 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:35.806 03:39:54 -- accel/accel.sh@20 -- # IFS=: 00:07:35.806 03:39:54 -- accel/accel.sh@20 -- # read -r var val 00:07:35.806 03:39:54 -- accel/accel.sh@21 -- # val='111250 bytes' 00:07:35.806 03:39:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.806 03:39:54 -- accel/accel.sh@20 -- # IFS=: 00:07:35.806 03:39:54 -- accel/accel.sh@20 -- # read -r var val 00:07:35.806 03:39:54 -- accel/accel.sh@21 -- # val= 00:07:35.806 03:39:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.806 03:39:54 -- accel/accel.sh@20 -- # IFS=: 00:07:35.806 03:39:54 -- accel/accel.sh@20 -- # read -r var val 00:07:35.806 03:39:54 -- accel/accel.sh@21 -- # val=software 00:07:35.806 03:39:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.806 03:39:54 -- accel/accel.sh@23 -- # accel_module=software 00:07:35.806 03:39:54 -- accel/accel.sh@20 -- # IFS=: 00:07:35.806 03:39:54 -- accel/accel.sh@20 -- # read -r var val 00:07:35.806 03:39:54 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:35.806 03:39:54 -- accel/accel.sh@22 -- # case 
"$var" in 00:07:35.806 03:39:54 -- accel/accel.sh@20 -- # IFS=: 00:07:35.806 03:39:54 -- accel/accel.sh@20 -- # read -r var val 00:07:35.806 03:39:54 -- accel/accel.sh@21 -- # val=32 00:07:35.806 03:39:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.806 03:39:54 -- accel/accel.sh@20 -- # IFS=: 00:07:35.806 03:39:54 -- accel/accel.sh@20 -- # read -r var val 00:07:35.806 03:39:54 -- accel/accel.sh@21 -- # val=32 00:07:35.806 03:39:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.806 03:39:54 -- accel/accel.sh@20 -- # IFS=: 00:07:35.806 03:39:54 -- accel/accel.sh@20 -- # read -r var val 00:07:35.806 03:39:54 -- accel/accel.sh@21 -- # val=1 00:07:35.806 03:39:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.806 03:39:54 -- accel/accel.sh@20 -- # IFS=: 00:07:35.806 03:39:54 -- accel/accel.sh@20 -- # read -r var val 00:07:35.806 03:39:54 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:35.806 03:39:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.806 03:39:54 -- accel/accel.sh@20 -- # IFS=: 00:07:35.806 03:39:54 -- accel/accel.sh@20 -- # read -r var val 00:07:35.806 03:39:54 -- accel/accel.sh@21 -- # val=Yes 00:07:35.806 03:39:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.806 03:39:54 -- accel/accel.sh@20 -- # IFS=: 00:07:35.806 03:39:54 -- accel/accel.sh@20 -- # read -r var val 00:07:35.806 03:39:54 -- accel/accel.sh@21 -- # val= 00:07:35.806 03:39:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.806 03:39:54 -- accel/accel.sh@20 -- # IFS=: 00:07:35.806 03:39:54 -- accel/accel.sh@20 -- # read -r var val 00:07:35.806 03:39:54 -- accel/accel.sh@21 -- # val= 00:07:35.806 03:39:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.806 03:39:54 -- accel/accel.sh@20 -- # IFS=: 00:07:35.806 03:39:54 -- accel/accel.sh@20 -- # read -r var val 00:07:37.189 03:39:55 -- accel/accel.sh@21 -- # val= 00:07:37.189 03:39:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.189 03:39:55 -- accel/accel.sh@20 -- # IFS=: 00:07:37.189 03:39:55 -- accel/accel.sh@20 -- # read -r var val 00:07:37.189 03:39:55 -- accel/accel.sh@21 -- # val= 00:07:37.189 03:39:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.189 03:39:55 -- accel/accel.sh@20 -- # IFS=: 00:07:37.189 03:39:55 -- accel/accel.sh@20 -- # read -r var val 00:07:37.189 03:39:55 -- accel/accel.sh@21 -- # val= 00:07:37.189 03:39:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.189 03:39:55 -- accel/accel.sh@20 -- # IFS=: 00:07:37.189 03:39:55 -- accel/accel.sh@20 -- # read -r var val 00:07:37.189 03:39:55 -- accel/accel.sh@21 -- # val= 00:07:37.189 03:39:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.189 03:39:55 -- accel/accel.sh@20 -- # IFS=: 00:07:37.189 03:39:55 -- accel/accel.sh@20 -- # read -r var val 00:07:37.189 03:39:55 -- accel/accel.sh@21 -- # val= 00:07:37.189 03:39:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.189 03:39:55 -- accel/accel.sh@20 -- # IFS=: 00:07:37.189 03:39:55 -- accel/accel.sh@20 -- # read -r var val 00:07:37.189 03:39:55 -- accel/accel.sh@21 -- # val= 00:07:37.189 03:39:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.189 03:39:55 -- accel/accel.sh@20 -- # IFS=: 00:07:37.189 03:39:55 -- accel/accel.sh@20 -- # read -r var val 00:07:37.189 03:39:55 -- accel/accel.sh@21 -- # val= 00:07:37.189 03:39:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.189 03:39:55 -- accel/accel.sh@20 -- # IFS=: 00:07:37.189 03:39:55 -- accel/accel.sh@20 -- # read -r var val 00:07:37.189 03:39:55 -- accel/accel.sh@21 -- # val= 00:07:37.189 03:39:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.189 
03:39:55 -- accel/accel.sh@20 -- # IFS=: 00:07:37.189 03:39:55 -- accel/accel.sh@20 -- # read -r var val 00:07:37.189 03:39:55 -- accel/accel.sh@21 -- # val= 00:07:37.189 03:39:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.189 03:39:55 -- accel/accel.sh@20 -- # IFS=: 00:07:37.189 03:39:55 -- accel/accel.sh@20 -- # read -r var val 00:07:37.189 03:39:55 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:37.189 03:39:55 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:37.189 03:39:55 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:37.189 00:07:37.189 real 0m2.836s 00:07:37.189 user 0m9.463s 00:07:37.189 sys 0m0.296s 00:07:37.189 03:39:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:37.189 03:39:55 -- common/autotest_common.sh@10 -- # set +x 00:07:37.189 ************************************ 00:07:37.189 END TEST accel_decomp_full_mcore 00:07:37.189 ************************************ 00:07:37.189 03:39:55 -- accel/accel.sh@113 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:37.189 03:39:55 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:07:37.189 03:39:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:37.189 03:39:55 -- common/autotest_common.sh@10 -- # set +x 00:07:37.189 ************************************ 00:07:37.189 START TEST accel_decomp_mthread 00:07:37.189 ************************************ 00:07:37.189 03:39:55 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:37.189 03:39:55 -- accel/accel.sh@16 -- # local accel_opc 00:07:37.189 03:39:55 -- accel/accel.sh@17 -- # local accel_module 00:07:37.189 03:39:55 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:37.189 03:39:55 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:37.189 03:39:55 -- accel/accel.sh@12 -- # build_accel_config 00:07:37.189 03:39:55 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:37.189 03:39:55 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:37.189 03:39:55 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:37.189 03:39:55 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:37.189 03:39:55 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:37.189 03:39:55 -- accel/accel.sh@41 -- # local IFS=, 00:07:37.189 03:39:55 -- accel/accel.sh@42 -- # jq -r . 00:07:37.189 [2024-07-14 03:39:55.932056] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:07:37.189 [2024-07-14 03:39:55.932136] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2275120 ] 00:07:37.189 EAL: No free 2048 kB hugepages reported on node 1 00:07:37.189 [2024-07-14 03:39:55.998037] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:37.189 [2024-07-14 03:39:56.088773] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.570 03:39:57 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:07:38.570 00:07:38.570 SPDK Configuration: 00:07:38.570 Core mask: 0x1 00:07:38.570 00:07:38.570 Accel Perf Configuration: 00:07:38.570 Workload Type: decompress 00:07:38.570 Transfer size: 4096 bytes 00:07:38.570 Vector count 1 00:07:38.570 Module: software 00:07:38.570 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:38.570 Queue depth: 32 00:07:38.570 Allocate depth: 32 00:07:38.570 # threads/core: 2 00:07:38.570 Run time: 1 seconds 00:07:38.570 Verify: Yes 00:07:38.570 00:07:38.570 Running for 1 seconds... 00:07:38.570 00:07:38.570 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:38.570 ------------------------------------------------------------------------------------ 00:07:38.570 0,1 28128/s 51 MiB/s 0 0 00:07:38.570 0,0 28032/s 51 MiB/s 0 0 00:07:38.570 ==================================================================================== 00:07:38.570 Total 56160/s 219 MiB/s 0 0' 00:07:38.570 03:39:57 -- accel/accel.sh@20 -- # IFS=: 00:07:38.570 03:39:57 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:38.570 03:39:57 -- accel/accel.sh@20 -- # read -r var val 00:07:38.570 03:39:57 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:38.570 03:39:57 -- accel/accel.sh@12 -- # build_accel_config 00:07:38.570 03:39:57 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:38.570 03:39:57 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:38.570 03:39:57 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:38.570 03:39:57 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:38.570 03:39:57 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:38.570 03:39:57 -- accel/accel.sh@41 -- # local IFS=, 00:07:38.570 03:39:57 -- accel/accel.sh@42 -- # jq -r . 00:07:38.570 [2024-07-14 03:39:57.352064] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
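The -T 2 option in this run places two worker threads on core 0, which is why the results table carries both a 0,0 and a 0,1 row. As a quick consistency check, the per-thread Transfers figures can be summed and compared against the Total row; the perf.out capture file and the column layout assumed below are hypothetical, based only on the table format printed in this log:

    # Sum per-thread transfer rates and compare with the Total row.
    # Assumes the results table was captured to perf.out in the layout shown above.
    awk '/^[[:space:]]*[0-9]+,[0-9]+/ { sum += $2 }
         /^[[:space:]]*Total/         { printf "per-thread sum: %d, reported total: %s\n", sum, $2 }' perf.out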
00:07:38.570 [2024-07-14 03:39:57.352158] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2275258 ] 00:07:38.570 EAL: No free 2048 kB hugepages reported on node 1 00:07:38.570 [2024-07-14 03:39:57.415272] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:38.570 [2024-07-14 03:39:57.504111] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.830 03:39:57 -- accel/accel.sh@21 -- # val= 00:07:38.830 03:39:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.830 03:39:57 -- accel/accel.sh@20 -- # IFS=: 00:07:38.830 03:39:57 -- accel/accel.sh@20 -- # read -r var val 00:07:38.830 03:39:57 -- accel/accel.sh@21 -- # val= 00:07:38.830 03:39:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.830 03:39:57 -- accel/accel.sh@20 -- # IFS=: 00:07:38.830 03:39:57 -- accel/accel.sh@20 -- # read -r var val 00:07:38.830 03:39:57 -- accel/accel.sh@21 -- # val= 00:07:38.830 03:39:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.830 03:39:57 -- accel/accel.sh@20 -- # IFS=: 00:07:38.830 03:39:57 -- accel/accel.sh@20 -- # read -r var val 00:07:38.830 03:39:57 -- accel/accel.sh@21 -- # val=0x1 00:07:38.830 03:39:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.830 03:39:57 -- accel/accel.sh@20 -- # IFS=: 00:07:38.830 03:39:57 -- accel/accel.sh@20 -- # read -r var val 00:07:38.830 03:39:57 -- accel/accel.sh@21 -- # val= 00:07:38.830 03:39:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.830 03:39:57 -- accel/accel.sh@20 -- # IFS=: 00:07:38.830 03:39:57 -- accel/accel.sh@20 -- # read -r var val 00:07:38.830 03:39:57 -- accel/accel.sh@21 -- # val= 00:07:38.830 03:39:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.830 03:39:57 -- accel/accel.sh@20 -- # IFS=: 00:07:38.830 03:39:57 -- accel/accel.sh@20 -- # read -r var val 00:07:38.830 03:39:57 -- accel/accel.sh@21 -- # val=decompress 00:07:38.830 03:39:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.830 03:39:57 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:38.830 03:39:57 -- accel/accel.sh@20 -- # IFS=: 00:07:38.830 03:39:57 -- accel/accel.sh@20 -- # read -r var val 00:07:38.830 03:39:57 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:38.830 03:39:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.830 03:39:57 -- accel/accel.sh@20 -- # IFS=: 00:07:38.830 03:39:57 -- accel/accel.sh@20 -- # read -r var val 00:07:38.830 03:39:57 -- accel/accel.sh@21 -- # val= 00:07:38.830 03:39:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.830 03:39:57 -- accel/accel.sh@20 -- # IFS=: 00:07:38.830 03:39:57 -- accel/accel.sh@20 -- # read -r var val 00:07:38.830 03:39:57 -- accel/accel.sh@21 -- # val=software 00:07:38.830 03:39:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.830 03:39:57 -- accel/accel.sh@23 -- # accel_module=software 00:07:38.830 03:39:57 -- accel/accel.sh@20 -- # IFS=: 00:07:38.830 03:39:57 -- accel/accel.sh@20 -- # read -r var val 00:07:38.830 03:39:57 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:38.830 03:39:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.830 03:39:57 -- accel/accel.sh@20 -- # IFS=: 00:07:38.830 03:39:57 -- accel/accel.sh@20 -- # read -r var val 00:07:38.830 03:39:57 -- accel/accel.sh@21 -- # val=32 00:07:38.830 03:39:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.830 03:39:57 -- accel/accel.sh@20 -- # IFS=: 00:07:38.830 03:39:57 
-- accel/accel.sh@20 -- # read -r var val 00:07:38.830 03:39:57 -- accel/accel.sh@21 -- # val=32 00:07:38.830 03:39:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.830 03:39:57 -- accel/accel.sh@20 -- # IFS=: 00:07:38.830 03:39:57 -- accel/accel.sh@20 -- # read -r var val 00:07:38.830 03:39:57 -- accel/accel.sh@21 -- # val=2 00:07:38.830 03:39:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.830 03:39:57 -- accel/accel.sh@20 -- # IFS=: 00:07:38.830 03:39:57 -- accel/accel.sh@20 -- # read -r var val 00:07:38.830 03:39:57 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:38.830 03:39:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.830 03:39:57 -- accel/accel.sh@20 -- # IFS=: 00:07:38.830 03:39:57 -- accel/accel.sh@20 -- # read -r var val 00:07:38.830 03:39:57 -- accel/accel.sh@21 -- # val=Yes 00:07:38.830 03:39:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.830 03:39:57 -- accel/accel.sh@20 -- # IFS=: 00:07:38.830 03:39:57 -- accel/accel.sh@20 -- # read -r var val 00:07:38.830 03:39:57 -- accel/accel.sh@21 -- # val= 00:07:38.830 03:39:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.830 03:39:57 -- accel/accel.sh@20 -- # IFS=: 00:07:38.830 03:39:57 -- accel/accel.sh@20 -- # read -r var val 00:07:38.830 03:39:57 -- accel/accel.sh@21 -- # val= 00:07:38.830 03:39:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.830 03:39:57 -- accel/accel.sh@20 -- # IFS=: 00:07:38.830 03:39:57 -- accel/accel.sh@20 -- # read -r var val 00:07:40.212 03:39:58 -- accel/accel.sh@21 -- # val= 00:07:40.212 03:39:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.212 03:39:58 -- accel/accel.sh@20 -- # IFS=: 00:07:40.212 03:39:58 -- accel/accel.sh@20 -- # read -r var val 00:07:40.212 03:39:58 -- accel/accel.sh@21 -- # val= 00:07:40.212 03:39:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.212 03:39:58 -- accel/accel.sh@20 -- # IFS=: 00:07:40.212 03:39:58 -- accel/accel.sh@20 -- # read -r var val 00:07:40.212 03:39:58 -- accel/accel.sh@21 -- # val= 00:07:40.212 03:39:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.212 03:39:58 -- accel/accel.sh@20 -- # IFS=: 00:07:40.212 03:39:58 -- accel/accel.sh@20 -- # read -r var val 00:07:40.212 03:39:58 -- accel/accel.sh@21 -- # val= 00:07:40.212 03:39:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.212 03:39:58 -- accel/accel.sh@20 -- # IFS=: 00:07:40.212 03:39:58 -- accel/accel.sh@20 -- # read -r var val 00:07:40.212 03:39:58 -- accel/accel.sh@21 -- # val= 00:07:40.212 03:39:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.212 03:39:58 -- accel/accel.sh@20 -- # IFS=: 00:07:40.212 03:39:58 -- accel/accel.sh@20 -- # read -r var val 00:07:40.212 03:39:58 -- accel/accel.sh@21 -- # val= 00:07:40.212 03:39:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.212 03:39:58 -- accel/accel.sh@20 -- # IFS=: 00:07:40.212 03:39:58 -- accel/accel.sh@20 -- # read -r var val 00:07:40.212 03:39:58 -- accel/accel.sh@21 -- # val= 00:07:40.212 03:39:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.212 03:39:58 -- accel/accel.sh@20 -- # IFS=: 00:07:40.212 03:39:58 -- accel/accel.sh@20 -- # read -r var val 00:07:40.212 03:39:58 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:40.212 03:39:58 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:40.212 03:39:58 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:40.212 00:07:40.212 real 0m2.833s 00:07:40.212 user 0m2.521s 00:07:40.212 sys 0m0.306s 00:07:40.212 03:39:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:40.212 03:39:58 -- common/autotest_common.sh@10 -- # set +x 
00:07:40.212 ************************************ 00:07:40.212 END TEST accel_decomp_mthread 00:07:40.212 ************************************ 00:07:40.212 03:39:58 -- accel/accel.sh@114 -- # run_test accel_deomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:40.212 03:39:58 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:07:40.212 03:39:58 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:40.212 03:39:58 -- common/autotest_common.sh@10 -- # set +x 00:07:40.212 ************************************ 00:07:40.212 START TEST accel_deomp_full_mthread 00:07:40.212 ************************************ 00:07:40.212 03:39:58 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:40.212 03:39:58 -- accel/accel.sh@16 -- # local accel_opc 00:07:40.212 03:39:58 -- accel/accel.sh@17 -- # local accel_module 00:07:40.213 03:39:58 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:40.213 03:39:58 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:40.213 03:39:58 -- accel/accel.sh@12 -- # build_accel_config 00:07:40.213 03:39:58 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:40.213 03:39:58 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:40.213 03:39:58 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:40.213 03:39:58 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:40.213 03:39:58 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:40.213 03:39:58 -- accel/accel.sh@41 -- # local IFS=, 00:07:40.213 03:39:58 -- accel/accel.sh@42 -- # jq -r . 00:07:40.213 [2024-07-14 03:39:58.790659] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:07:40.213 [2024-07-14 03:39:58.790737] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2275538 ] 00:07:40.213 EAL: No free 2048 kB hugepages reported on node 1 00:07:40.213 [2024-07-14 03:39:58.852462] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:40.213 [2024-07-14 03:39:58.941462] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.593 03:40:00 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:41.593 00:07:41.593 SPDK Configuration: 00:07:41.593 Core mask: 0x1 00:07:41.593 00:07:41.593 Accel Perf Configuration: 00:07:41.593 Workload Type: decompress 00:07:41.593 Transfer size: 111250 bytes 00:07:41.593 Vector count 1 00:07:41.593 Module: software 00:07:41.593 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:41.593 Queue depth: 32 00:07:41.593 Allocate depth: 32 00:07:41.593 # threads/core: 2 00:07:41.593 Run time: 1 seconds 00:07:41.593 Verify: Yes 00:07:41.593 00:07:41.593 Running for 1 seconds... 
00:07:41.593 00:07:41.593 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:41.593 ------------------------------------------------------------------------------------ 00:07:41.593 0,1 1952/s 80 MiB/s 0 0 00:07:41.593 0,0 1920/s 79 MiB/s 0 0 00:07:41.593 ==================================================================================== 00:07:41.593 Total 3872/s 410 MiB/s 0 0' 00:07:41.593 03:40:00 -- accel/accel.sh@20 -- # IFS=: 00:07:41.593 03:40:00 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:41.593 03:40:00 -- accel/accel.sh@20 -- # read -r var val 00:07:41.593 03:40:00 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:41.593 03:40:00 -- accel/accel.sh@12 -- # build_accel_config 00:07:41.593 03:40:00 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:41.593 03:40:00 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:41.593 03:40:00 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:41.593 03:40:00 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:41.593 03:40:00 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:41.593 03:40:00 -- accel/accel.sh@41 -- # local IFS=, 00:07:41.593 03:40:00 -- accel/accel.sh@42 -- # jq -r . 00:07:41.593 [2024-07-14 03:40:00.216888] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:07:41.593 [2024-07-14 03:40:00.216980] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2275686 ] 00:07:41.593 EAL: No free 2048 kB hugepages reported on node 1 00:07:41.593 [2024-07-14 03:40:00.278006] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.593 [2024-07-14 03:40:00.368373] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.593 03:40:00 -- accel/accel.sh@21 -- # val= 00:07:41.593 03:40:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.593 03:40:00 -- accel/accel.sh@20 -- # IFS=: 00:07:41.593 03:40:00 -- accel/accel.sh@20 -- # read -r var val 00:07:41.593 03:40:00 -- accel/accel.sh@21 -- # val= 00:07:41.593 03:40:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.593 03:40:00 -- accel/accel.sh@20 -- # IFS=: 00:07:41.593 03:40:00 -- accel/accel.sh@20 -- # read -r var val 00:07:41.593 03:40:00 -- accel/accel.sh@21 -- # val= 00:07:41.593 03:40:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.593 03:40:00 -- accel/accel.sh@20 -- # IFS=: 00:07:41.593 03:40:00 -- accel/accel.sh@20 -- # read -r var val 00:07:41.593 03:40:00 -- accel/accel.sh@21 -- # val=0x1 00:07:41.593 03:40:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.593 03:40:00 -- accel/accel.sh@20 -- # IFS=: 00:07:41.593 03:40:00 -- accel/accel.sh@20 -- # read -r var val 00:07:41.593 03:40:00 -- accel/accel.sh@21 -- # val= 00:07:41.593 03:40:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.593 03:40:00 -- accel/accel.sh@20 -- # IFS=: 00:07:41.593 03:40:00 -- accel/accel.sh@20 -- # read -r var val 00:07:41.593 03:40:00 -- accel/accel.sh@21 -- # val= 00:07:41.593 03:40:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.593 03:40:00 -- accel/accel.sh@20 -- # IFS=: 00:07:41.593 03:40:00 -- accel/accel.sh@20 -- # read -r var val 00:07:41.593 03:40:00 -- accel/accel.sh@21 -- # val=decompress 00:07:41.593 
03:40:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.593 03:40:00 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:41.593 03:40:00 -- accel/accel.sh@20 -- # IFS=: 00:07:41.593 03:40:00 -- accel/accel.sh@20 -- # read -r var val 00:07:41.593 03:40:00 -- accel/accel.sh@21 -- # val='111250 bytes' 00:07:41.593 03:40:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.593 03:40:00 -- accel/accel.sh@20 -- # IFS=: 00:07:41.593 03:40:00 -- accel/accel.sh@20 -- # read -r var val 00:07:41.593 03:40:00 -- accel/accel.sh@21 -- # val= 00:07:41.593 03:40:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.593 03:40:00 -- accel/accel.sh@20 -- # IFS=: 00:07:41.593 03:40:00 -- accel/accel.sh@20 -- # read -r var val 00:07:41.593 03:40:00 -- accel/accel.sh@21 -- # val=software 00:07:41.593 03:40:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.593 03:40:00 -- accel/accel.sh@23 -- # accel_module=software 00:07:41.593 03:40:00 -- accel/accel.sh@20 -- # IFS=: 00:07:41.593 03:40:00 -- accel/accel.sh@20 -- # read -r var val 00:07:41.593 03:40:00 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:41.593 03:40:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.593 03:40:00 -- accel/accel.sh@20 -- # IFS=: 00:07:41.593 03:40:00 -- accel/accel.sh@20 -- # read -r var val 00:07:41.593 03:40:00 -- accel/accel.sh@21 -- # val=32 00:07:41.593 03:40:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.593 03:40:00 -- accel/accel.sh@20 -- # IFS=: 00:07:41.593 03:40:00 -- accel/accel.sh@20 -- # read -r var val 00:07:41.593 03:40:00 -- accel/accel.sh@21 -- # val=32 00:07:41.593 03:40:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.594 03:40:00 -- accel/accel.sh@20 -- # IFS=: 00:07:41.594 03:40:00 -- accel/accel.sh@20 -- # read -r var val 00:07:41.594 03:40:00 -- accel/accel.sh@21 -- # val=2 00:07:41.594 03:40:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.594 03:40:00 -- accel/accel.sh@20 -- # IFS=: 00:07:41.594 03:40:00 -- accel/accel.sh@20 -- # read -r var val 00:07:41.594 03:40:00 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:41.594 03:40:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.594 03:40:00 -- accel/accel.sh@20 -- # IFS=: 00:07:41.594 03:40:00 -- accel/accel.sh@20 -- # read -r var val 00:07:41.594 03:40:00 -- accel/accel.sh@21 -- # val=Yes 00:07:41.594 03:40:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.594 03:40:00 -- accel/accel.sh@20 -- # IFS=: 00:07:41.594 03:40:00 -- accel/accel.sh@20 -- # read -r var val 00:07:41.594 03:40:00 -- accel/accel.sh@21 -- # val= 00:07:41.594 03:40:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.594 03:40:00 -- accel/accel.sh@20 -- # IFS=: 00:07:41.594 03:40:00 -- accel/accel.sh@20 -- # read -r var val 00:07:41.594 03:40:00 -- accel/accel.sh@21 -- # val= 00:07:41.594 03:40:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.594 03:40:00 -- accel/accel.sh@20 -- # IFS=: 00:07:41.594 03:40:00 -- accel/accel.sh@20 -- # read -r var val 00:07:42.972 03:40:01 -- accel/accel.sh@21 -- # val= 00:07:42.972 03:40:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.972 03:40:01 -- accel/accel.sh@20 -- # IFS=: 00:07:42.972 03:40:01 -- accel/accel.sh@20 -- # read -r var val 00:07:42.972 03:40:01 -- accel/accel.sh@21 -- # val= 00:07:42.972 03:40:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.972 03:40:01 -- accel/accel.sh@20 -- # IFS=: 00:07:42.972 03:40:01 -- accel/accel.sh@20 -- # read -r var val 00:07:42.972 03:40:01 -- accel/accel.sh@21 -- # val= 00:07:42.972 03:40:01 -- accel/accel.sh@22 -- # 
case "$var" in 00:07:42.972 03:40:01 -- accel/accel.sh@20 -- # IFS=: 00:07:42.972 03:40:01 -- accel/accel.sh@20 -- # read -r var val 00:07:42.972 03:40:01 -- accel/accel.sh@21 -- # val= 00:07:42.972 03:40:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.972 03:40:01 -- accel/accel.sh@20 -- # IFS=: 00:07:42.972 03:40:01 -- accel/accel.sh@20 -- # read -r var val 00:07:42.972 03:40:01 -- accel/accel.sh@21 -- # val= 00:07:42.972 03:40:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.972 03:40:01 -- accel/accel.sh@20 -- # IFS=: 00:07:42.972 03:40:01 -- accel/accel.sh@20 -- # read -r var val 00:07:42.972 03:40:01 -- accel/accel.sh@21 -- # val= 00:07:42.972 03:40:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.972 03:40:01 -- accel/accel.sh@20 -- # IFS=: 00:07:42.972 03:40:01 -- accel/accel.sh@20 -- # read -r var val 00:07:42.972 03:40:01 -- accel/accel.sh@21 -- # val= 00:07:42.972 03:40:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.972 03:40:01 -- accel/accel.sh@20 -- # IFS=: 00:07:42.972 03:40:01 -- accel/accel.sh@20 -- # read -r var val 00:07:42.972 03:40:01 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:42.973 03:40:01 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:42.973 03:40:01 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:42.973 00:07:42.973 real 0m2.870s 00:07:42.973 user 0m2.583s 00:07:42.973 sys 0m0.281s 00:07:42.973 03:40:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:42.973 03:40:01 -- common/autotest_common.sh@10 -- # set +x 00:07:42.973 ************************************ 00:07:42.973 END TEST accel_deomp_full_mthread 00:07:42.973 ************************************ 00:07:42.973 03:40:01 -- accel/accel.sh@116 -- # [[ n == y ]] 00:07:42.973 03:40:01 -- accel/accel.sh@129 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:42.973 03:40:01 -- accel/accel.sh@129 -- # build_accel_config 00:07:42.973 03:40:01 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:42.973 03:40:01 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:07:42.973 03:40:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:42.973 03:40:01 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:42.973 03:40:01 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:42.973 03:40:01 -- common/autotest_common.sh@10 -- # set +x 00:07:42.973 03:40:01 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:42.973 03:40:01 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:42.973 03:40:01 -- accel/accel.sh@41 -- # local IFS=, 00:07:42.973 03:40:01 -- accel/accel.sh@42 -- # jq -r . 00:07:42.973 ************************************ 00:07:42.973 START TEST accel_dif_functional_tests 00:07:42.973 ************************************ 00:07:42.973 03:40:01 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:42.973 [2024-07-14 03:40:01.710127] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:07:42.973 [2024-07-14 03:40:01.710216] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2275849 ] 00:07:42.973 EAL: No free 2048 kB hugepages reported on node 1 00:07:42.973 [2024-07-14 03:40:01.777038] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:42.973 [2024-07-14 03:40:01.869554] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:42.973 [2024-07-14 03:40:01.869626] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:42.973 [2024-07-14 03:40:01.869628] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:43.232 00:07:43.232 00:07:43.232 CUnit - A unit testing framework for C - Version 2.1-3 00:07:43.232 http://cunit.sourceforge.net/ 00:07:43.232 00:07:43.232 00:07:43.232 Suite: accel_dif 00:07:43.232 Test: verify: DIF generated, GUARD check ...passed 00:07:43.232 Test: verify: DIF generated, APPTAG check ...passed 00:07:43.232 Test: verify: DIF generated, REFTAG check ...passed 00:07:43.232 Test: verify: DIF not generated, GUARD check ...[2024-07-14 03:40:01.959638] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:43.232 [2024-07-14 03:40:01.959711] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:43.232 passed 00:07:43.232 Test: verify: DIF not generated, APPTAG check ...[2024-07-14 03:40:01.959748] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:43.232 [2024-07-14 03:40:01.959774] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:43.232 passed 00:07:43.232 Test: verify: DIF not generated, REFTAG check ...[2024-07-14 03:40:01.959803] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:43.232 [2024-07-14 03:40:01.959827] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:43.232 passed 00:07:43.232 Test: verify: APPTAG correct, APPTAG check ...passed 00:07:43.232 Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-14 03:40:01.959909] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:07:43.232 passed 00:07:43.232 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:07:43.232 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:07:43.232 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:07:43.232 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-14 03:40:01.960049] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:07:43.232 passed 00:07:43.232 Test: generate copy: DIF generated, GUARD check ...passed 00:07:43.232 Test: generate copy: DIF generated, APTTAG check ...passed 00:07:43.232 Test: generate copy: DIF generated, REFTAG check ...passed 00:07:43.232 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:07:43.232 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:07:43.232 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:07:43.232 Test: generate copy: iovecs-len validate ...[2024-07-14 03:40:01.960286] dif.c:1167:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:07:43.232 passed 00:07:43.232 Test: generate copy: buffer alignment validate ...passed 00:07:43.232 00:07:43.232 Run Summary: Type Total Ran Passed Failed Inactive 00:07:43.232 suites 1 1 n/a 0 0 00:07:43.232 tests 20 20 20 0 0 00:07:43.232 asserts 204 204 204 0 n/a 00:07:43.232 00:07:43.232 Elapsed time = 0.002 seconds 00:07:43.232 00:07:43.232 real 0m0.498s 00:07:43.232 user 0m0.760s 00:07:43.232 sys 0m0.182s 00:07:43.232 03:40:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:43.232 03:40:02 -- common/autotest_common.sh@10 -- # set +x 00:07:43.232 ************************************ 00:07:43.232 END TEST accel_dif_functional_tests 00:07:43.232 ************************************ 00:07:43.491 00:07:43.491 real 0m59.659s 00:07:43.491 user 1m7.443s 00:07:43.491 sys 0m7.143s 00:07:43.491 03:40:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:43.491 03:40:02 -- common/autotest_common.sh@10 -- # set +x 00:07:43.491 ************************************ 00:07:43.491 END TEST accel 00:07:43.491 ************************************ 00:07:43.491 03:40:02 -- spdk/autotest.sh@190 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:07:43.491 03:40:02 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:43.491 03:40:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:43.491 03:40:02 -- common/autotest_common.sh@10 -- # set +x 00:07:43.491 ************************************ 00:07:43.491 START TEST accel_rpc 00:07:43.491 ************************************ 00:07:43.491 03:40:02 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:07:43.491 * Looking for test storage... 00:07:43.491 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:07:43.491 03:40:02 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:43.491 03:40:02 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=2276027 00:07:43.491 03:40:02 -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:07:43.491 03:40:02 -- accel/accel_rpc.sh@15 -- # waitforlisten 2276027 00:07:43.491 03:40:02 -- common/autotest_common.sh@819 -- # '[' -z 2276027 ']' 00:07:43.491 03:40:02 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:43.491 03:40:02 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:43.491 03:40:02 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:43.491 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:43.491 03:40:02 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:43.491 03:40:02 -- common/autotest_common.sh@10 -- # set +x 00:07:43.491 [2024-07-14 03:40:02.317721] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
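Editor's note: the accel_rpc test starting here depends on the target being launched with --wait-for-rpc, so opcode assignments can be changed before the subsystems initialize. A rough sketch of the sequence it exercises, with the rpc.py path abbreviated and the default /var/tmp/spdk.sock socket assumed:

  ./build/bin/spdk_tgt --wait-for-rpc &                  # start with subsystems uninitialized
  spdk_tgt_pid=$!
  # ... poll until /var/tmp/spdk.sock accepts RPCs (what waitforlisten does) ...
  scripts/rpc.py accel_assign_opc -o copy -m software    # only valid before init
  scripts/rpc.py framework_start_init                     # now bring the subsystems up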
00:07:43.491 [2024-07-14 03:40:02.317816] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2276027 ] 00:07:43.491 EAL: No free 2048 kB hugepages reported on node 1 00:07:43.491 [2024-07-14 03:40:02.377597] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:43.751 [2024-07-14 03:40:02.465884] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:43.751 [2024-07-14 03:40:02.466056] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:43.751 03:40:02 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:43.751 03:40:02 -- common/autotest_common.sh@852 -- # return 0 00:07:43.751 03:40:02 -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:07:43.751 03:40:02 -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:07:43.751 03:40:02 -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:07:43.751 03:40:02 -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:07:43.751 03:40:02 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:07:43.751 03:40:02 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:43.751 03:40:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:43.751 03:40:02 -- common/autotest_common.sh@10 -- # set +x 00:07:43.751 ************************************ 00:07:43.751 START TEST accel_assign_opcode 00:07:43.751 ************************************ 00:07:43.751 03:40:02 -- common/autotest_common.sh@1104 -- # accel_assign_opcode_test_suite 00:07:43.751 03:40:02 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:07:43.751 03:40:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:43.751 03:40:02 -- common/autotest_common.sh@10 -- # set +x 00:07:43.751 [2024-07-14 03:40:02.514539] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:07:43.751 03:40:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:43.752 03:40:02 -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:07:43.752 03:40:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:43.752 03:40:02 -- common/autotest_common.sh@10 -- # set +x 00:07:43.752 [2024-07-14 03:40:02.522548] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:07:43.752 03:40:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:43.752 03:40:02 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:07:43.752 03:40:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:43.752 03:40:02 -- common/autotest_common.sh@10 -- # set +x 00:07:44.013 03:40:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:44.013 03:40:02 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:07:44.013 03:40:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:44.013 03:40:02 -- common/autotest_common.sh@10 -- # set +x 00:07:44.013 03:40:02 -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:07:44.013 03:40:02 -- accel/accel_rpc.sh@42 -- # grep software 00:07:44.013 03:40:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:44.013 software 00:07:44.013 00:07:44.013 real 0m0.292s 00:07:44.013 user 0m0.036s 00:07:44.013 sys 0m0.008s 00:07:44.013 03:40:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:44.013 03:40:02 -- common/autotest_common.sh@10 -- # set +x 
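Editor's note: the check above reduces to confirming that the copy opcode now reports the software module; expressed directly against the RPC interface (rpc.py path shortened):

  scripts/rpc.py accel_get_opc_assignments | jq -r '.copy' | grep -q software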
00:07:44.013 ************************************ 00:07:44.013 END TEST accel_assign_opcode 00:07:44.013 ************************************ 00:07:44.013 03:40:02 -- accel/accel_rpc.sh@55 -- # killprocess 2276027 00:07:44.013 03:40:02 -- common/autotest_common.sh@926 -- # '[' -z 2276027 ']' 00:07:44.013 03:40:02 -- common/autotest_common.sh@930 -- # kill -0 2276027 00:07:44.013 03:40:02 -- common/autotest_common.sh@931 -- # uname 00:07:44.013 03:40:02 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:44.013 03:40:02 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2276027 00:07:44.013 03:40:02 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:07:44.013 03:40:02 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:07:44.013 03:40:02 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2276027' 00:07:44.013 killing process with pid 2276027 00:07:44.013 03:40:02 -- common/autotest_common.sh@945 -- # kill 2276027 00:07:44.013 03:40:02 -- common/autotest_common.sh@950 -- # wait 2276027 00:07:44.583 00:07:44.583 real 0m1.034s 00:07:44.583 user 0m0.927s 00:07:44.583 sys 0m0.429s 00:07:44.583 03:40:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:44.583 03:40:03 -- common/autotest_common.sh@10 -- # set +x 00:07:44.583 ************************************ 00:07:44.583 END TEST accel_rpc 00:07:44.583 ************************************ 00:07:44.583 03:40:03 -- spdk/autotest.sh@191 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:44.583 03:40:03 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:44.583 03:40:03 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:44.583 03:40:03 -- common/autotest_common.sh@10 -- # set +x 00:07:44.583 ************************************ 00:07:44.583 START TEST app_cmdline 00:07:44.583 ************************************ 00:07:44.583 03:40:03 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:44.583 * Looking for test storage... 00:07:44.583 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:44.583 03:40:03 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:44.584 03:40:03 -- app/cmdline.sh@17 -- # spdk_tgt_pid=2276230 00:07:44.584 03:40:03 -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:44.584 03:40:03 -- app/cmdline.sh@18 -- # waitforlisten 2276230 00:07:44.584 03:40:03 -- common/autotest_common.sh@819 -- # '[' -z 2276230 ']' 00:07:44.584 03:40:03 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:44.584 03:40:03 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:44.584 03:40:03 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:44.584 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:44.584 03:40:03 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:44.584 03:40:03 -- common/autotest_common.sh@10 -- # set +x 00:07:44.584 [2024-07-14 03:40:03.382595] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
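Editor's note: the app_cmdline test now running starts the target with an RPC whitelist (--rpcs-allowed spdk_get_version,rpc_get_methods); the checks that follow amount to the allowed calls succeeding and any other method being rejected. A condensed sketch, rpc.py path shortened, with the error matching the JSON-RPC response shown below:

  scripts/rpc.py spdk_get_version            # allowed: returns the version object
  scripts/rpc.py rpc_get_methods             # allowed: lists exactly the two whitelisted methods
  scripts/rpc.py env_dpdk_get_mem_stats      # not whitelisted: fails with -32601 Method not found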
00:07:44.584 [2024-07-14 03:40:03.382692] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2276230 ] 00:07:44.584 EAL: No free 2048 kB hugepages reported on node 1 00:07:44.584 [2024-07-14 03:40:03.440339] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:44.584 [2024-07-14 03:40:03.523490] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:44.843 [2024-07-14 03:40:03.523667] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.414 03:40:04 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:45.414 03:40:04 -- common/autotest_common.sh@852 -- # return 0 00:07:45.414 03:40:04 -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:07:45.672 { 00:07:45.672 "version": "SPDK v24.01.1-pre git sha1 4b94202c6", 00:07:45.672 "fields": { 00:07:45.672 "major": 24, 00:07:45.672 "minor": 1, 00:07:45.672 "patch": 1, 00:07:45.672 "suffix": "-pre", 00:07:45.672 "commit": "4b94202c6" 00:07:45.672 } 00:07:45.672 } 00:07:45.672 03:40:04 -- app/cmdline.sh@22 -- # expected_methods=() 00:07:45.673 03:40:04 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:45.673 03:40:04 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:45.673 03:40:04 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:45.673 03:40:04 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:45.673 03:40:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:45.673 03:40:04 -- common/autotest_common.sh@10 -- # set +x 00:07:45.673 03:40:04 -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:45.673 03:40:04 -- app/cmdline.sh@26 -- # sort 00:07:45.673 03:40:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:45.951 03:40:04 -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:45.951 03:40:04 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:45.951 03:40:04 -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:45.951 03:40:04 -- common/autotest_common.sh@640 -- # local es=0 00:07:45.951 03:40:04 -- common/autotest_common.sh@642 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:45.951 03:40:04 -- common/autotest_common.sh@628 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:45.951 03:40:04 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:45.951 03:40:04 -- common/autotest_common.sh@632 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:45.951 03:40:04 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:45.951 03:40:04 -- common/autotest_common.sh@634 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:45.951 03:40:04 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:45.951 03:40:04 -- common/autotest_common.sh@634 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:45.951 03:40:04 -- common/autotest_common.sh@634 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:45.951 03:40:04 -- 
common/autotest_common.sh@643 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:45.951 request: 00:07:45.951 { 00:07:45.951 "method": "env_dpdk_get_mem_stats", 00:07:45.951 "req_id": 1 00:07:45.951 } 00:07:45.951 Got JSON-RPC error response 00:07:45.951 response: 00:07:45.951 { 00:07:45.951 "code": -32601, 00:07:45.951 "message": "Method not found" 00:07:45.951 } 00:07:45.951 03:40:04 -- common/autotest_common.sh@643 -- # es=1 00:07:45.951 03:40:04 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:07:45.951 03:40:04 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:07:45.951 03:40:04 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:07:45.951 03:40:04 -- app/cmdline.sh@1 -- # killprocess 2276230 00:07:45.951 03:40:04 -- common/autotest_common.sh@926 -- # '[' -z 2276230 ']' 00:07:45.951 03:40:04 -- common/autotest_common.sh@930 -- # kill -0 2276230 00:07:45.951 03:40:04 -- common/autotest_common.sh@931 -- # uname 00:07:45.951 03:40:04 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:45.951 03:40:04 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2276230 00:07:46.211 03:40:04 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:07:46.211 03:40:04 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:07:46.211 03:40:04 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2276230' 00:07:46.211 killing process with pid 2276230 00:07:46.211 03:40:04 -- common/autotest_common.sh@945 -- # kill 2276230 00:07:46.211 03:40:04 -- common/autotest_common.sh@950 -- # wait 2276230 00:07:46.470 00:07:46.470 real 0m2.026s 00:07:46.470 user 0m2.567s 00:07:46.470 sys 0m0.491s 00:07:46.470 03:40:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:46.470 03:40:05 -- common/autotest_common.sh@10 -- # set +x 00:07:46.470 ************************************ 00:07:46.470 END TEST app_cmdline 00:07:46.470 ************************************ 00:07:46.470 03:40:05 -- spdk/autotest.sh@192 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:46.470 03:40:05 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:46.470 03:40:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:46.470 03:40:05 -- common/autotest_common.sh@10 -- # set +x 00:07:46.470 ************************************ 00:07:46.470 START TEST version 00:07:46.470 ************************************ 00:07:46.470 03:40:05 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:46.470 * Looking for test storage... 
00:07:46.470 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:46.470 03:40:05 -- app/version.sh@17 -- # get_header_version major 00:07:46.470 03:40:05 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:46.470 03:40:05 -- app/version.sh@14 -- # cut -f2 00:07:46.470 03:40:05 -- app/version.sh@14 -- # tr -d '"' 00:07:46.470 03:40:05 -- app/version.sh@17 -- # major=24 00:07:46.470 03:40:05 -- app/version.sh@18 -- # get_header_version minor 00:07:46.470 03:40:05 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:46.470 03:40:05 -- app/version.sh@14 -- # cut -f2 00:07:46.470 03:40:05 -- app/version.sh@14 -- # tr -d '"' 00:07:46.470 03:40:05 -- app/version.sh@18 -- # minor=1 00:07:46.470 03:40:05 -- app/version.sh@19 -- # get_header_version patch 00:07:46.470 03:40:05 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:46.470 03:40:05 -- app/version.sh@14 -- # cut -f2 00:07:46.470 03:40:05 -- app/version.sh@14 -- # tr -d '"' 00:07:46.470 03:40:05 -- app/version.sh@19 -- # patch=1 00:07:46.470 03:40:05 -- app/version.sh@20 -- # get_header_version suffix 00:07:46.470 03:40:05 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:46.470 03:40:05 -- app/version.sh@14 -- # cut -f2 00:07:46.470 03:40:05 -- app/version.sh@14 -- # tr -d '"' 00:07:46.470 03:40:05 -- app/version.sh@20 -- # suffix=-pre 00:07:46.470 03:40:05 -- app/version.sh@22 -- # version=24.1 00:07:46.470 03:40:05 -- app/version.sh@25 -- # (( patch != 0 )) 00:07:46.470 03:40:05 -- app/version.sh@25 -- # version=24.1.1 00:07:46.470 03:40:05 -- app/version.sh@28 -- # version=24.1.1rc0 00:07:46.470 03:40:05 -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:46.470 03:40:05 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:46.729 03:40:05 -- app/version.sh@30 -- # py_version=24.1.1rc0 00:07:46.729 03:40:05 -- app/version.sh@31 -- # [[ 24.1.1rc0 == \2\4\.\1\.\1\r\c\0 ]] 00:07:46.729 00:07:46.729 real 0m0.102s 00:07:46.729 user 0m0.053s 00:07:46.729 sys 0m0.069s 00:07:46.729 03:40:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:46.729 03:40:05 -- common/autotest_common.sh@10 -- # set +x 00:07:46.729 ************************************ 00:07:46.729 END TEST version 00:07:46.729 ************************************ 00:07:46.729 03:40:05 -- spdk/autotest.sh@194 -- # '[' 0 -eq 1 ']' 00:07:46.729 03:40:05 -- spdk/autotest.sh@204 -- # uname -s 00:07:46.729 03:40:05 -- spdk/autotest.sh@204 -- # [[ Linux == Linux ]] 00:07:46.729 03:40:05 -- spdk/autotest.sh@205 -- # [[ 0 -eq 1 ]] 00:07:46.729 03:40:05 -- spdk/autotest.sh@205 -- # [[ 0 -eq 1 ]] 00:07:46.729 03:40:05 -- spdk/autotest.sh@217 -- # '[' 0 -eq 1 ']' 00:07:46.729 03:40:05 -- spdk/autotest.sh@264 -- # '[' 0 -eq 1 ']' 00:07:46.729 03:40:05 -- spdk/autotest.sh@268 -- # timing_exit lib 00:07:46.729 03:40:05 -- 
common/autotest_common.sh@718 -- # xtrace_disable 00:07:46.729 03:40:05 -- common/autotest_common.sh@10 -- # set +x 00:07:46.729 03:40:05 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:07:46.729 03:40:05 -- spdk/autotest.sh@278 -- # '[' 0 -eq 1 ']' 00:07:46.729 03:40:05 -- spdk/autotest.sh@287 -- # '[' 1 -eq 1 ']' 00:07:46.729 03:40:05 -- spdk/autotest.sh@288 -- # export NET_TYPE 00:07:46.729 03:40:05 -- spdk/autotest.sh@291 -- # '[' tcp = rdma ']' 00:07:46.729 03:40:05 -- spdk/autotest.sh@294 -- # '[' tcp = tcp ']' 00:07:46.729 03:40:05 -- spdk/autotest.sh@295 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:46.729 03:40:05 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:07:46.729 03:40:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:46.729 03:40:05 -- common/autotest_common.sh@10 -- # set +x 00:07:46.729 ************************************ 00:07:46.729 START TEST nvmf_tcp 00:07:46.729 ************************************ 00:07:46.729 03:40:05 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:46.729 * Looking for test storage... 00:07:46.729 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:07:46.729 03:40:05 -- nvmf/nvmf.sh@10 -- # uname -s 00:07:46.729 03:40:05 -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:07:46.729 03:40:05 -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:46.729 03:40:05 -- nvmf/common.sh@7 -- # uname -s 00:07:46.729 03:40:05 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:46.729 03:40:05 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:46.729 03:40:05 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:46.729 03:40:05 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:46.729 03:40:05 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:46.729 03:40:05 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:46.729 03:40:05 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:46.729 03:40:05 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:46.729 03:40:05 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:46.729 03:40:05 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:46.729 03:40:05 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:46.729 03:40:05 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:46.729 03:40:05 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:46.730 03:40:05 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:46.730 03:40:05 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:46.730 03:40:05 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:46.730 03:40:05 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:46.730 03:40:05 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:46.730 03:40:05 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:46.730 03:40:05 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:46.730 03:40:05 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:46.730 03:40:05 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:46.730 03:40:05 -- paths/export.sh@5 -- # export PATH 00:07:46.730 03:40:05 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:46.730 03:40:05 -- nvmf/common.sh@46 -- # : 0 00:07:46.730 03:40:05 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:07:46.730 03:40:05 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:07:46.730 03:40:05 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:07:46.730 03:40:05 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:46.730 03:40:05 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:46.730 03:40:05 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:07:46.730 03:40:05 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:07:46.730 03:40:05 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:07:46.730 03:40:05 -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:46.730 03:40:05 -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:07:46.730 03:40:05 -- nvmf/nvmf.sh@20 -- # timing_enter target 00:07:46.730 03:40:05 -- common/autotest_common.sh@712 -- # xtrace_disable 00:07:46.730 03:40:05 -- common/autotest_common.sh@10 -- # set +x 00:07:46.730 03:40:05 -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:07:46.730 03:40:05 -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:07:46.730 03:40:05 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:07:46.730 03:40:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:46.730 03:40:05 -- common/autotest_common.sh@10 -- # set +x 00:07:46.730 ************************************ 00:07:46.730 START TEST nvmf_example 00:07:46.730 ************************************ 00:07:46.730 03:40:05 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:07:46.730 * Looking for test storage... 
00:07:46.730 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:46.730 03:40:05 -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:46.730 03:40:05 -- nvmf/common.sh@7 -- # uname -s 00:07:46.730 03:40:05 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:46.730 03:40:05 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:46.730 03:40:05 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:46.730 03:40:05 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:46.730 03:40:05 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:46.730 03:40:05 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:46.730 03:40:05 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:46.730 03:40:05 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:46.730 03:40:05 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:46.730 03:40:05 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:46.730 03:40:05 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:46.730 03:40:05 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:46.730 03:40:05 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:46.730 03:40:05 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:46.730 03:40:05 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:46.730 03:40:05 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:46.730 03:40:05 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:46.730 03:40:05 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:46.730 03:40:05 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:46.730 03:40:05 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:46.730 03:40:05 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:46.730 03:40:05 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:46.730 03:40:05 -- paths/export.sh@5 -- # export PATH 00:07:46.730 03:40:05 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:46.730 03:40:05 -- nvmf/common.sh@46 -- # : 0 00:07:46.730 03:40:05 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:07:46.730 03:40:05 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:07:46.730 03:40:05 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:07:46.730 03:40:05 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:46.730 03:40:05 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:46.730 03:40:05 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:07:46.730 03:40:05 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:07:46.730 03:40:05 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:07:46.730 03:40:05 -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:07:46.730 03:40:05 -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:07:46.730 03:40:05 -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:07:46.730 03:40:05 -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:07:46.730 03:40:05 -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:07:46.730 03:40:05 -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:07:46.730 03:40:05 -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:07:46.730 03:40:05 -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:07:46.730 03:40:05 -- common/autotest_common.sh@712 -- # xtrace_disable 00:07:46.730 03:40:05 -- common/autotest_common.sh@10 -- # set +x 00:07:46.730 03:40:05 -- target/nvmf_example.sh@41 -- # nvmftestinit 00:07:46.730 03:40:05 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:07:46.730 03:40:05 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:46.730 03:40:05 -- nvmf/common.sh@436 -- # prepare_net_devs 00:07:46.730 03:40:05 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:07:46.730 03:40:05 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:07:46.730 03:40:05 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:46.730 03:40:05 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:46.730 03:40:05 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:46.730 03:40:05 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:07:46.730 03:40:05 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:07:46.730 03:40:05 -- nvmf/common.sh@284 -- # xtrace_disable 00:07:46.730 03:40:05 -- 
common/autotest_common.sh@10 -- # set +x 00:07:49.269 03:40:07 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:07:49.270 03:40:07 -- nvmf/common.sh@290 -- # pci_devs=() 00:07:49.270 03:40:07 -- nvmf/common.sh@290 -- # local -a pci_devs 00:07:49.270 03:40:07 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:07:49.270 03:40:07 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:07:49.270 03:40:07 -- nvmf/common.sh@292 -- # pci_drivers=() 00:07:49.270 03:40:07 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:07:49.270 03:40:07 -- nvmf/common.sh@294 -- # net_devs=() 00:07:49.270 03:40:07 -- nvmf/common.sh@294 -- # local -ga net_devs 00:07:49.270 03:40:07 -- nvmf/common.sh@295 -- # e810=() 00:07:49.270 03:40:07 -- nvmf/common.sh@295 -- # local -ga e810 00:07:49.270 03:40:07 -- nvmf/common.sh@296 -- # x722=() 00:07:49.270 03:40:07 -- nvmf/common.sh@296 -- # local -ga x722 00:07:49.270 03:40:07 -- nvmf/common.sh@297 -- # mlx=() 00:07:49.270 03:40:07 -- nvmf/common.sh@297 -- # local -ga mlx 00:07:49.270 03:40:07 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:49.270 03:40:07 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:49.270 03:40:07 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:49.270 03:40:07 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:49.270 03:40:07 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:49.270 03:40:07 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:49.270 03:40:07 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:49.270 03:40:07 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:49.270 03:40:07 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:49.270 03:40:07 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:49.270 03:40:07 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:49.270 03:40:07 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:07:49.270 03:40:07 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:07:49.270 03:40:07 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:07:49.270 03:40:07 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:07:49.270 03:40:07 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:07:49.270 03:40:07 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:07:49.270 03:40:07 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:07:49.270 03:40:07 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:49.270 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:49.270 03:40:07 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:07:49.270 03:40:07 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:07:49.270 03:40:07 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:49.270 03:40:07 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:49.270 03:40:07 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:07:49.270 03:40:07 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:07:49.270 03:40:07 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:49.270 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:49.270 03:40:07 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:07:49.270 03:40:07 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:07:49.270 03:40:07 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:49.270 03:40:07 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 
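Editor's note: with the two E810 ports (cvl_0_0, cvl_0_1) discovered above, nvmf_tcp_init next moves the target port into a network namespace and addresses both ends. A condensed view of that wiring, taken from the commands that follow:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                       # target port lives inside the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                             # initiator side stays in the root ns
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT    # open the NVMe/TCP port
  ping -c 1 10.0.0.2                                              # reachability check before the test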
00:07:49.270 03:40:07 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:07:49.270 03:40:07 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:07:49.270 03:40:07 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:07:49.270 03:40:07 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:07:49.270 03:40:07 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:07:49.270 03:40:07 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:49.270 03:40:07 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:07:49.270 03:40:07 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:49.270 03:40:07 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:49.270 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:49.270 03:40:07 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:07:49.270 03:40:07 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:07:49.270 03:40:07 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:49.270 03:40:07 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:07:49.270 03:40:07 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:49.270 03:40:07 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:49.270 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:49.270 03:40:07 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:07:49.270 03:40:07 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:07:49.270 03:40:07 -- nvmf/common.sh@402 -- # is_hw=yes 00:07:49.270 03:40:07 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:07:49.270 03:40:07 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:07:49.270 03:40:07 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:07:49.270 03:40:07 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:49.270 03:40:07 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:49.270 03:40:07 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:49.270 03:40:07 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:07:49.270 03:40:07 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:49.270 03:40:07 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:49.270 03:40:07 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:07:49.270 03:40:07 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:49.270 03:40:07 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:49.270 03:40:07 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:07:49.270 03:40:07 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:07:49.270 03:40:07 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:07:49.270 03:40:07 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:49.270 03:40:07 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:49.270 03:40:07 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:49.270 03:40:07 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:07:49.270 03:40:07 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:49.270 03:40:07 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:49.270 03:40:07 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:49.270 03:40:07 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:07:49.270 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:49.270 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.132 ms 00:07:49.270 00:07:49.270 --- 10.0.0.2 ping statistics --- 00:07:49.270 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:49.270 rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms 00:07:49.270 03:40:07 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:49.270 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:49.270 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.203 ms 00:07:49.270 00:07:49.270 --- 10.0.0.1 ping statistics --- 00:07:49.270 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:49.270 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:07:49.270 03:40:07 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:49.270 03:40:07 -- nvmf/common.sh@410 -- # return 0 00:07:49.270 03:40:07 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:07:49.270 03:40:07 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:49.270 03:40:07 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:07:49.270 03:40:07 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:07:49.270 03:40:07 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:49.270 03:40:07 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:07:49.270 03:40:07 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:07:49.270 03:40:07 -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:07:49.270 03:40:07 -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:07:49.270 03:40:07 -- common/autotest_common.sh@712 -- # xtrace_disable 00:07:49.270 03:40:07 -- common/autotest_common.sh@10 -- # set +x 00:07:49.270 03:40:07 -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:07:49.270 03:40:07 -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:07:49.270 03:40:07 -- target/nvmf_example.sh@34 -- # nvmfpid=2278269 00:07:49.270 03:40:07 -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:07:49.270 03:40:07 -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:49.270 03:40:07 -- target/nvmf_example.sh@36 -- # waitforlisten 2278269 00:07:49.270 03:40:07 -- common/autotest_common.sh@819 -- # '[' -z 2278269 ']' 00:07:49.270 03:40:07 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:49.270 03:40:07 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:49.270 03:40:07 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:49.270 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
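Editor's note: once the example nvmf target is up inside the namespace, the test provisions it over RPC before running the initiator-side perf tool. The rpc_cmd calls that follow are roughly equivalent to the commands below (rpc.py path shortened; Malloc0 is the name returned by bdev_malloc_create):

  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512                 # 64 MiB bdev with 512-byte blocks -> Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420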
00:07:49.270 03:40:07 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:49.270 03:40:07 -- common/autotest_common.sh@10 -- # set +x 00:07:49.270 EAL: No free 2048 kB hugepages reported on node 1 00:07:49.840 03:40:08 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:49.840 03:40:08 -- common/autotest_common.sh@852 -- # return 0 00:07:49.840 03:40:08 -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:07:49.840 03:40:08 -- common/autotest_common.sh@718 -- # xtrace_disable 00:07:49.840 03:40:08 -- common/autotest_common.sh@10 -- # set +x 00:07:49.840 03:40:08 -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:49.840 03:40:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:49.840 03:40:08 -- common/autotest_common.sh@10 -- # set +x 00:07:50.130 03:40:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:50.130 03:40:08 -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:07:50.130 03:40:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:50.130 03:40:08 -- common/autotest_common.sh@10 -- # set +x 00:07:50.130 03:40:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:50.130 03:40:08 -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:07:50.130 03:40:08 -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:50.130 03:40:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:50.130 03:40:08 -- common/autotest_common.sh@10 -- # set +x 00:07:50.130 03:40:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:50.130 03:40:08 -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:07:50.130 03:40:08 -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:50.130 03:40:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:50.130 03:40:08 -- common/autotest_common.sh@10 -- # set +x 00:07:50.130 03:40:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:50.130 03:40:08 -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:50.130 03:40:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:50.130 03:40:08 -- common/autotest_common.sh@10 -- # set +x 00:07:50.130 03:40:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:50.130 03:40:08 -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:07:50.130 03:40:08 -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:07:50.131 EAL: No free 2048 kB hugepages reported on node 1 00:08:00.113 Initializing NVMe Controllers 00:08:00.113 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:00.113 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:08:00.113 Initialization complete. Launching workers. 
00:08:00.113 ======================================================== 00:08:00.113 Latency(us) 00:08:00.113 Device Information : IOPS MiB/s Average min max 00:08:00.113 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 15073.50 58.88 4246.46 866.39 18061.90 00:08:00.113 ======================================================== 00:08:00.113 Total : 15073.50 58.88 4246.46 866.39 18061.90 00:08:00.113 00:08:00.113 03:40:18 -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:08:00.113 03:40:18 -- target/nvmf_example.sh@66 -- # nvmftestfini 00:08:00.113 03:40:18 -- nvmf/common.sh@476 -- # nvmfcleanup 00:08:00.113 03:40:18 -- nvmf/common.sh@116 -- # sync 00:08:00.113 03:40:18 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:08:00.113 03:40:18 -- nvmf/common.sh@119 -- # set +e 00:08:00.113 03:40:18 -- nvmf/common.sh@120 -- # for i in {1..20} 00:08:00.113 03:40:18 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:08:00.113 rmmod nvme_tcp 00:08:00.113 rmmod nvme_fabrics 00:08:00.113 rmmod nvme_keyring 00:08:00.113 03:40:19 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:08:00.113 03:40:19 -- nvmf/common.sh@123 -- # set -e 00:08:00.113 03:40:19 -- nvmf/common.sh@124 -- # return 0 00:08:00.113 03:40:19 -- nvmf/common.sh@477 -- # '[' -n 2278269 ']' 00:08:00.113 03:40:19 -- nvmf/common.sh@478 -- # killprocess 2278269 00:08:00.113 03:40:19 -- common/autotest_common.sh@926 -- # '[' -z 2278269 ']' 00:08:00.113 03:40:19 -- common/autotest_common.sh@930 -- # kill -0 2278269 00:08:00.113 03:40:19 -- common/autotest_common.sh@931 -- # uname 00:08:00.113 03:40:19 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:00.113 03:40:19 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2278269 00:08:00.372 03:40:19 -- common/autotest_common.sh@932 -- # process_name=nvmf 00:08:00.372 03:40:19 -- common/autotest_common.sh@936 -- # '[' nvmf = sudo ']' 00:08:00.372 03:40:19 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2278269' 00:08:00.372 killing process with pid 2278269 00:08:00.372 03:40:19 -- common/autotest_common.sh@945 -- # kill 2278269 00:08:00.372 03:40:19 -- common/autotest_common.sh@950 -- # wait 2278269 00:08:00.372 nvmf threads initialize successfully 00:08:00.372 bdev subsystem init successfully 00:08:00.372 created a nvmf target service 00:08:00.372 create targets's poll groups done 00:08:00.372 all subsystems of target started 00:08:00.372 nvmf target is running 00:08:00.372 all subsystems of target stopped 00:08:00.372 destroy targets's poll groups done 00:08:00.372 destroyed the nvmf target service 00:08:00.372 bdev subsystem finish successfully 00:08:00.372 nvmf threads destroy successfully 00:08:00.372 03:40:19 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:08:00.372 03:40:19 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:08:00.372 03:40:19 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:08:00.372 03:40:19 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:00.372 03:40:19 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:08:00.372 03:40:19 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:00.372 03:40:19 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:00.372 03:40:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:02.910 03:40:21 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:08:02.910 03:40:21 -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:08:02.910 03:40:21 -- 
common/autotest_common.sh@718 -- # xtrace_disable 00:08:02.910 03:40:21 -- common/autotest_common.sh@10 -- # set +x 00:08:02.910 00:08:02.910 real 0m15.795s 00:08:02.910 user 0m44.801s 00:08:02.910 sys 0m3.136s 00:08:02.910 03:40:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:02.910 03:40:21 -- common/autotest_common.sh@10 -- # set +x 00:08:02.910 ************************************ 00:08:02.910 END TEST nvmf_example 00:08:02.910 ************************************ 00:08:02.910 03:40:21 -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:08:02.910 03:40:21 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:08:02.910 03:40:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:02.910 03:40:21 -- common/autotest_common.sh@10 -- # set +x 00:08:02.910 ************************************ 00:08:02.910 START TEST nvmf_filesystem 00:08:02.910 ************************************ 00:08:02.910 03:40:21 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:08:02.910 * Looking for test storage... 00:08:02.910 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:02.910 03:40:21 -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:08:02.910 03:40:21 -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:08:02.910 03:40:21 -- common/autotest_common.sh@34 -- # set -e 00:08:02.910 03:40:21 -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:08:02.910 03:40:21 -- common/autotest_common.sh@36 -- # shopt -s extglob 00:08:02.910 03:40:21 -- common/autotest_common.sh@38 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:08:02.910 03:40:21 -- common/autotest_common.sh@39 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:08:02.910 03:40:21 -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:08:02.910 03:40:21 -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:08:02.910 03:40:21 -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:08:02.910 03:40:21 -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:08:02.910 03:40:21 -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:08:02.910 03:40:21 -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:08:02.910 03:40:21 -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:08:02.910 03:40:21 -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:08:02.910 03:40:21 -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:08:02.910 03:40:21 -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:08:02.910 03:40:21 -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:08:02.910 03:40:21 -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:08:02.910 03:40:21 -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:08:02.910 03:40:21 -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:08:02.910 03:40:21 -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:08:02.910 03:40:21 -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:08:02.910 03:40:21 -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:08:02.910 03:40:21 -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:08:02.910 03:40:21 -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:08:02.910 03:40:21 -- 
common/build_config.sh@20 -- # CONFIG_LTO=n 00:08:02.910 03:40:21 -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:08:02.910 03:40:21 -- common/build_config.sh@22 -- # CONFIG_CET=n 00:08:02.910 03:40:21 -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:08:02.910 03:40:21 -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:08:02.910 03:40:21 -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:08:02.911 03:40:21 -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:08:02.911 03:40:21 -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:08:02.911 03:40:21 -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:08:02.911 03:40:21 -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:08:02.911 03:40:21 -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:08:02.911 03:40:21 -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:08:02.911 03:40:21 -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:08:02.911 03:40:21 -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:08:02.911 03:40:21 -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:08:02.911 03:40:21 -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:08:02.911 03:40:21 -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:08:02.911 03:40:21 -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:08:02.911 03:40:21 -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:08:02.911 03:40:21 -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:08:02.911 03:40:21 -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:08:02.911 03:40:21 -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR=//var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:08:02.911 03:40:21 -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:08:02.911 03:40:21 -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:08:02.911 03:40:21 -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:08:02.911 03:40:21 -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:08:02.911 03:40:21 -- common/build_config.sh@46 -- # CONFIG_COVERAGE=y 00:08:02.911 03:40:21 -- common/build_config.sh@47 -- # CONFIG_RDMA=y 00:08:02.911 03:40:21 -- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:08:02.911 03:40:21 -- common/build_config.sh@49 -- # CONFIG_URING_PATH= 00:08:02.911 03:40:21 -- common/build_config.sh@50 -- # CONFIG_XNVME=n 00:08:02.911 03:40:21 -- common/build_config.sh@51 -- # CONFIG_VFIO_USER=y 00:08:02.911 03:40:21 -- common/build_config.sh@52 -- # CONFIG_ARCH=native 00:08:02.911 03:40:21 -- common/build_config.sh@53 -- # CONFIG_URING_ZNS=n 00:08:02.911 03:40:21 -- common/build_config.sh@54 -- # CONFIG_WERROR=y 00:08:02.911 03:40:21 -- common/build_config.sh@55 -- # CONFIG_HAVE_LIBBSD=n 00:08:02.911 03:40:21 -- common/build_config.sh@56 -- # CONFIG_UBSAN=y 00:08:02.911 03:40:21 -- common/build_config.sh@57 -- # CONFIG_IPSEC_MB_DIR= 00:08:02.911 03:40:21 -- common/build_config.sh@58 -- # CONFIG_GOLANG=n 00:08:02.911 03:40:21 -- common/build_config.sh@59 -- # CONFIG_ISAL=y 00:08:02.911 03:40:21 -- common/build_config.sh@60 -- # CONFIG_IDXD_KERNEL=y 00:08:02.911 03:40:21 -- common/build_config.sh@61 -- # CONFIG_DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:08:02.911 03:40:21 -- common/build_config.sh@62 -- # CONFIG_RDMA_PROV=verbs 00:08:02.911 03:40:21 -- common/build_config.sh@63 -- # CONFIG_APPS=y 00:08:02.911 03:40:21 -- common/build_config.sh@64 -- # CONFIG_SHARED=y 00:08:02.911 03:40:21 -- 
common/build_config.sh@65 -- # CONFIG_FC_PATH= 00:08:02.911 03:40:21 -- common/build_config.sh@66 -- # CONFIG_DPDK_PKG_CONFIG=n 00:08:02.911 03:40:21 -- common/build_config.sh@67 -- # CONFIG_FC=n 00:08:02.911 03:40:21 -- common/build_config.sh@68 -- # CONFIG_AVAHI=n 00:08:02.911 03:40:21 -- common/build_config.sh@69 -- # CONFIG_FIO_PLUGIN=y 00:08:02.911 03:40:21 -- common/build_config.sh@70 -- # CONFIG_RAID5F=n 00:08:02.911 03:40:21 -- common/build_config.sh@71 -- # CONFIG_EXAMPLES=y 00:08:02.911 03:40:21 -- common/build_config.sh@72 -- # CONFIG_TESTS=y 00:08:02.911 03:40:21 -- common/build_config.sh@73 -- # CONFIG_CRYPTO_MLX5=n 00:08:02.911 03:40:21 -- common/build_config.sh@74 -- # CONFIG_MAX_LCORES= 00:08:02.911 03:40:21 -- common/build_config.sh@75 -- # CONFIG_IPSEC_MB=n 00:08:02.911 03:40:21 -- common/build_config.sh@76 -- # CONFIG_DEBUG=y 00:08:02.911 03:40:21 -- common/build_config.sh@77 -- # CONFIG_DPDK_COMPRESSDEV=n 00:08:02.911 03:40:21 -- common/build_config.sh@78 -- # CONFIG_CROSS_PREFIX= 00:08:02.911 03:40:21 -- common/build_config.sh@79 -- # CONFIG_URING=n 00:08:02.911 03:40:21 -- common/autotest_common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:08:02.911 03:40:21 -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:08:02.911 03:40:21 -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:08:02.911 03:40:21 -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:08:02.911 03:40:21 -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:08:02.911 03:40:21 -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:08:02.911 03:40:21 -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:08:02.911 03:40:21 -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:08:02.911 03:40:21 -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:08:02.911 03:40:21 -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:08:02.911 03:40:21 -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:08:02.911 03:40:21 -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:08:02.911 03:40:21 -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:08:02.911 03:40:21 -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:08:02.911 03:40:21 -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:08:02.911 03:40:21 -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:08:02.911 #define SPDK_CONFIG_H 00:08:02.911 #define SPDK_CONFIG_APPS 1 00:08:02.911 #define SPDK_CONFIG_ARCH native 00:08:02.911 #undef SPDK_CONFIG_ASAN 00:08:02.911 #undef SPDK_CONFIG_AVAHI 00:08:02.911 #undef SPDK_CONFIG_CET 00:08:02.911 #define SPDK_CONFIG_COVERAGE 1 00:08:02.911 #define SPDK_CONFIG_CROSS_PREFIX 00:08:02.911 #undef SPDK_CONFIG_CRYPTO 00:08:02.911 #undef SPDK_CONFIG_CRYPTO_MLX5 00:08:02.911 #undef SPDK_CONFIG_CUSTOMOCF 00:08:02.911 #undef SPDK_CONFIG_DAOS 00:08:02.911 #define SPDK_CONFIG_DAOS_DIR 00:08:02.911 #define SPDK_CONFIG_DEBUG 1 00:08:02.911 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:08:02.911 #define SPDK_CONFIG_DPDK_DIR 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:08:02.911 #define SPDK_CONFIG_DPDK_INC_DIR //var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:08:02.911 #define SPDK_CONFIG_DPDK_LIB_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:08:02.911 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:08:02.911 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:08:02.911 #define SPDK_CONFIG_EXAMPLES 1 00:08:02.911 #undef SPDK_CONFIG_FC 00:08:02.911 #define SPDK_CONFIG_FC_PATH 00:08:02.911 #define SPDK_CONFIG_FIO_PLUGIN 1 00:08:02.911 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:08:02.911 #undef SPDK_CONFIG_FUSE 00:08:02.911 #undef SPDK_CONFIG_FUZZER 00:08:02.911 #define SPDK_CONFIG_FUZZER_LIB 00:08:02.911 #undef SPDK_CONFIG_GOLANG 00:08:02.911 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:08:02.911 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:08:02.911 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:08:02.911 #undef SPDK_CONFIG_HAVE_LIBBSD 00:08:02.911 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:08:02.911 #define SPDK_CONFIG_IDXD 1 00:08:02.911 #define SPDK_CONFIG_IDXD_KERNEL 1 00:08:02.911 #undef SPDK_CONFIG_IPSEC_MB 00:08:02.911 #define SPDK_CONFIG_IPSEC_MB_DIR 00:08:02.911 #define SPDK_CONFIG_ISAL 1 00:08:02.911 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:08:02.911 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:08:02.911 #define SPDK_CONFIG_LIBDIR 00:08:02.911 #undef SPDK_CONFIG_LTO 00:08:02.911 #define SPDK_CONFIG_MAX_LCORES 00:08:02.911 #define SPDK_CONFIG_NVME_CUSE 1 00:08:02.911 #undef SPDK_CONFIG_OCF 00:08:02.911 #define SPDK_CONFIG_OCF_PATH 00:08:02.911 #define SPDK_CONFIG_OPENSSL_PATH 00:08:02.911 #undef SPDK_CONFIG_PGO_CAPTURE 00:08:02.911 #undef SPDK_CONFIG_PGO_USE 00:08:02.911 #define SPDK_CONFIG_PREFIX /usr/local 00:08:02.911 #undef SPDK_CONFIG_RAID5F 00:08:02.911 #undef SPDK_CONFIG_RBD 00:08:02.911 #define SPDK_CONFIG_RDMA 1 00:08:02.911 #define SPDK_CONFIG_RDMA_PROV verbs 00:08:02.911 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:08:02.911 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:08:02.911 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:08:02.911 #define SPDK_CONFIG_SHARED 1 00:08:02.911 #undef SPDK_CONFIG_SMA 00:08:02.911 #define SPDK_CONFIG_TESTS 1 00:08:02.911 #undef SPDK_CONFIG_TSAN 00:08:02.911 #define SPDK_CONFIG_UBLK 1 00:08:02.911 #define SPDK_CONFIG_UBSAN 1 00:08:02.911 #undef SPDK_CONFIG_UNIT_TESTS 00:08:02.911 #undef SPDK_CONFIG_URING 00:08:02.911 #define SPDK_CONFIG_URING_PATH 00:08:02.911 #undef SPDK_CONFIG_URING_ZNS 00:08:02.911 #undef SPDK_CONFIG_USDT 00:08:02.911 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:08:02.911 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:08:02.911 #define SPDK_CONFIG_VFIO_USER 1 00:08:02.911 #define SPDK_CONFIG_VFIO_USER_DIR 00:08:02.911 #define SPDK_CONFIG_VHOST 1 00:08:02.911 #define SPDK_CONFIG_VIRTIO 1 00:08:02.911 #undef SPDK_CONFIG_VTUNE 00:08:02.911 #define SPDK_CONFIG_VTUNE_DIR 00:08:02.911 #define SPDK_CONFIG_WERROR 1 00:08:02.911 #define SPDK_CONFIG_WPDK_DIR 00:08:02.911 #undef SPDK_CONFIG_XNVME 00:08:02.911 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:08:02.911 03:40:21 -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:08:02.911 03:40:21 -- common/autotest_common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:02.911 03:40:21 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:02.911 03:40:21 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:02.911 
03:40:21 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:02.911 03:40:21 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:02.911 03:40:21 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:02.911 03:40:21 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:02.911 03:40:21 -- paths/export.sh@5 -- # export PATH 00:08:02.912 03:40:21 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:02.912 03:40:21 -- common/autotest_common.sh@50 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:08:02.912 03:40:21 -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:08:02.912 03:40:21 -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:08:02.912 03:40:21 -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:08:02.912 03:40:21 -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:08:02.912 03:40:21 -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:08:02.912 03:40:21 -- pm/common@16 -- # TEST_TAG=N/A 00:08:02.912 03:40:21 -- pm/common@17 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:08:02.912 03:40:21 -- common/autotest_common.sh@52 -- # : 1 00:08:02.912 03:40:21 -- common/autotest_common.sh@53 -- # export RUN_NIGHTLY 00:08:02.912 03:40:21 -- common/autotest_common.sh@56 -- # : 0 
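The run of ': <value>' / 'export SPDK_TEST_*' pairs around this point is autotest_common.sh giving every test knob a default and then exporting it, so a value already present in the job environment (for example RUN_NIGHTLY, which the trace shows as 1 for this run, or SPDK_TEST_NVMF_TRANSPORT=tcp) takes precedence over the built-in default. A minimal sketch of that idiom; the variable names are a small illustrative subset and the defaults shown are placeholders, not the authoritative upstream values:

    # Keep any value already exported by the CI job, otherwise assign the default.
    : "${RUN_NIGHTLY:=0}";                 export RUN_NIGHTLY
    : "${SPDK_TEST_NVMF:=0}";              export SPDK_TEST_NVMF
    : "${SPDK_TEST_NVMF_TRANSPORT:=tcp}";  export SPDK_TEST_NVMF_TRANSPORT
    : "${SPDK_TEST_NVMF_NICS:=}";          export SPDK_TEST_NVMF_NICS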
00:08:02.912 03:40:21 -- common/autotest_common.sh@57 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:08:02.912 03:40:21 -- common/autotest_common.sh@58 -- # : 0 00:08:02.912 03:40:21 -- common/autotest_common.sh@59 -- # export SPDK_RUN_VALGRIND 00:08:02.912 03:40:21 -- common/autotest_common.sh@60 -- # : 1 00:08:02.912 03:40:21 -- common/autotest_common.sh@61 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:08:02.912 03:40:21 -- common/autotest_common.sh@62 -- # : 0 00:08:02.912 03:40:21 -- common/autotest_common.sh@63 -- # export SPDK_TEST_UNITTEST 00:08:02.912 03:40:21 -- common/autotest_common.sh@64 -- # : 00:08:02.912 03:40:21 -- common/autotest_common.sh@65 -- # export SPDK_TEST_AUTOBUILD 00:08:02.912 03:40:21 -- common/autotest_common.sh@66 -- # : 0 00:08:02.912 03:40:21 -- common/autotest_common.sh@67 -- # export SPDK_TEST_RELEASE_BUILD 00:08:02.912 03:40:21 -- common/autotest_common.sh@68 -- # : 0 00:08:02.912 03:40:21 -- common/autotest_common.sh@69 -- # export SPDK_TEST_ISAL 00:08:02.912 03:40:21 -- common/autotest_common.sh@70 -- # : 0 00:08:02.912 03:40:21 -- common/autotest_common.sh@71 -- # export SPDK_TEST_ISCSI 00:08:02.912 03:40:21 -- common/autotest_common.sh@72 -- # : 0 00:08:02.912 03:40:21 -- common/autotest_common.sh@73 -- # export SPDK_TEST_ISCSI_INITIATOR 00:08:02.912 03:40:21 -- common/autotest_common.sh@74 -- # : 0 00:08:02.912 03:40:21 -- common/autotest_common.sh@75 -- # export SPDK_TEST_NVME 00:08:02.912 03:40:21 -- common/autotest_common.sh@76 -- # : 0 00:08:02.912 03:40:21 -- common/autotest_common.sh@77 -- # export SPDK_TEST_NVME_PMR 00:08:02.912 03:40:21 -- common/autotest_common.sh@78 -- # : 0 00:08:02.912 03:40:21 -- common/autotest_common.sh@79 -- # export SPDK_TEST_NVME_BP 00:08:02.912 03:40:21 -- common/autotest_common.sh@80 -- # : 1 00:08:02.912 03:40:21 -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME_CLI 00:08:02.912 03:40:21 -- common/autotest_common.sh@82 -- # : 0 00:08:02.912 03:40:21 -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_CUSE 00:08:02.912 03:40:21 -- common/autotest_common.sh@84 -- # : 0 00:08:02.912 03:40:21 -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_FDP 00:08:02.912 03:40:21 -- common/autotest_common.sh@86 -- # : 1 00:08:02.912 03:40:21 -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVMF 00:08:02.912 03:40:21 -- common/autotest_common.sh@88 -- # : 1 00:08:02.912 03:40:21 -- common/autotest_common.sh@89 -- # export SPDK_TEST_VFIOUSER 00:08:02.912 03:40:21 -- common/autotest_common.sh@90 -- # : 0 00:08:02.912 03:40:21 -- common/autotest_common.sh@91 -- # export SPDK_TEST_VFIOUSER_QEMU 00:08:02.912 03:40:21 -- common/autotest_common.sh@92 -- # : 0 00:08:02.912 03:40:21 -- common/autotest_common.sh@93 -- # export SPDK_TEST_FUZZER 00:08:02.912 03:40:21 -- common/autotest_common.sh@94 -- # : 0 00:08:02.912 03:40:21 -- common/autotest_common.sh@95 -- # export SPDK_TEST_FUZZER_SHORT 00:08:02.912 03:40:21 -- common/autotest_common.sh@96 -- # : tcp 00:08:02.912 03:40:21 -- common/autotest_common.sh@97 -- # export SPDK_TEST_NVMF_TRANSPORT 00:08:02.912 03:40:21 -- common/autotest_common.sh@98 -- # : 0 00:08:02.912 03:40:21 -- common/autotest_common.sh@99 -- # export SPDK_TEST_RBD 00:08:02.912 03:40:21 -- common/autotest_common.sh@100 -- # : 0 00:08:02.912 03:40:21 -- common/autotest_common.sh@101 -- # export SPDK_TEST_VHOST 00:08:02.912 03:40:21 -- common/autotest_common.sh@102 -- # : 0 00:08:02.912 03:40:21 -- common/autotest_common.sh@103 -- # export SPDK_TEST_BLOCKDEV 00:08:02.912 03:40:21 -- 
common/autotest_common.sh@104 -- # : 0 00:08:02.912 03:40:21 -- common/autotest_common.sh@105 -- # export SPDK_TEST_IOAT 00:08:02.912 03:40:21 -- common/autotest_common.sh@106 -- # : 0 00:08:02.912 03:40:21 -- common/autotest_common.sh@107 -- # export SPDK_TEST_BLOBFS 00:08:02.912 03:40:21 -- common/autotest_common.sh@108 -- # : 0 00:08:02.912 03:40:21 -- common/autotest_common.sh@109 -- # export SPDK_TEST_VHOST_INIT 00:08:02.912 03:40:21 -- common/autotest_common.sh@110 -- # : 0 00:08:02.912 03:40:21 -- common/autotest_common.sh@111 -- # export SPDK_TEST_LVOL 00:08:02.912 03:40:21 -- common/autotest_common.sh@112 -- # : 0 00:08:02.912 03:40:21 -- common/autotest_common.sh@113 -- # export SPDK_TEST_VBDEV_COMPRESS 00:08:02.912 03:40:21 -- common/autotest_common.sh@114 -- # : 0 00:08:02.912 03:40:21 -- common/autotest_common.sh@115 -- # export SPDK_RUN_ASAN 00:08:02.912 03:40:21 -- common/autotest_common.sh@116 -- # : 1 00:08:02.912 03:40:21 -- common/autotest_common.sh@117 -- # export SPDK_RUN_UBSAN 00:08:02.912 03:40:21 -- common/autotest_common.sh@118 -- # : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:08:02.912 03:40:21 -- common/autotest_common.sh@119 -- # export SPDK_RUN_EXTERNAL_DPDK 00:08:02.912 03:40:21 -- common/autotest_common.sh@120 -- # : 0 00:08:02.912 03:40:21 -- common/autotest_common.sh@121 -- # export SPDK_RUN_NON_ROOT 00:08:02.912 03:40:21 -- common/autotest_common.sh@122 -- # : 0 00:08:02.912 03:40:21 -- common/autotest_common.sh@123 -- # export SPDK_TEST_CRYPTO 00:08:02.912 03:40:21 -- common/autotest_common.sh@124 -- # : 0 00:08:02.912 03:40:21 -- common/autotest_common.sh@125 -- # export SPDK_TEST_FTL 00:08:02.912 03:40:21 -- common/autotest_common.sh@126 -- # : 0 00:08:02.912 03:40:21 -- common/autotest_common.sh@127 -- # export SPDK_TEST_OCF 00:08:02.912 03:40:21 -- common/autotest_common.sh@128 -- # : 0 00:08:02.912 03:40:21 -- common/autotest_common.sh@129 -- # export SPDK_TEST_VMD 00:08:02.912 03:40:21 -- common/autotest_common.sh@130 -- # : 0 00:08:02.912 03:40:21 -- common/autotest_common.sh@131 -- # export SPDK_TEST_OPAL 00:08:02.912 03:40:21 -- common/autotest_common.sh@132 -- # : v23.11 00:08:02.912 03:40:21 -- common/autotest_common.sh@133 -- # export SPDK_TEST_NATIVE_DPDK 00:08:02.912 03:40:21 -- common/autotest_common.sh@134 -- # : true 00:08:02.912 03:40:21 -- common/autotest_common.sh@135 -- # export SPDK_AUTOTEST_X 00:08:02.912 03:40:21 -- common/autotest_common.sh@136 -- # : 0 00:08:02.912 03:40:21 -- common/autotest_common.sh@137 -- # export SPDK_TEST_RAID5 00:08:02.912 03:40:21 -- common/autotest_common.sh@138 -- # : 0 00:08:02.912 03:40:21 -- common/autotest_common.sh@139 -- # export SPDK_TEST_URING 00:08:02.912 03:40:21 -- common/autotest_common.sh@140 -- # : 0 00:08:02.912 03:40:21 -- common/autotest_common.sh@141 -- # export SPDK_TEST_USDT 00:08:02.912 03:40:21 -- common/autotest_common.sh@142 -- # : 0 00:08:02.912 03:40:21 -- common/autotest_common.sh@143 -- # export SPDK_TEST_USE_IGB_UIO 00:08:02.912 03:40:21 -- common/autotest_common.sh@144 -- # : 0 00:08:02.912 03:40:21 -- common/autotest_common.sh@145 -- # export SPDK_TEST_SCHEDULER 00:08:02.912 03:40:21 -- common/autotest_common.sh@146 -- # : 0 00:08:02.912 03:40:21 -- common/autotest_common.sh@147 -- # export SPDK_TEST_SCANBUILD 00:08:02.912 03:40:21 -- common/autotest_common.sh@148 -- # : e810 00:08:02.912 03:40:21 -- common/autotest_common.sh@149 -- # export SPDK_TEST_NVMF_NICS 00:08:02.912 03:40:21 -- common/autotest_common.sh@150 -- # : 0 00:08:02.912 03:40:21 -- 
common/autotest_common.sh@151 -- # export SPDK_TEST_SMA 00:08:02.912 03:40:21 -- common/autotest_common.sh@152 -- # : 0 00:08:02.912 03:40:21 -- common/autotest_common.sh@153 -- # export SPDK_TEST_DAOS 00:08:02.912 03:40:21 -- common/autotest_common.sh@154 -- # : 0 00:08:02.912 03:40:21 -- common/autotest_common.sh@155 -- # export SPDK_TEST_XNVME 00:08:02.912 03:40:21 -- common/autotest_common.sh@156 -- # : 0 00:08:02.912 03:40:21 -- common/autotest_common.sh@157 -- # export SPDK_TEST_ACCEL_DSA 00:08:02.912 03:40:21 -- common/autotest_common.sh@158 -- # : 0 00:08:02.912 03:40:21 -- common/autotest_common.sh@159 -- # export SPDK_TEST_ACCEL_IAA 00:08:02.912 03:40:21 -- common/autotest_common.sh@160 -- # : 0 00:08:02.912 03:40:21 -- common/autotest_common.sh@161 -- # export SPDK_TEST_ACCEL_IOAT 00:08:02.912 03:40:21 -- common/autotest_common.sh@163 -- # : 00:08:02.912 03:40:21 -- common/autotest_common.sh@164 -- # export SPDK_TEST_FUZZER_TARGET 00:08:02.912 03:40:21 -- common/autotest_common.sh@165 -- # : 0 00:08:02.912 03:40:21 -- common/autotest_common.sh@166 -- # export SPDK_TEST_NVMF_MDNS 00:08:02.912 03:40:21 -- common/autotest_common.sh@167 -- # : 0 00:08:02.912 03:40:21 -- common/autotest_common.sh@168 -- # export SPDK_JSONRPC_GO_CLIENT 00:08:02.912 03:40:21 -- common/autotest_common.sh@171 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:08:02.912 03:40:21 -- common/autotest_common.sh@171 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:08:02.912 03:40:21 -- common/autotest_common.sh@172 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:08:02.912 03:40:21 -- common/autotest_common.sh@172 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:08:02.912 03:40:21 -- common/autotest_common.sh@173 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:02.912 03:40:21 -- common/autotest_common.sh@173 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:02.912 03:40:21 -- common/autotest_common.sh@174 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:02.912 03:40:21 -- common/autotest_common.sh@174 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:02.912 03:40:21 -- common/autotest_common.sh@177 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:08:02.912 03:40:21 -- common/autotest_common.sh@177 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:08:02.912 03:40:21 -- common/autotest_common.sh@181 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:08:02.913 03:40:21 -- common/autotest_common.sh@181 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:08:02.913 03:40:21 -- common/autotest_common.sh@185 -- # export PYTHONDONTWRITEBYTECODE=1 00:08:02.913 03:40:21 -- common/autotest_common.sh@185 -- # PYTHONDONTWRITEBYTECODE=1 00:08:02.913 03:40:21 -- common/autotest_common.sh@189 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:08:02.913 03:40:21 -- common/autotest_common.sh@189 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:08:02.913 03:40:21 -- common/autotest_common.sh@190 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:08:02.913 03:40:21 -- common/autotest_common.sh@190 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:08:02.913 03:40:21 -- common/autotest_common.sh@194 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:08:02.913 03:40:21 -- common/autotest_common.sh@195 -- # rm -rf /var/tmp/asan_suppression_file 00:08:02.913 03:40:21 -- common/autotest_common.sh@196 -- # cat 00:08:02.913 03:40:21 -- common/autotest_common.sh@222 -- # echo leak:libfuse3.so 00:08:02.913 03:40:21 -- common/autotest_common.sh@224 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:08:02.913 03:40:21 -- 
common/autotest_common.sh@224 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:08:02.913 03:40:21 -- common/autotest_common.sh@226 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:08:02.913 03:40:21 -- common/autotest_common.sh@226 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:08:02.913 03:40:21 -- common/autotest_common.sh@228 -- # '[' -z /var/spdk/dependencies ']' 00:08:02.913 03:40:21 -- common/autotest_common.sh@231 -- # export DEPENDENCY_DIR 00:08:02.913 03:40:21 -- common/autotest_common.sh@235 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:08:02.913 03:40:21 -- common/autotest_common.sh@235 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:08:02.913 03:40:21 -- common/autotest_common.sh@236 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:08:02.913 03:40:21 -- common/autotest_common.sh@236 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:08:02.913 03:40:21 -- common/autotest_common.sh@239 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:08:02.913 03:40:21 -- common/autotest_common.sh@239 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:08:02.913 03:40:21 -- common/autotest_common.sh@240 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:08:02.913 03:40:21 -- common/autotest_common.sh@240 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:08:02.913 03:40:21 -- common/autotest_common.sh@242 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:08:02.913 03:40:21 -- common/autotest_common.sh@242 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:08:02.913 03:40:21 -- common/autotest_common.sh@245 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:08:02.913 03:40:21 -- common/autotest_common.sh@245 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:08:02.913 03:40:21 -- common/autotest_common.sh@248 -- # '[' 0 -eq 0 ']' 00:08:02.913 03:40:21 -- common/autotest_common.sh@249 -- # export valgrind= 00:08:02.913 03:40:21 -- common/autotest_common.sh@249 -- # valgrind= 00:08:02.913 03:40:21 -- common/autotest_common.sh@255 -- # uname -s 00:08:02.913 03:40:21 -- common/autotest_common.sh@255 -- # '[' Linux = Linux ']' 00:08:02.913 03:40:21 -- common/autotest_common.sh@256 -- # HUGEMEM=4096 00:08:02.913 03:40:21 -- common/autotest_common.sh@257 -- # export CLEAR_HUGE=yes 00:08:02.913 03:40:21 -- common/autotest_common.sh@257 -- # CLEAR_HUGE=yes 00:08:02.913 03:40:21 -- common/autotest_common.sh@258 -- # [[ 0 -eq 1 ]] 00:08:02.913 03:40:21 -- common/autotest_common.sh@258 -- # [[ 0 -eq 1 ]] 00:08:02.913 03:40:21 -- common/autotest_common.sh@265 -- # MAKE=make 00:08:02.913 03:40:21 -- common/autotest_common.sh@266 -- # MAKEFLAGS=-j48 00:08:02.913 03:40:21 -- common/autotest_common.sh@282 -- # export HUGEMEM=4096 00:08:02.913 03:40:21 -- common/autotest_common.sh@282 -- # HUGEMEM=4096 00:08:02.913 03:40:21 -- common/autotest_common.sh@284 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:08:02.913 03:40:21 -- common/autotest_common.sh@289 -- # NO_HUGE=() 00:08:02.913 03:40:21 -- common/autotest_common.sh@290 -- # TEST_MODE= 00:08:02.913 03:40:21 -- common/autotest_common.sh@291 -- # for i in "$@" 00:08:02.913 03:40:21 -- common/autotest_common.sh@292 -- # case "$i" in 00:08:02.913 03:40:21 -- common/autotest_common.sh@297 -- 
# TEST_TRANSPORT=tcp 00:08:02.913 03:40:21 -- common/autotest_common.sh@309 -- # [[ -z 2280022 ]] 00:08:02.913 03:40:21 -- common/autotest_common.sh@309 -- # kill -0 2280022 00:08:02.913 03:40:21 -- common/autotest_common.sh@1665 -- # set_test_storage 2147483648 00:08:02.913 03:40:21 -- common/autotest_common.sh@319 -- # [[ -v testdir ]] 00:08:02.913 03:40:21 -- common/autotest_common.sh@321 -- # local requested_size=2147483648 00:08:02.913 03:40:21 -- common/autotest_common.sh@322 -- # local mount target_dir 00:08:02.913 03:40:21 -- common/autotest_common.sh@324 -- # local -A mounts fss sizes avails uses 00:08:02.913 03:40:21 -- common/autotest_common.sh@325 -- # local source fs size avail mount use 00:08:02.913 03:40:21 -- common/autotest_common.sh@327 -- # local storage_fallback storage_candidates 00:08:02.913 03:40:21 -- common/autotest_common.sh@329 -- # mktemp -udt spdk.XXXXXX 00:08:02.913 03:40:21 -- common/autotest_common.sh@329 -- # storage_fallback=/tmp/spdk.PU9K9M 00:08:02.913 03:40:21 -- common/autotest_common.sh@334 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:08:02.913 03:40:21 -- common/autotest_common.sh@336 -- # [[ -n '' ]] 00:08:02.913 03:40:21 -- common/autotest_common.sh@341 -- # [[ -n '' ]] 00:08:02.913 03:40:21 -- common/autotest_common.sh@346 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.PU9K9M/tests/target /tmp/spdk.PU9K9M 00:08:02.913 03:40:21 -- common/autotest_common.sh@349 -- # requested_size=2214592512 00:08:02.913 03:40:21 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:08:02.913 03:40:21 -- common/autotest_common.sh@318 -- # df -T 00:08:02.913 03:40:21 -- common/autotest_common.sh@318 -- # grep -v Filesystem 00:08:02.913 03:40:21 -- common/autotest_common.sh@352 -- # mounts["$mount"]=spdk_devtmpfs 00:08:02.913 03:40:21 -- common/autotest_common.sh@352 -- # fss["$mount"]=devtmpfs 00:08:02.913 03:40:21 -- common/autotest_common.sh@353 -- # avails["$mount"]=67108864 00:08:02.913 03:40:21 -- common/autotest_common.sh@353 -- # sizes["$mount"]=67108864 00:08:02.913 03:40:21 -- common/autotest_common.sh@354 -- # uses["$mount"]=0 00:08:02.913 03:40:21 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:08:02.913 03:40:21 -- common/autotest_common.sh@352 -- # mounts["$mount"]=/dev/pmem0 00:08:02.913 03:40:21 -- common/autotest_common.sh@352 -- # fss["$mount"]=ext2 00:08:02.913 03:40:21 -- common/autotest_common.sh@353 -- # avails["$mount"]=953643008 00:08:02.913 03:40:21 -- common/autotest_common.sh@353 -- # sizes["$mount"]=5284429824 00:08:02.913 03:40:21 -- common/autotest_common.sh@354 -- # uses["$mount"]=4330786816 00:08:02.913 03:40:21 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:08:02.913 03:40:21 -- common/autotest_common.sh@352 -- # mounts["$mount"]=spdk_root 00:08:02.913 03:40:21 -- common/autotest_common.sh@352 -- # fss["$mount"]=overlay 00:08:02.913 03:40:21 -- common/autotest_common.sh@353 -- # avails["$mount"]=52975026176 00:08:02.913 03:40:21 -- common/autotest_common.sh@353 -- # sizes["$mount"]=61994708992 00:08:02.913 03:40:21 -- common/autotest_common.sh@354 -- # uses["$mount"]=9019682816 00:08:02.913 03:40:21 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:08:02.913 03:40:21 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:08:02.913 03:40:21 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 
00:08:02.913 03:40:21 -- common/autotest_common.sh@353 -- # avails["$mount"]=30943834112 00:08:02.913 03:40:21 -- common/autotest_common.sh@353 -- # sizes["$mount"]=30997352448 00:08:02.913 03:40:21 -- common/autotest_common.sh@354 -- # uses["$mount"]=53518336 00:08:02.913 03:40:21 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:08:02.913 03:40:21 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:08:02.913 03:40:21 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:08:02.913 03:40:21 -- common/autotest_common.sh@353 -- # avails["$mount"]=12390182912 00:08:02.913 03:40:21 -- common/autotest_common.sh@353 -- # sizes["$mount"]=12398944256 00:08:02.913 03:40:21 -- common/autotest_common.sh@354 -- # uses["$mount"]=8761344 00:08:02.913 03:40:21 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:08:02.913 03:40:21 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:08:02.913 03:40:21 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:08:02.913 03:40:21 -- common/autotest_common.sh@353 -- # avails["$mount"]=30996172800 00:08:02.913 03:40:21 -- common/autotest_common.sh@353 -- # sizes["$mount"]=30997356544 00:08:02.913 03:40:21 -- common/autotest_common.sh@354 -- # uses["$mount"]=1183744 00:08:02.913 03:40:21 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:08:02.913 03:40:21 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:08:02.913 03:40:21 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:08:02.913 03:40:21 -- common/autotest_common.sh@353 -- # avails["$mount"]=6199463936 00:08:02.913 03:40:21 -- common/autotest_common.sh@353 -- # sizes["$mount"]=6199468032 00:08:02.913 03:40:21 -- common/autotest_common.sh@354 -- # uses["$mount"]=4096 00:08:02.913 03:40:21 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:08:02.913 03:40:21 -- common/autotest_common.sh@357 -- # printf '* Looking for test storage...\n' 00:08:02.913 * Looking for test storage... 
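The set_test_storage helper entered above decides where the test's scratch data will live: it asks for about 2 GiB (the trace later pads the figure to 2214592512 bytes), builds a candidate list (the test directory itself, a mktemp-named fallback under /tmp, and the fallback root), creates those directories, and snapshots 'df -T' into associative arrays keyed by mount point; the lines that follow then pick the first candidate whose filesystem has enough free space. A condensed sketch of that flow, assuming testdir is set by the caller and using the same column order as the read in the trace:

    # Scratch-space candidates and free-space snapshot, as suggested by the trace.
    requested_size=2147483648                      # 2 GiB baseline
    storage_fallback=$(mktemp -udt spdk.XXXXXX)    # path only, not created (-u)
    storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback")
    mkdir -p "${storage_candidates[@]}"

    declare -A mounts fss sizes avails uses
    while read -r source fs size use avail _ mount; do
        mounts["$mount"]=$source
        fss["$mount"]=$fs
        sizes["$mount"]=$size
        avails["$mount"]=$avail
        uses["$mount"]=$use
    done < <(df -T | grep -v Filesystem)

    # Simplified selection: take the first candidate backed by enough free space.
    for target_dir in "${storage_candidates[@]}"; do
        mount=$(df "$target_dir" | awk '$1 !~ /Filesystem/ {print $6}')
        (( ${avails[$mount]:-0} >= requested_size )) && break
    done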
00:08:02.913 03:40:21 -- common/autotest_common.sh@359 -- # local target_space new_size 00:08:02.913 03:40:21 -- common/autotest_common.sh@360 -- # for target_dir in "${storage_candidates[@]}" 00:08:02.913 03:40:21 -- common/autotest_common.sh@363 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:02.913 03:40:21 -- common/autotest_common.sh@363 -- # awk '$1 !~ /Filesystem/{print $6}' 00:08:02.913 03:40:21 -- common/autotest_common.sh@363 -- # mount=/ 00:08:02.913 03:40:21 -- common/autotest_common.sh@365 -- # target_space=52975026176 00:08:02.913 03:40:21 -- common/autotest_common.sh@366 -- # (( target_space == 0 || target_space < requested_size )) 00:08:02.913 03:40:21 -- common/autotest_common.sh@369 -- # (( target_space >= requested_size )) 00:08:02.913 03:40:21 -- common/autotest_common.sh@371 -- # [[ overlay == tmpfs ]] 00:08:02.913 03:40:21 -- common/autotest_common.sh@371 -- # [[ overlay == ramfs ]] 00:08:02.913 03:40:21 -- common/autotest_common.sh@371 -- # [[ / == / ]] 00:08:02.913 03:40:21 -- common/autotest_common.sh@372 -- # new_size=11234275328 00:08:02.913 03:40:21 -- common/autotest_common.sh@373 -- # (( new_size * 100 / sizes[/] > 95 )) 00:08:02.913 03:40:21 -- common/autotest_common.sh@378 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:02.913 03:40:21 -- common/autotest_common.sh@378 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:02.913 03:40:21 -- common/autotest_common.sh@379 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:02.913 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:02.913 03:40:21 -- common/autotest_common.sh@380 -- # return 0 00:08:02.913 03:40:21 -- common/autotest_common.sh@1667 -- # set -o errtrace 00:08:02.913 03:40:21 -- common/autotest_common.sh@1668 -- # shopt -s extdebug 00:08:02.913 03:40:21 -- common/autotest_common.sh@1669 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:08:02.913 03:40:21 -- common/autotest_common.sh@1671 -- # PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:08:02.913 03:40:21 -- common/autotest_common.sh@1672 -- # true 00:08:02.914 03:40:21 -- common/autotest_common.sh@1674 -- # xtrace_fd 00:08:02.914 03:40:21 -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:08:02.914 03:40:21 -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:08:02.914 03:40:21 -- common/autotest_common.sh@27 -- # exec 00:08:02.914 03:40:21 -- common/autotest_common.sh@29 -- # exec 00:08:02.914 03:40:21 -- common/autotest_common.sh@31 -- # xtrace_restore 00:08:02.914 03:40:21 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:08:02.914 03:40:21 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:08:02.914 03:40:21 -- common/autotest_common.sh@18 -- # set -x 00:08:02.914 03:40:21 -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:02.914 03:40:21 -- nvmf/common.sh@7 -- # uname -s 00:08:02.914 03:40:21 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:02.914 03:40:21 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:02.914 03:40:21 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:02.914 03:40:21 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:02.914 03:40:21 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:02.914 03:40:21 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:02.914 03:40:21 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:02.914 03:40:21 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:02.914 03:40:21 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:02.914 03:40:21 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:02.914 03:40:21 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:02.914 03:40:21 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:02.914 03:40:21 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:02.914 03:40:21 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:02.914 03:40:21 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:02.914 03:40:21 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:02.914 03:40:21 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:02.914 03:40:21 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:02.914 03:40:21 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:02.914 03:40:21 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:02.914 03:40:21 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:02.914 03:40:21 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:02.914 03:40:21 -- paths/export.sh@5 -- # export PATH 00:08:02.914 03:40:21 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:02.914 03:40:21 -- nvmf/common.sh@46 -- # : 0 00:08:02.914 03:40:21 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:02.914 03:40:21 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:02.914 03:40:21 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:02.914 03:40:21 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:02.914 03:40:21 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:02.914 03:40:21 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:08:02.914 03:40:21 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:02.914 03:40:21 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:02.914 03:40:21 -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:08:02.914 03:40:21 -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:08:02.914 03:40:21 -- target/filesystem.sh@15 -- # nvmftestinit 00:08:02.914 03:40:21 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:08:02.914 03:40:21 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:02.914 03:40:21 -- nvmf/common.sh@436 -- # prepare_net_devs 00:08:02.914 03:40:21 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:08:02.914 03:40:21 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:08:02.914 03:40:21 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:02.914 03:40:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:02.914 03:40:21 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:02.914 03:40:21 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:08:02.914 03:40:21 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:08:02.914 03:40:21 -- nvmf/common.sh@284 -- # xtrace_disable 00:08:02.914 03:40:21 -- common/autotest_common.sh@10 -- # set +x 00:08:04.821 03:40:23 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:08:04.821 03:40:23 -- nvmf/common.sh@290 -- # pci_devs=() 00:08:04.821 03:40:23 -- nvmf/common.sh@290 -- # local -a pci_devs 00:08:04.821 03:40:23 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:08:04.821 03:40:23 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:08:04.821 03:40:23 -- nvmf/common.sh@292 -- # pci_drivers=() 00:08:04.821 03:40:23 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:08:04.821 03:40:23 -- 
nvmf/common.sh@294 -- # net_devs=() 00:08:04.821 03:40:23 -- nvmf/common.sh@294 -- # local -ga net_devs 00:08:04.821 03:40:23 -- nvmf/common.sh@295 -- # e810=() 00:08:04.821 03:40:23 -- nvmf/common.sh@295 -- # local -ga e810 00:08:04.821 03:40:23 -- nvmf/common.sh@296 -- # x722=() 00:08:04.821 03:40:23 -- nvmf/common.sh@296 -- # local -ga x722 00:08:04.821 03:40:23 -- nvmf/common.sh@297 -- # mlx=() 00:08:04.822 03:40:23 -- nvmf/common.sh@297 -- # local -ga mlx 00:08:04.822 03:40:23 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:04.822 03:40:23 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:04.822 03:40:23 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:04.822 03:40:23 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:04.822 03:40:23 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:04.822 03:40:23 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:04.822 03:40:23 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:04.822 03:40:23 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:04.822 03:40:23 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:04.822 03:40:23 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:04.822 03:40:23 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:04.822 03:40:23 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:08:04.822 03:40:23 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:08:04.822 03:40:23 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:08:04.822 03:40:23 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:08:04.822 03:40:23 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:08:04.822 03:40:23 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:08:04.822 03:40:23 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:08:04.822 03:40:23 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:04.822 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:04.822 03:40:23 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:08:04.822 03:40:23 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:08:04.822 03:40:23 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:04.822 03:40:23 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:04.822 03:40:23 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:08:04.822 03:40:23 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:08:04.822 03:40:23 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:04.822 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:04.822 03:40:23 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:08:04.822 03:40:23 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:08:04.822 03:40:23 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:04.822 03:40:23 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:04.822 03:40:23 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:08:04.822 03:40:23 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:08:04.822 03:40:23 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:08:04.822 03:40:23 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:08:04.822 03:40:23 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:08:04.822 03:40:23 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:04.822 03:40:23 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:08:04.822 03:40:23 -- nvmf/common.sh@387 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:04.822 03:40:23 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:04.822 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:04.822 03:40:23 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:08:04.822 03:40:23 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:08:04.822 03:40:23 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:04.822 03:40:23 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:08:04.822 03:40:23 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:04.822 03:40:23 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:04.822 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:04.822 03:40:23 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:08:04.822 03:40:23 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:08:04.822 03:40:23 -- nvmf/common.sh@402 -- # is_hw=yes 00:08:04.822 03:40:23 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:08:04.822 03:40:23 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:08:04.822 03:40:23 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:08:04.822 03:40:23 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:04.822 03:40:23 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:04.822 03:40:23 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:04.822 03:40:23 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:08:04.822 03:40:23 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:04.822 03:40:23 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:04.822 03:40:23 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:08:04.822 03:40:23 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:04.822 03:40:23 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:04.822 03:40:23 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:08:04.822 03:40:23 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:08:04.822 03:40:23 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:08:04.822 03:40:23 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:04.822 03:40:23 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:04.822 03:40:23 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:04.822 03:40:23 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:08:04.822 03:40:23 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:04.822 03:40:23 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:04.822 03:40:23 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:04.822 03:40:23 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:08:04.822 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:04.822 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.241 ms 00:08:04.822 00:08:04.822 --- 10.0.0.2 ping statistics --- 00:08:04.822 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:04.822 rtt min/avg/max/mdev = 0.241/0.241/0.241/0.000 ms 00:08:04.822 03:40:23 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:04.822 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:04.822 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.187 ms 00:08:04.822 00:08:04.822 --- 10.0.0.1 ping statistics --- 00:08:04.822 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:04.822 rtt min/avg/max/mdev = 0.187/0.187/0.187/0.000 ms 00:08:04.822 03:40:23 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:04.822 03:40:23 -- nvmf/common.sh@410 -- # return 0 00:08:04.822 03:40:23 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:08:04.822 03:40:23 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:04.822 03:40:23 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:08:04.822 03:40:23 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:08:04.822 03:40:23 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:04.822 03:40:23 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:08:04.822 03:40:23 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:08:04.822 03:40:23 -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:08:04.822 03:40:23 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:08:04.822 03:40:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:04.822 03:40:23 -- common/autotest_common.sh@10 -- # set +x 00:08:04.822 ************************************ 00:08:04.822 START TEST nvmf_filesystem_no_in_capsule 00:08:04.822 ************************************ 00:08:04.822 03:40:23 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_part 0 00:08:04.822 03:40:23 -- target/filesystem.sh@47 -- # in_capsule=0 00:08:04.822 03:40:23 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:08:04.822 03:40:23 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:04.822 03:40:23 -- common/autotest_common.sh@712 -- # xtrace_disable 00:08:04.822 03:40:23 -- common/autotest_common.sh@10 -- # set +x 00:08:04.822 03:40:23 -- nvmf/common.sh@469 -- # nvmfpid=2281648 00:08:04.822 03:40:23 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:04.822 03:40:23 -- nvmf/common.sh@470 -- # waitforlisten 2281648 00:08:04.822 03:40:23 -- common/autotest_common.sh@819 -- # '[' -z 2281648 ']' 00:08:04.822 03:40:23 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:04.822 03:40:23 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:04.822 03:40:23 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:04.822 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:04.822 03:40:23 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:04.822 03:40:23 -- common/autotest_common.sh@10 -- # set +x 00:08:04.822 [2024-07-14 03:40:23.735415] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
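nvmf_tcp_init, traced just above, turns the two E810 ports found earlier into a point-to-point test topology: the target-side port cvl_0_0 is moved into a private network namespace and addressed 10.0.0.2, the initiator-side port cvl_0_1 stays in the root namespace as 10.0.0.1, TCP port 4420 is opened, and a ping in each direction proves reachability before nvmf_tgt is started inside the namespace. A simplified sketch of the same steps (interface and namespace names follow the trace; the nvmf_tgt path is this workspace's build output, and capturing the PID with $! is a simplification):

    # Split the two ports into target (namespace) and initiator (root namespace) roles.
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                    # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # target -> initiator

    # The NVMe-oF target then runs inside the namespace with the mask used by the test.
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!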
00:08:04.822 [2024-07-14 03:40:23.735509] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:05.081 EAL: No free 2048 kB hugepages reported on node 1 00:08:05.081 [2024-07-14 03:40:23.802875] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:05.081 [2024-07-14 03:40:23.893355] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:05.082 [2024-07-14 03:40:23.893531] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:05.082 [2024-07-14 03:40:23.893548] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:05.082 [2024-07-14 03:40:23.893560] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:05.082 [2024-07-14 03:40:23.893622] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:05.082 [2024-07-14 03:40:23.893653] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:05.082 [2024-07-14 03:40:23.893681] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:05.082 [2024-07-14 03:40:23.893683] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:06.017 03:40:24 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:06.017 03:40:24 -- common/autotest_common.sh@852 -- # return 0 00:08:06.017 03:40:24 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:06.017 03:40:24 -- common/autotest_common.sh@718 -- # xtrace_disable 00:08:06.017 03:40:24 -- common/autotest_common.sh@10 -- # set +x 00:08:06.017 03:40:24 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:06.017 03:40:24 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:08:06.017 03:40:24 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:08:06.017 03:40:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:06.017 03:40:24 -- common/autotest_common.sh@10 -- # set +x 00:08:06.017 [2024-07-14 03:40:24.742571] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:06.017 03:40:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:06.017 03:40:24 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:08:06.017 03:40:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:06.017 03:40:24 -- common/autotest_common.sh@10 -- # set +x 00:08:06.017 Malloc1 00:08:06.017 03:40:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:06.017 03:40:24 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:06.017 03:40:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:06.017 03:40:24 -- common/autotest_common.sh@10 -- # set +x 00:08:06.017 03:40:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:06.017 03:40:24 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:06.017 03:40:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:06.017 03:40:24 -- common/autotest_common.sh@10 -- # set +x 00:08:06.017 03:40:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:06.017 03:40:24 -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
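Once the target is up and its RPC socket answers, the test provisions everything over JSON-RPC: a TCP transport with in-capsule data disabled (this is the no_in_capsule variant, hence -c 0), a 512 MiB malloc bdev with 512-byte blocks, a subsystem, a namespace backed by that bdev, and a TCP listener on 10.0.0.2:4420. The rpc_cmd helper in the trace issues these calls; roughly the same sequence with scripts/rpc.py directly would look like this (a sketch, with parameters copied from the trace and the path relative to the spdk checkout):

    rpc=./scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192 -c 0        # -c 0: no in-capsule data
    $rpc bdev_malloc_create 512 512 -b Malloc1               # 512 MiB, 512 B block size
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The initiator side then attaches with 'nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1' plus the host NQN/ID, which is what produces the nvme0n1 block device the filesystem tests partition and format further down.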
00:08:06.017 03:40:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:06.017 03:40:24 -- common/autotest_common.sh@10 -- # set +x 00:08:06.017 [2024-07-14 03:40:24.921959] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:06.017 03:40:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:06.017 03:40:24 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:08:06.017 03:40:24 -- common/autotest_common.sh@1357 -- # local bdev_name=Malloc1 00:08:06.017 03:40:24 -- common/autotest_common.sh@1358 -- # local bdev_info 00:08:06.017 03:40:24 -- common/autotest_common.sh@1359 -- # local bs 00:08:06.017 03:40:24 -- common/autotest_common.sh@1360 -- # local nb 00:08:06.017 03:40:24 -- common/autotest_common.sh@1361 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:08:06.017 03:40:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:06.017 03:40:24 -- common/autotest_common.sh@10 -- # set +x 00:08:06.017 03:40:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:06.017 03:40:24 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:08:06.017 { 00:08:06.017 "name": "Malloc1", 00:08:06.017 "aliases": [ 00:08:06.017 "0b74da1d-b593-4683-9c3d-b3e846cd8339" 00:08:06.017 ], 00:08:06.017 "product_name": "Malloc disk", 00:08:06.017 "block_size": 512, 00:08:06.017 "num_blocks": 1048576, 00:08:06.017 "uuid": "0b74da1d-b593-4683-9c3d-b3e846cd8339", 00:08:06.017 "assigned_rate_limits": { 00:08:06.017 "rw_ios_per_sec": 0, 00:08:06.018 "rw_mbytes_per_sec": 0, 00:08:06.018 "r_mbytes_per_sec": 0, 00:08:06.018 "w_mbytes_per_sec": 0 00:08:06.018 }, 00:08:06.018 "claimed": true, 00:08:06.018 "claim_type": "exclusive_write", 00:08:06.018 "zoned": false, 00:08:06.018 "supported_io_types": { 00:08:06.018 "read": true, 00:08:06.018 "write": true, 00:08:06.018 "unmap": true, 00:08:06.018 "write_zeroes": true, 00:08:06.018 "flush": true, 00:08:06.018 "reset": true, 00:08:06.018 "compare": false, 00:08:06.018 "compare_and_write": false, 00:08:06.018 "abort": true, 00:08:06.018 "nvme_admin": false, 00:08:06.018 "nvme_io": false 00:08:06.018 }, 00:08:06.018 "memory_domains": [ 00:08:06.018 { 00:08:06.018 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:06.018 "dma_device_type": 2 00:08:06.018 } 00:08:06.018 ], 00:08:06.018 "driver_specific": {} 00:08:06.018 } 00:08:06.018 ]' 00:08:06.018 03:40:24 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:08:06.276 03:40:24 -- common/autotest_common.sh@1362 -- # bs=512 00:08:06.276 03:40:24 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:08:06.276 03:40:25 -- common/autotest_common.sh@1363 -- # nb=1048576 00:08:06.276 03:40:25 -- common/autotest_common.sh@1366 -- # bdev_size=512 00:08:06.276 03:40:25 -- common/autotest_common.sh@1367 -- # echo 512 00:08:06.276 03:40:25 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:08:06.276 03:40:25 -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:06.843 03:40:25 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:08:06.843 03:40:25 -- common/autotest_common.sh@1177 -- # local i=0 00:08:06.843 03:40:25 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:08:06.843 03:40:25 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:08:06.843 03:40:25 -- common/autotest_common.sh@1184 -- # sleep 2 00:08:09.385 03:40:27 
-- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:08:09.385 03:40:27 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:08:09.385 03:40:27 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:08:09.385 03:40:27 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:08:09.385 03:40:27 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:08:09.385 03:40:27 -- common/autotest_common.sh@1187 -- # return 0 00:08:09.385 03:40:27 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:08:09.385 03:40:27 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:08:09.385 03:40:27 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:08:09.385 03:40:27 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:08:09.385 03:40:27 -- setup/common.sh@76 -- # local dev=nvme0n1 00:08:09.385 03:40:27 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:08:09.385 03:40:27 -- setup/common.sh@80 -- # echo 536870912 00:08:09.385 03:40:27 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:08:09.385 03:40:27 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:08:09.385 03:40:27 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:08:09.385 03:40:27 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:08:09.385 03:40:28 -- target/filesystem.sh@69 -- # partprobe 00:08:09.955 03:40:28 -- target/filesystem.sh@70 -- # sleep 1 00:08:10.896 03:40:29 -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:08:10.896 03:40:29 -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:08:10.896 03:40:29 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:08:10.896 03:40:29 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:10.896 03:40:29 -- common/autotest_common.sh@10 -- # set +x 00:08:10.896 ************************************ 00:08:10.896 START TEST filesystem_ext4 00:08:10.896 ************************************ 00:08:10.896 03:40:29 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create ext4 nvme0n1 00:08:10.896 03:40:29 -- target/filesystem.sh@18 -- # fstype=ext4 00:08:10.896 03:40:29 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:10.896 03:40:29 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:08:10.896 03:40:29 -- common/autotest_common.sh@902 -- # local fstype=ext4 00:08:10.896 03:40:29 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:08:10.896 03:40:29 -- common/autotest_common.sh@904 -- # local i=0 00:08:10.896 03:40:29 -- common/autotest_common.sh@905 -- # local force 00:08:10.896 03:40:29 -- common/autotest_common.sh@907 -- # '[' ext4 = ext4 ']' 00:08:10.896 03:40:29 -- common/autotest_common.sh@908 -- # force=-F 00:08:10.896 03:40:29 -- common/autotest_common.sh@913 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:08:10.896 mke2fs 1.46.5 (30-Dec-2021) 00:08:11.156 Discarding device blocks: 0/522240 done 00:08:11.156 Creating filesystem with 522240 1k blocks and 130560 inodes 00:08:11.156 Filesystem UUID: 5b786aed-be55-4988-a9d8-e02e53fe3800 00:08:11.156 Superblock backups stored on blocks: 00:08:11.156 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:08:11.156 00:08:11.156 Allocating group tables: 0/64 done 00:08:11.156 Writing inode tables: 0/64 done 00:08:11.156 Creating journal (8192 blocks): done 00:08:11.156 Writing superblocks and filesystem accounting information: 0/64 done 00:08:11.156 00:08:11.156 03:40:29 -- 
common/autotest_common.sh@921 -- # return 0 00:08:11.156 03:40:29 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:11.415 03:40:30 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:11.415 03:40:30 -- target/filesystem.sh@25 -- # sync 00:08:11.415 03:40:30 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:11.415 03:40:30 -- target/filesystem.sh@27 -- # sync 00:08:11.415 03:40:30 -- target/filesystem.sh@29 -- # i=0 00:08:11.415 03:40:30 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:11.415 03:40:30 -- target/filesystem.sh@37 -- # kill -0 2281648 00:08:11.415 03:40:30 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:11.415 03:40:30 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:11.415 03:40:30 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:11.415 03:40:30 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:11.415 00:08:11.415 real 0m0.481s 00:08:11.415 user 0m0.014s 00:08:11.415 sys 0m0.064s 00:08:11.415 03:40:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:11.415 03:40:30 -- common/autotest_common.sh@10 -- # set +x 00:08:11.415 ************************************ 00:08:11.415 END TEST filesystem_ext4 00:08:11.415 ************************************ 00:08:11.415 03:40:30 -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:08:11.415 03:40:30 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:08:11.415 03:40:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:11.415 03:40:30 -- common/autotest_common.sh@10 -- # set +x 00:08:11.415 ************************************ 00:08:11.415 START TEST filesystem_btrfs 00:08:11.415 ************************************ 00:08:11.415 03:40:30 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create btrfs nvme0n1 00:08:11.415 03:40:30 -- target/filesystem.sh@18 -- # fstype=btrfs 00:08:11.415 03:40:30 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:11.415 03:40:30 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:08:11.415 03:40:30 -- common/autotest_common.sh@902 -- # local fstype=btrfs 00:08:11.415 03:40:30 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:08:11.415 03:40:30 -- common/autotest_common.sh@904 -- # local i=0 00:08:11.415 03:40:30 -- common/autotest_common.sh@905 -- # local force 00:08:11.415 03:40:30 -- common/autotest_common.sh@907 -- # '[' btrfs = ext4 ']' 00:08:11.415 03:40:30 -- common/autotest_common.sh@910 -- # force=-f 00:08:11.415 03:40:30 -- common/autotest_common.sh@913 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:08:11.674 btrfs-progs v6.6.2 00:08:11.674 See https://btrfs.readthedocs.io for more information. 00:08:11.674 00:08:11.674 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:08:11.674 NOTE: several default settings have changed in version 5.15, please make sure 00:08:11.674 this does not affect your deployments: 00:08:11.674 - DUP for metadata (-m dup) 00:08:11.674 - enabled no-holes (-O no-holes) 00:08:11.674 - enabled free-space-tree (-R free-space-tree) 00:08:11.674 00:08:11.674 Label: (null) 00:08:11.674 UUID: 83724439-f860-4fa5-b867-b351893c24e5 00:08:11.674 Node size: 16384 00:08:11.674 Sector size: 4096 00:08:11.674 Filesystem size: 510.00MiB 00:08:11.674 Block group profiles: 00:08:11.674 Data: single 8.00MiB 00:08:11.674 Metadata: DUP 32.00MiB 00:08:11.674 System: DUP 8.00MiB 00:08:11.674 SSD detected: yes 00:08:11.674 Zoned device: no 00:08:11.674 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:08:11.674 Runtime features: free-space-tree 00:08:11.674 Checksum: crc32c 00:08:11.674 Number of devices: 1 00:08:11.674 Devices: 00:08:11.674 ID SIZE PATH 00:08:11.674 1 510.00MiB /dev/nvme0n1p1 00:08:11.674 00:08:11.674 03:40:30 -- common/autotest_common.sh@921 -- # return 0 00:08:11.674 03:40:30 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:12.240 03:40:31 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:12.240 03:40:31 -- target/filesystem.sh@25 -- # sync 00:08:12.240 03:40:31 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:12.240 03:40:31 -- target/filesystem.sh@27 -- # sync 00:08:12.240 03:40:31 -- target/filesystem.sh@29 -- # i=0 00:08:12.240 03:40:31 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:12.240 03:40:31 -- target/filesystem.sh@37 -- # kill -0 2281648 00:08:12.240 03:40:31 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:12.240 03:40:31 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:12.240 03:40:31 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:12.240 03:40:31 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:12.240 00:08:12.240 real 0m0.869s 00:08:12.240 user 0m0.026s 00:08:12.240 sys 0m0.106s 00:08:12.240 03:40:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:12.240 03:40:31 -- common/autotest_common.sh@10 -- # set +x 00:08:12.240 ************************************ 00:08:12.240 END TEST filesystem_btrfs 00:08:12.240 ************************************ 00:08:12.240 03:40:31 -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:08:12.240 03:40:31 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:08:12.240 03:40:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:12.240 03:40:31 -- common/autotest_common.sh@10 -- # set +x 00:08:12.241 ************************************ 00:08:12.241 START TEST filesystem_xfs 00:08:12.241 ************************************ 00:08:12.241 03:40:31 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create xfs nvme0n1 00:08:12.241 03:40:31 -- target/filesystem.sh@18 -- # fstype=xfs 00:08:12.241 03:40:31 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:12.241 03:40:31 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:08:12.241 03:40:31 -- common/autotest_common.sh@902 -- # local fstype=xfs 00:08:12.241 03:40:31 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:08:12.241 03:40:31 -- common/autotest_common.sh@904 -- # local i=0 00:08:12.241 03:40:31 -- common/autotest_common.sh@905 -- # local force 00:08:12.241 03:40:31 -- common/autotest_common.sh@907 -- # '[' xfs = ext4 ']' 00:08:12.241 03:40:31 -- common/autotest_common.sh@910 -- # force=-f 00:08:12.241 03:40:31 -- 
common/autotest_common.sh@913 -- # mkfs.xfs -f /dev/nvme0n1p1 00:08:12.501 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:08:12.501 = sectsz=512 attr=2, projid32bit=1 00:08:12.501 = crc=1 finobt=1, sparse=1, rmapbt=0 00:08:12.501 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:08:12.501 data = bsize=4096 blocks=130560, imaxpct=25 00:08:12.501 = sunit=0 swidth=0 blks 00:08:12.501 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:08:12.501 log =internal log bsize=4096 blocks=16384, version=2 00:08:12.501 = sectsz=512 sunit=0 blks, lazy-count=1 00:08:12.501 realtime =none extsz=4096 blocks=0, rtextents=0 00:08:13.472 Discarding blocks...Done. 00:08:13.472 03:40:32 -- common/autotest_common.sh@921 -- # return 0 00:08:13.472 03:40:32 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:16.024 03:40:34 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:16.024 03:40:34 -- target/filesystem.sh@25 -- # sync 00:08:16.024 03:40:34 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:16.024 03:40:34 -- target/filesystem.sh@27 -- # sync 00:08:16.024 03:40:34 -- target/filesystem.sh@29 -- # i=0 00:08:16.024 03:40:34 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:16.024 03:40:34 -- target/filesystem.sh@37 -- # kill -0 2281648 00:08:16.024 03:40:34 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:16.024 03:40:34 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:16.024 03:40:34 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:16.024 03:40:34 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:16.024 00:08:16.024 real 0m3.422s 00:08:16.024 user 0m0.013s 00:08:16.024 sys 0m0.065s 00:08:16.024 03:40:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:16.024 03:40:34 -- common/autotest_common.sh@10 -- # set +x 00:08:16.024 ************************************ 00:08:16.024 END TEST filesystem_xfs 00:08:16.024 ************************************ 00:08:16.024 03:40:34 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:08:16.024 03:40:34 -- target/filesystem.sh@93 -- # sync 00:08:16.024 03:40:34 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:16.285 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:16.285 03:40:34 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:16.285 03:40:34 -- common/autotest_common.sh@1198 -- # local i=0 00:08:16.285 03:40:34 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:08:16.285 03:40:34 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:16.285 03:40:34 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:08:16.285 03:40:34 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:16.285 03:40:34 -- common/autotest_common.sh@1210 -- # return 0 00:08:16.285 03:40:34 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:16.285 03:40:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:16.285 03:40:34 -- common/autotest_common.sh@10 -- # set +x 00:08:16.285 03:40:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:16.285 03:40:34 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:08:16.285 03:40:34 -- target/filesystem.sh@101 -- # killprocess 2281648 00:08:16.285 03:40:34 -- common/autotest_common.sh@926 -- # '[' -z 2281648 ']' 00:08:16.285 03:40:34 -- common/autotest_common.sh@930 -- # kill -0 2281648 00:08:16.285 03:40:34 -- 
common/autotest_common.sh@931 -- # uname 00:08:16.285 03:40:34 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:16.285 03:40:34 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2281648 00:08:16.285 03:40:35 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:08:16.285 03:40:35 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:08:16.285 03:40:35 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2281648' 00:08:16.285 killing process with pid 2281648 00:08:16.285 03:40:35 -- common/autotest_common.sh@945 -- # kill 2281648 00:08:16.285 03:40:35 -- common/autotest_common.sh@950 -- # wait 2281648 00:08:16.546 03:40:35 -- target/filesystem.sh@102 -- # nvmfpid= 00:08:16.546 00:08:16.546 real 0m11.774s 00:08:16.546 user 0m45.464s 00:08:16.546 sys 0m1.745s 00:08:16.546 03:40:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:16.546 03:40:35 -- common/autotest_common.sh@10 -- # set +x 00:08:16.546 ************************************ 00:08:16.546 END TEST nvmf_filesystem_no_in_capsule 00:08:16.546 ************************************ 00:08:16.546 03:40:35 -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:08:16.546 03:40:35 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:08:16.546 03:40:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:16.546 03:40:35 -- common/autotest_common.sh@10 -- # set +x 00:08:16.546 ************************************ 00:08:16.546 START TEST nvmf_filesystem_in_capsule 00:08:16.546 ************************************ 00:08:16.546 03:40:35 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_part 4096 00:08:16.546 03:40:35 -- target/filesystem.sh@47 -- # in_capsule=4096 00:08:16.546 03:40:35 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:08:16.546 03:40:35 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:16.546 03:40:35 -- common/autotest_common.sh@712 -- # xtrace_disable 00:08:16.546 03:40:35 -- common/autotest_common.sh@10 -- # set +x 00:08:16.806 03:40:35 -- nvmf/common.sh@469 -- # nvmfpid=2283246 00:08:16.806 03:40:35 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:16.806 03:40:35 -- nvmf/common.sh@470 -- # waitforlisten 2283246 00:08:16.806 03:40:35 -- common/autotest_common.sh@819 -- # '[' -z 2283246 ']' 00:08:16.806 03:40:35 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:16.806 03:40:35 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:16.806 03:40:35 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:16.806 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:16.806 03:40:35 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:16.806 03:40:35 -- common/autotest_common.sh@10 -- # set +x 00:08:16.806 [2024-07-14 03:40:35.533901] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:08:16.806 [2024-07-14 03:40:35.533999] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:16.806 EAL: No free 2048 kB hugepages reported on node 1 00:08:16.806 [2024-07-14 03:40:35.601058] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:16.806 [2024-07-14 03:40:35.688758] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:16.806 [2024-07-14 03:40:35.688921] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:16.806 [2024-07-14 03:40:35.688940] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:16.806 [2024-07-14 03:40:35.688953] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:16.806 [2024-07-14 03:40:35.689009] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:16.806 [2024-07-14 03:40:35.689067] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:16.806 [2024-07-14 03:40:35.689097] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:16.806 [2024-07-14 03:40:35.689099] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:17.745 03:40:36 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:17.746 03:40:36 -- common/autotest_common.sh@852 -- # return 0 00:08:17.746 03:40:36 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:17.746 03:40:36 -- common/autotest_common.sh@718 -- # xtrace_disable 00:08:17.746 03:40:36 -- common/autotest_common.sh@10 -- # set +x 00:08:17.746 03:40:36 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:17.746 03:40:36 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:08:17.746 03:40:36 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:08:17.746 03:40:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:17.746 03:40:36 -- common/autotest_common.sh@10 -- # set +x 00:08:17.746 [2024-07-14 03:40:36.541547] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:17.746 03:40:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:17.746 03:40:36 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:08:17.746 03:40:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:17.746 03:40:36 -- common/autotest_common.sh@10 -- # set +x 00:08:18.005 Malloc1 00:08:18.005 03:40:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:18.005 03:40:36 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:18.005 03:40:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:18.005 03:40:36 -- common/autotest_common.sh@10 -- # set +x 00:08:18.005 03:40:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:18.005 03:40:36 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:18.005 03:40:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:18.005 03:40:36 -- common/autotest_common.sh@10 -- # set +x 00:08:18.005 03:40:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:18.005 03:40:36 -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
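For reference, the target-side setup traced above reduces to five RPC calls. A condensed sketch of the same sequence, assuming SPDK's scripts/rpc.py against the default /var/tmp/spdk.sock (the test wraps these in its own rpc_cmd helper, so the rpc.py wrapper shown here is an assumption, not the literal command line from the log):

    # sketch: export a 512 MiB malloc bdev over NVMe/TCP, mirroring the rpc_cmd trace above
    rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 4096   # -c 4096 is the in-capsule data size for this "in_capsule" variant
    rpc.py bdev_malloc_create 512 512 -b Malloc1             # 512 MiB bdev with 512-byte blocks
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The bdev_get_bdevs output that follows confirms the geometry: block_size 512 * num_blocks 1048576 = 536870912 bytes, which is the malloc_size the script later compares against the connected namespace.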
00:08:18.005 03:40:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:18.005 03:40:36 -- common/autotest_common.sh@10 -- # set +x 00:08:18.005 [2024-07-14 03:40:36.715874] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:18.005 03:40:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:18.005 03:40:36 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:08:18.005 03:40:36 -- common/autotest_common.sh@1357 -- # local bdev_name=Malloc1 00:08:18.005 03:40:36 -- common/autotest_common.sh@1358 -- # local bdev_info 00:08:18.005 03:40:36 -- common/autotest_common.sh@1359 -- # local bs 00:08:18.005 03:40:36 -- common/autotest_common.sh@1360 -- # local nb 00:08:18.005 03:40:36 -- common/autotest_common.sh@1361 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:08:18.005 03:40:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:18.005 03:40:36 -- common/autotest_common.sh@10 -- # set +x 00:08:18.005 03:40:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:18.005 03:40:36 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:08:18.005 { 00:08:18.005 "name": "Malloc1", 00:08:18.005 "aliases": [ 00:08:18.005 "fc9e471d-203e-498f-bef5-31859b7517bb" 00:08:18.005 ], 00:08:18.005 "product_name": "Malloc disk", 00:08:18.005 "block_size": 512, 00:08:18.005 "num_blocks": 1048576, 00:08:18.005 "uuid": "fc9e471d-203e-498f-bef5-31859b7517bb", 00:08:18.005 "assigned_rate_limits": { 00:08:18.005 "rw_ios_per_sec": 0, 00:08:18.005 "rw_mbytes_per_sec": 0, 00:08:18.005 "r_mbytes_per_sec": 0, 00:08:18.005 "w_mbytes_per_sec": 0 00:08:18.005 }, 00:08:18.005 "claimed": true, 00:08:18.005 "claim_type": "exclusive_write", 00:08:18.005 "zoned": false, 00:08:18.005 "supported_io_types": { 00:08:18.005 "read": true, 00:08:18.005 "write": true, 00:08:18.005 "unmap": true, 00:08:18.005 "write_zeroes": true, 00:08:18.005 "flush": true, 00:08:18.005 "reset": true, 00:08:18.005 "compare": false, 00:08:18.005 "compare_and_write": false, 00:08:18.005 "abort": true, 00:08:18.005 "nvme_admin": false, 00:08:18.005 "nvme_io": false 00:08:18.005 }, 00:08:18.005 "memory_domains": [ 00:08:18.005 { 00:08:18.005 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:18.005 "dma_device_type": 2 00:08:18.005 } 00:08:18.005 ], 00:08:18.005 "driver_specific": {} 00:08:18.005 } 00:08:18.005 ]' 00:08:18.005 03:40:36 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:08:18.005 03:40:36 -- common/autotest_common.sh@1362 -- # bs=512 00:08:18.005 03:40:36 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:08:18.005 03:40:36 -- common/autotest_common.sh@1363 -- # nb=1048576 00:08:18.005 03:40:36 -- common/autotest_common.sh@1366 -- # bdev_size=512 00:08:18.005 03:40:36 -- common/autotest_common.sh@1367 -- # echo 512 00:08:18.005 03:40:36 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:08:18.005 03:40:36 -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:18.572 03:40:37 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:08:18.572 03:40:37 -- common/autotest_common.sh@1177 -- # local i=0 00:08:18.572 03:40:37 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:08:18.572 03:40:37 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:08:18.572 03:40:37 -- common/autotest_common.sh@1184 -- # sleep 2 00:08:21.107 03:40:39 
-- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:08:21.107 03:40:39 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:08:21.107 03:40:39 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:08:21.107 03:40:39 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:08:21.107 03:40:39 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:08:21.107 03:40:39 -- common/autotest_common.sh@1187 -- # return 0 00:08:21.107 03:40:39 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:08:21.107 03:40:39 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:08:21.107 03:40:39 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:08:21.107 03:40:39 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:08:21.107 03:40:39 -- setup/common.sh@76 -- # local dev=nvme0n1 00:08:21.107 03:40:39 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:08:21.107 03:40:39 -- setup/common.sh@80 -- # echo 536870912 00:08:21.107 03:40:39 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:08:21.107 03:40:39 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:08:21.107 03:40:39 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:08:21.107 03:40:39 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:08:21.107 03:40:39 -- target/filesystem.sh@69 -- # partprobe 00:08:21.674 03:40:40 -- target/filesystem.sh@70 -- # sleep 1 00:08:22.610 03:40:41 -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:08:22.610 03:40:41 -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:08:22.610 03:40:41 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:08:22.610 03:40:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:22.610 03:40:41 -- common/autotest_common.sh@10 -- # set +x 00:08:22.610 ************************************ 00:08:22.610 START TEST filesystem_in_capsule_ext4 00:08:22.610 ************************************ 00:08:22.610 03:40:41 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create ext4 nvme0n1 00:08:22.610 03:40:41 -- target/filesystem.sh@18 -- # fstype=ext4 00:08:22.610 03:40:41 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:22.610 03:40:41 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:08:22.610 03:40:41 -- common/autotest_common.sh@902 -- # local fstype=ext4 00:08:22.610 03:40:41 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:08:22.610 03:40:41 -- common/autotest_common.sh@904 -- # local i=0 00:08:22.610 03:40:41 -- common/autotest_common.sh@905 -- # local force 00:08:22.610 03:40:41 -- common/autotest_common.sh@907 -- # '[' ext4 = ext4 ']' 00:08:22.610 03:40:41 -- common/autotest_common.sh@908 -- # force=-F 00:08:22.610 03:40:41 -- common/autotest_common.sh@913 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:08:22.610 mke2fs 1.46.5 (30-Dec-2021) 00:08:22.610 Discarding device blocks: 0/522240 done 00:08:22.868 Creating filesystem with 522240 1k blocks and 130560 inodes 00:08:22.868 Filesystem UUID: 760cd8c4-89be-4f41-9061-e2d0298e20ed 00:08:22.868 Superblock backups stored on blocks: 00:08:22.868 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:08:22.868 00:08:22.868 Allocating group tables: 0/64 done 00:08:22.868 Writing inode tables: 0/64 done 00:08:22.868 Creating journal (8192 blocks): done 00:08:22.868 Writing superblocks and filesystem accounting information: 0/64 done 00:08:22.868 00:08:22.868 
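On the initiator side the same run follows the usual nvme-cli flow. A condensed sketch using the hostnqn/hostid and device name that appear in the trace above (whether the namespace really lands on /dev/nvme0n1 depends on the host, which is why the script resolves it from the SPDKISFASTANDAWESOME serial rather than hard-coding it):

    nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
                 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 \
                 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
    lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME        # poll until the namespace shows up
    parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100%   # one partition spanning the device
    partprobe
    mkfs.ext4 -F /dev/nvme0n1p1                                   # the btrfs/xfs variants differ only in the mkfs step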
03:40:41 -- common/autotest_common.sh@921 -- # return 0 00:08:22.868 03:40:41 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:23.807 03:40:42 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:23.807 03:40:42 -- target/filesystem.sh@25 -- # sync 00:08:23.807 03:40:42 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:23.807 03:40:42 -- target/filesystem.sh@27 -- # sync 00:08:23.807 03:40:42 -- target/filesystem.sh@29 -- # i=0 00:08:23.807 03:40:42 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:23.807 03:40:42 -- target/filesystem.sh@37 -- # kill -0 2283246 00:08:23.807 03:40:42 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:23.807 03:40:42 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:23.807 03:40:42 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:23.807 03:40:42 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:23.807 00:08:23.807 real 0m1.160s 00:08:23.807 user 0m0.019s 00:08:23.807 sys 0m0.053s 00:08:23.807 03:40:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:23.807 03:40:42 -- common/autotest_common.sh@10 -- # set +x 00:08:23.807 ************************************ 00:08:23.807 END TEST filesystem_in_capsule_ext4 00:08:23.807 ************************************ 00:08:23.807 03:40:42 -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:08:23.807 03:40:42 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:08:23.807 03:40:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:23.807 03:40:42 -- common/autotest_common.sh@10 -- # set +x 00:08:23.807 ************************************ 00:08:23.807 START TEST filesystem_in_capsule_btrfs 00:08:23.807 ************************************ 00:08:23.807 03:40:42 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create btrfs nvme0n1 00:08:23.807 03:40:42 -- target/filesystem.sh@18 -- # fstype=btrfs 00:08:23.807 03:40:42 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:23.807 03:40:42 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:08:23.807 03:40:42 -- common/autotest_common.sh@902 -- # local fstype=btrfs 00:08:23.807 03:40:42 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:08:23.807 03:40:42 -- common/autotest_common.sh@904 -- # local i=0 00:08:23.807 03:40:42 -- common/autotest_common.sh@905 -- # local force 00:08:23.807 03:40:42 -- common/autotest_common.sh@907 -- # '[' btrfs = ext4 ']' 00:08:23.807 03:40:42 -- common/autotest_common.sh@910 -- # force=-f 00:08:23.807 03:40:42 -- common/autotest_common.sh@913 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:08:24.065 btrfs-progs v6.6.2 00:08:24.065 See https://btrfs.readthedocs.io for more information. 00:08:24.065 00:08:24.065 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:08:24.065 NOTE: several default settings have changed in version 5.15, please make sure 00:08:24.065 this does not affect your deployments: 00:08:24.065 - DUP for metadata (-m dup) 00:08:24.065 - enabled no-holes (-O no-holes) 00:08:24.065 - enabled free-space-tree (-R free-space-tree) 00:08:24.065 00:08:24.065 Label: (null) 00:08:24.065 UUID: a9d45eb2-56ef-4d3b-9571-3529bd1d5fe9 00:08:24.065 Node size: 16384 00:08:24.065 Sector size: 4096 00:08:24.065 Filesystem size: 510.00MiB 00:08:24.065 Block group profiles: 00:08:24.065 Data: single 8.00MiB 00:08:24.065 Metadata: DUP 32.00MiB 00:08:24.065 System: DUP 8.00MiB 00:08:24.065 SSD detected: yes 00:08:24.065 Zoned device: no 00:08:24.065 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:08:24.065 Runtime features: free-space-tree 00:08:24.065 Checksum: crc32c 00:08:24.065 Number of devices: 1 00:08:24.065 Devices: 00:08:24.065 ID SIZE PATH 00:08:24.065 1 510.00MiB /dev/nvme0n1p1 00:08:24.065 00:08:24.065 03:40:42 -- common/autotest_common.sh@921 -- # return 0 00:08:24.065 03:40:42 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:25.001 03:40:43 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:25.001 03:40:43 -- target/filesystem.sh@25 -- # sync 00:08:25.001 03:40:43 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:25.001 03:40:43 -- target/filesystem.sh@27 -- # sync 00:08:25.001 03:40:43 -- target/filesystem.sh@29 -- # i=0 00:08:25.001 03:40:43 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:25.001 03:40:43 -- target/filesystem.sh@37 -- # kill -0 2283246 00:08:25.001 03:40:43 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:25.001 03:40:43 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:25.001 03:40:43 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:25.001 03:40:43 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:25.001 00:08:25.001 real 0m1.151s 00:08:25.001 user 0m0.029s 00:08:25.001 sys 0m0.101s 00:08:25.001 03:40:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:25.001 03:40:43 -- common/autotest_common.sh@10 -- # set +x 00:08:25.001 ************************************ 00:08:25.001 END TEST filesystem_in_capsule_btrfs 00:08:25.001 ************************************ 00:08:25.001 03:40:43 -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:08:25.001 03:40:43 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:08:25.001 03:40:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:25.001 03:40:43 -- common/autotest_common.sh@10 -- # set +x 00:08:25.001 ************************************ 00:08:25.001 START TEST filesystem_in_capsule_xfs 00:08:25.001 ************************************ 00:08:25.001 03:40:43 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create xfs nvme0n1 00:08:25.001 03:40:43 -- target/filesystem.sh@18 -- # fstype=xfs 00:08:25.001 03:40:43 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:25.001 03:40:43 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:08:25.001 03:40:43 -- common/autotest_common.sh@902 -- # local fstype=xfs 00:08:25.001 03:40:43 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:08:25.001 03:40:43 -- common/autotest_common.sh@904 -- # local i=0 00:08:25.001 03:40:43 -- common/autotest_common.sh@905 -- # local force 00:08:25.001 03:40:43 -- common/autotest_common.sh@907 -- # '[' xfs = ext4 ']' 00:08:25.001 03:40:43 -- common/autotest_common.sh@910 -- # force=-f 
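The make_filesystem helper visible in the xtrace only varies the force flag per filesystem type (mkfs.ext4 takes -F, mkfs.btrfs and mkfs.xfs take -f). A minimal reconstruction from those trace lines; the real helper in common/autotest_common.sh also keeps an i counter, which suggests a retry loop that is omitted here:

    make_filesystem() {
        # reconstructed sketch, not the verbatim helper
        local fstype=$1 dev_name=$2 force i=0
        if [ "$fstype" = ext4 ]; then
            force=-F        # mkfs.ext4 forces with -F
        else
            force=-f        # mkfs.btrfs / mkfs.xfs force with -f
        fi
        mkfs."$fstype" $force "$dev_name"
    }

A call such as make_filesystem xfs /dev/nvme0n1p1 then matches the invocation traced just above.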
00:08:25.001 03:40:43 -- common/autotest_common.sh@913 -- # mkfs.xfs -f /dev/nvme0n1p1 00:08:25.001 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:08:25.001 = sectsz=512 attr=2, projid32bit=1 00:08:25.001 = crc=1 finobt=1, sparse=1, rmapbt=0 00:08:25.001 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:08:25.001 data = bsize=4096 blocks=130560, imaxpct=25 00:08:25.001 = sunit=0 swidth=0 blks 00:08:25.001 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:08:25.001 log =internal log bsize=4096 blocks=16384, version=2 00:08:25.001 = sectsz=512 sunit=0 blks, lazy-count=1 00:08:25.001 realtime =none extsz=4096 blocks=0, rtextents=0 00:08:25.935 Discarding blocks...Done. 00:08:25.935 03:40:44 -- common/autotest_common.sh@921 -- # return 0 00:08:25.935 03:40:44 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:28.462 03:40:46 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:28.462 03:40:46 -- target/filesystem.sh@25 -- # sync 00:08:28.462 03:40:46 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:28.462 03:40:46 -- target/filesystem.sh@27 -- # sync 00:08:28.462 03:40:46 -- target/filesystem.sh@29 -- # i=0 00:08:28.462 03:40:46 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:28.462 03:40:46 -- target/filesystem.sh@37 -- # kill -0 2283246 00:08:28.462 03:40:46 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:28.462 03:40:46 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:28.462 03:40:46 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:28.462 03:40:46 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:28.462 00:08:28.462 real 0m3.122s 00:08:28.462 user 0m0.012s 00:08:28.462 sys 0m0.069s 00:08:28.462 03:40:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:28.462 03:40:46 -- common/autotest_common.sh@10 -- # set +x 00:08:28.462 ************************************ 00:08:28.462 END TEST filesystem_in_capsule_xfs 00:08:28.462 ************************************ 00:08:28.462 03:40:46 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:08:28.462 03:40:46 -- target/filesystem.sh@93 -- # sync 00:08:28.462 03:40:47 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:28.462 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:28.462 03:40:47 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:28.462 03:40:47 -- common/autotest_common.sh@1198 -- # local i=0 00:08:28.462 03:40:47 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:08:28.462 03:40:47 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:28.462 03:40:47 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:08:28.462 03:40:47 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:28.462 03:40:47 -- common/autotest_common.sh@1210 -- # return 0 00:08:28.462 03:40:47 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:28.462 03:40:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:28.462 03:40:47 -- common/autotest_common.sh@10 -- # set +x 00:08:28.462 03:40:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:28.462 03:40:47 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:08:28.462 03:40:47 -- target/filesystem.sh@101 -- # killprocess 2283246 00:08:28.462 03:40:47 -- common/autotest_common.sh@926 -- # '[' -z 2283246 ']' 00:08:28.462 03:40:47 -- common/autotest_common.sh@930 -- # kill -0 2283246 
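Each filesystem variant ends with the same teardown, traced above for pid 2283246. A short sketch of those steps (rpc.py again standing in for the test's rpc_cmd wrapper):

    flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1      # drop the test partition while holding the device lock
    sync
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    kill 2283246 && wait 2283246                         # killprocess: stop nvmf_tgt and reap it

The wait works here because nvmf_tgt was launched by the same test shell; killprocess also checks the process name (reactor_0) before sending the signal, as the ps line in the trace shows.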
00:08:28.462 03:40:47 -- common/autotest_common.sh@931 -- # uname 00:08:28.462 03:40:47 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:28.462 03:40:47 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2283246 00:08:28.462 03:40:47 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:08:28.462 03:40:47 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:08:28.462 03:40:47 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2283246' 00:08:28.462 killing process with pid 2283246 00:08:28.462 03:40:47 -- common/autotest_common.sh@945 -- # kill 2283246 00:08:28.462 03:40:47 -- common/autotest_common.sh@950 -- # wait 2283246 00:08:28.721 03:40:47 -- target/filesystem.sh@102 -- # nvmfpid= 00:08:28.721 00:08:28.721 real 0m12.095s 00:08:28.721 user 0m46.672s 00:08:28.721 sys 0m1.773s 00:08:28.721 03:40:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:28.721 03:40:47 -- common/autotest_common.sh@10 -- # set +x 00:08:28.721 ************************************ 00:08:28.721 END TEST nvmf_filesystem_in_capsule 00:08:28.721 ************************************ 00:08:28.721 03:40:47 -- target/filesystem.sh@108 -- # nvmftestfini 00:08:28.721 03:40:47 -- nvmf/common.sh@476 -- # nvmfcleanup 00:08:28.721 03:40:47 -- nvmf/common.sh@116 -- # sync 00:08:28.721 03:40:47 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:08:28.721 03:40:47 -- nvmf/common.sh@119 -- # set +e 00:08:28.721 03:40:47 -- nvmf/common.sh@120 -- # for i in {1..20} 00:08:28.721 03:40:47 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:08:28.721 rmmod nvme_tcp 00:08:28.721 rmmod nvme_fabrics 00:08:28.721 rmmod nvme_keyring 00:08:28.721 03:40:47 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:08:28.721 03:40:47 -- nvmf/common.sh@123 -- # set -e 00:08:28.981 03:40:47 -- nvmf/common.sh@124 -- # return 0 00:08:28.981 03:40:47 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:08:28.981 03:40:47 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:08:28.981 03:40:47 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:08:28.981 03:40:47 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:08:28.981 03:40:47 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:28.981 03:40:47 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:08:28.981 03:40:47 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:28.981 03:40:47 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:28.981 03:40:47 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:30.941 03:40:49 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:08:30.941 00:08:30.941 real 0m28.324s 00:08:30.941 user 1m33.019s 00:08:30.941 sys 0m5.084s 00:08:30.941 03:40:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:30.941 03:40:49 -- common/autotest_common.sh@10 -- # set +x 00:08:30.941 ************************************ 00:08:30.941 END TEST nvmf_filesystem 00:08:30.941 ************************************ 00:08:30.941 03:40:49 -- nvmf/nvmf.sh@25 -- # run_test nvmf_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:08:30.941 03:40:49 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:08:30.941 03:40:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:30.942 03:40:49 -- common/autotest_common.sh@10 -- # set +x 00:08:30.942 ************************************ 00:08:30.942 START TEST nvmf_discovery 00:08:30.942 ************************************ 00:08:30.942 
03:40:49 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:08:30.942 * Looking for test storage... 00:08:30.942 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:30.942 03:40:49 -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:30.942 03:40:49 -- nvmf/common.sh@7 -- # uname -s 00:08:30.942 03:40:49 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:30.942 03:40:49 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:30.942 03:40:49 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:30.942 03:40:49 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:30.942 03:40:49 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:30.942 03:40:49 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:30.942 03:40:49 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:30.942 03:40:49 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:30.942 03:40:49 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:30.942 03:40:49 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:30.942 03:40:49 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:30.942 03:40:49 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:30.942 03:40:49 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:30.942 03:40:49 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:30.942 03:40:49 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:30.942 03:40:49 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:30.942 03:40:49 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:30.942 03:40:49 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:30.942 03:40:49 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:30.942 03:40:49 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:30.942 03:40:49 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:30.942 03:40:49 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:30.942 03:40:49 -- paths/export.sh@5 -- # export PATH 00:08:30.942 03:40:49 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:30.942 03:40:49 -- nvmf/common.sh@46 -- # : 0 00:08:30.942 03:40:49 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:30.942 03:40:49 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:30.942 03:40:49 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:30.942 03:40:49 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:30.942 03:40:49 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:30.942 03:40:49 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:08:30.942 03:40:49 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:30.942 03:40:49 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:30.942 03:40:49 -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:08:30.942 03:40:49 -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:08:30.942 03:40:49 -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:08:30.942 03:40:49 -- target/discovery.sh@15 -- # hash nvme 00:08:30.942 03:40:49 -- target/discovery.sh@20 -- # nvmftestinit 00:08:30.942 03:40:49 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:08:30.942 03:40:49 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:30.942 03:40:49 -- nvmf/common.sh@436 -- # prepare_net_devs 00:08:30.942 03:40:49 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:08:30.942 03:40:49 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:08:30.942 03:40:49 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:30.942 03:40:49 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:30.942 03:40:49 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:30.942 03:40:49 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:08:30.942 03:40:49 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:08:30.942 03:40:49 -- nvmf/common.sh@284 -- # xtrace_disable 00:08:30.942 03:40:49 -- common/autotest_common.sh@10 -- # set +x 00:08:32.843 03:40:51 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:08:32.843 03:40:51 -- nvmf/common.sh@290 -- # pci_devs=() 00:08:32.843 03:40:51 -- nvmf/common.sh@290 -- # local -a pci_devs 00:08:32.843 03:40:51 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:08:32.844 03:40:51 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:08:32.844 03:40:51 -- nvmf/common.sh@292 -- # pci_drivers=() 00:08:32.844 03:40:51 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:08:32.844 03:40:51 -- 
nvmf/common.sh@294 -- # net_devs=() 00:08:32.844 03:40:51 -- nvmf/common.sh@294 -- # local -ga net_devs 00:08:32.844 03:40:51 -- nvmf/common.sh@295 -- # e810=() 00:08:32.844 03:40:51 -- nvmf/common.sh@295 -- # local -ga e810 00:08:32.844 03:40:51 -- nvmf/common.sh@296 -- # x722=() 00:08:32.844 03:40:51 -- nvmf/common.sh@296 -- # local -ga x722 00:08:32.844 03:40:51 -- nvmf/common.sh@297 -- # mlx=() 00:08:32.844 03:40:51 -- nvmf/common.sh@297 -- # local -ga mlx 00:08:32.844 03:40:51 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:32.844 03:40:51 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:32.844 03:40:51 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:32.844 03:40:51 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:32.844 03:40:51 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:32.844 03:40:51 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:32.844 03:40:51 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:32.844 03:40:51 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:32.844 03:40:51 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:32.844 03:40:51 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:32.844 03:40:51 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:32.844 03:40:51 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:08:32.844 03:40:51 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:08:32.844 03:40:51 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:08:32.844 03:40:51 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:08:32.844 03:40:51 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:08:32.844 03:40:51 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:08:32.844 03:40:51 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:08:32.844 03:40:51 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:32.844 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:32.844 03:40:51 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:08:32.844 03:40:51 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:08:32.844 03:40:51 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:32.844 03:40:51 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:32.844 03:40:51 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:08:32.844 03:40:51 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:08:32.844 03:40:51 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:32.844 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:32.844 03:40:51 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:08:32.844 03:40:51 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:08:32.844 03:40:51 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:32.844 03:40:51 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:32.844 03:40:51 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:08:32.844 03:40:51 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:08:32.844 03:40:51 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:08:32.844 03:40:51 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:08:32.844 03:40:51 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:08:32.844 03:40:51 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:32.844 03:40:51 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:08:32.844 03:40:51 -- nvmf/common.sh@387 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:32.844 03:40:51 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:32.844 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:32.844 03:40:51 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:08:32.844 03:40:51 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:08:32.844 03:40:51 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:32.844 03:40:51 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:08:32.844 03:40:51 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:32.844 03:40:51 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:32.844 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:32.844 03:40:51 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:08:32.844 03:40:51 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:08:32.844 03:40:51 -- nvmf/common.sh@402 -- # is_hw=yes 00:08:32.844 03:40:51 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:08:32.844 03:40:51 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:08:32.844 03:40:51 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:08:32.844 03:40:51 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:32.844 03:40:51 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:32.844 03:40:51 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:32.844 03:40:51 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:08:32.844 03:40:51 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:32.844 03:40:51 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:32.844 03:40:51 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:08:32.844 03:40:51 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:32.844 03:40:51 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:32.844 03:40:51 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:08:32.844 03:40:51 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:08:32.844 03:40:51 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:08:32.844 03:40:51 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:33.101 03:40:51 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:33.101 03:40:51 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:33.101 03:40:51 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:08:33.101 03:40:51 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:33.101 03:40:51 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:33.101 03:40:51 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:33.101 03:40:51 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:08:33.101 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:33.101 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.124 ms 00:08:33.101 00:08:33.101 --- 10.0.0.2 ping statistics --- 00:08:33.101 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:33.101 rtt min/avg/max/mdev = 0.124/0.124/0.124/0.000 ms 00:08:33.101 03:40:51 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:33.101 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:33.101 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.146 ms 00:08:33.101 00:08:33.101 --- 10.0.0.1 ping statistics --- 00:08:33.101 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:33.101 rtt min/avg/max/mdev = 0.146/0.146/0.146/0.000 ms 00:08:33.101 03:40:51 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:33.101 03:40:51 -- nvmf/common.sh@410 -- # return 0 00:08:33.101 03:40:51 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:08:33.101 03:40:51 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:33.101 03:40:51 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:08:33.101 03:40:51 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:08:33.101 03:40:51 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:33.102 03:40:51 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:08:33.102 03:40:51 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:08:33.102 03:40:51 -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:08:33.102 03:40:51 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:33.102 03:40:51 -- common/autotest_common.sh@712 -- # xtrace_disable 00:08:33.102 03:40:51 -- common/autotest_common.sh@10 -- # set +x 00:08:33.102 03:40:51 -- nvmf/common.sh@469 -- # nvmfpid=2286774 00:08:33.102 03:40:51 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:33.102 03:40:51 -- nvmf/common.sh@470 -- # waitforlisten 2286774 00:08:33.102 03:40:51 -- common/autotest_common.sh@819 -- # '[' -z 2286774 ']' 00:08:33.102 03:40:51 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:33.102 03:40:51 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:33.102 03:40:51 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:33.102 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:33.102 03:40:51 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:33.102 03:40:51 -- common/autotest_common.sh@10 -- # set +x 00:08:33.102 [2024-07-14 03:40:51.970941] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:08:33.102 [2024-07-14 03:40:51.971023] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:33.102 EAL: No free 2048 kB hugepages reported on node 1 00:08:33.102 [2024-07-14 03:40:52.041588] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:33.358 [2024-07-14 03:40:52.136954] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:33.358 [2024-07-14 03:40:52.137114] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:33.358 [2024-07-14 03:40:52.137134] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:33.358 [2024-07-14 03:40:52.137149] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
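The discovery test reuses the split topology set up earlier in the trace: of the two NIC ports found under 0000:0a:00.0 / 0000:0a:00.1 (renamed cvl_0_0 and cvl_0_1), cvl_0_0 is moved into a network namespace to host the target at 10.0.0.2 while cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1. A rough sketch condensed from the ip/iptables lines above:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                             # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                                   # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0     # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT          # open the NVMe/TCP port
    ping -c 1 10.0.0.2                                                     # the pings above verify both directions
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

This is also why nvmf_tgt is launched through "ip netns exec cvl_0_0_ns_spdk" in the waitforlisten line above.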
00:08:33.358 [2024-07-14 03:40:52.137224] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:33.358 [2024-07-14 03:40:52.137252] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:33.358 [2024-07-14 03:40:52.137293] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:33.358 [2024-07-14 03:40:52.137297] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:34.289 03:40:52 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:34.289 03:40:52 -- common/autotest_common.sh@852 -- # return 0 00:08:34.289 03:40:52 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:34.289 03:40:52 -- common/autotest_common.sh@718 -- # xtrace_disable 00:08:34.289 03:40:52 -- common/autotest_common.sh@10 -- # set +x 00:08:34.289 03:40:52 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:34.289 03:40:52 -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:34.289 03:40:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:34.289 03:40:52 -- common/autotest_common.sh@10 -- # set +x 00:08:34.289 [2024-07-14 03:40:52.976676] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:34.289 03:40:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:34.289 03:40:52 -- target/discovery.sh@26 -- # seq 1 4 00:08:34.289 03:40:52 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:34.289 03:40:52 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:08:34.289 03:40:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:34.289 03:40:52 -- common/autotest_common.sh@10 -- # set +x 00:08:34.289 Null1 00:08:34.289 03:40:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:34.289 03:40:52 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:34.289 03:40:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:34.289 03:40:52 -- common/autotest_common.sh@10 -- # set +x 00:08:34.289 03:40:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:34.289 03:40:53 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:08:34.289 03:40:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:34.289 03:40:53 -- common/autotest_common.sh@10 -- # set +x 00:08:34.289 03:40:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:34.289 03:40:53 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:34.289 03:40:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:34.289 03:40:53 -- common/autotest_common.sh@10 -- # set +x 00:08:34.289 [2024-07-14 03:40:53.016980] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:34.289 03:40:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:34.289 03:40:53 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:34.290 03:40:53 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:08:34.290 03:40:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:34.290 03:40:53 -- common/autotest_common.sh@10 -- # set +x 00:08:34.290 Null2 00:08:34.290 03:40:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:34.290 03:40:53 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:08:34.290 03:40:53 -- 
common/autotest_common.sh@551 -- # xtrace_disable 00:08:34.290 03:40:53 -- common/autotest_common.sh@10 -- # set +x 00:08:34.290 03:40:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:34.290 03:40:53 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:08:34.290 03:40:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:34.290 03:40:53 -- common/autotest_common.sh@10 -- # set +x 00:08:34.290 03:40:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:34.290 03:40:53 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:08:34.290 03:40:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:34.290 03:40:53 -- common/autotest_common.sh@10 -- # set +x 00:08:34.290 03:40:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:34.290 03:40:53 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:34.290 03:40:53 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:08:34.290 03:40:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:34.290 03:40:53 -- common/autotest_common.sh@10 -- # set +x 00:08:34.290 Null3 00:08:34.290 03:40:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:34.290 03:40:53 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:08:34.290 03:40:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:34.290 03:40:53 -- common/autotest_common.sh@10 -- # set +x 00:08:34.290 03:40:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:34.290 03:40:53 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:08:34.290 03:40:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:34.290 03:40:53 -- common/autotest_common.sh@10 -- # set +x 00:08:34.290 03:40:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:34.290 03:40:53 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:08:34.290 03:40:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:34.290 03:40:53 -- common/autotest_common.sh@10 -- # set +x 00:08:34.290 03:40:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:34.290 03:40:53 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:34.290 03:40:53 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:08:34.290 03:40:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:34.290 03:40:53 -- common/autotest_common.sh@10 -- # set +x 00:08:34.290 Null4 00:08:34.290 03:40:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:34.290 03:40:53 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:08:34.290 03:40:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:34.290 03:40:53 -- common/autotest_common.sh@10 -- # set +x 00:08:34.290 03:40:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:34.290 03:40:53 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:08:34.290 03:40:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:34.290 03:40:53 -- common/autotest_common.sh@10 -- # set +x 00:08:34.290 03:40:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:34.290 03:40:53 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:08:34.290 
03:40:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:34.290 03:40:53 -- common/autotest_common.sh@10 -- # set +x 00:08:34.290 03:40:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:34.290 03:40:53 -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:34.290 03:40:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:34.290 03:40:53 -- common/autotest_common.sh@10 -- # set +x 00:08:34.290 03:40:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:34.290 03:40:53 -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:08:34.290 03:40:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:34.290 03:40:53 -- common/autotest_common.sh@10 -- # set +x 00:08:34.290 03:40:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:34.290 03:40:53 -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:08:34.290 00:08:34.290 Discovery Log Number of Records 6, Generation counter 6 00:08:34.290 =====Discovery Log Entry 0====== 00:08:34.290 trtype: tcp 00:08:34.290 adrfam: ipv4 00:08:34.290 subtype: current discovery subsystem 00:08:34.290 treq: not required 00:08:34.290 portid: 0 00:08:34.290 trsvcid: 4420 00:08:34.290 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:34.290 traddr: 10.0.0.2 00:08:34.290 eflags: explicit discovery connections, duplicate discovery information 00:08:34.290 sectype: none 00:08:34.290 =====Discovery Log Entry 1====== 00:08:34.290 trtype: tcp 00:08:34.290 adrfam: ipv4 00:08:34.290 subtype: nvme subsystem 00:08:34.290 treq: not required 00:08:34.290 portid: 0 00:08:34.290 trsvcid: 4420 00:08:34.290 subnqn: nqn.2016-06.io.spdk:cnode1 00:08:34.290 traddr: 10.0.0.2 00:08:34.290 eflags: none 00:08:34.290 sectype: none 00:08:34.290 =====Discovery Log Entry 2====== 00:08:34.290 trtype: tcp 00:08:34.290 adrfam: ipv4 00:08:34.290 subtype: nvme subsystem 00:08:34.290 treq: not required 00:08:34.290 portid: 0 00:08:34.290 trsvcid: 4420 00:08:34.290 subnqn: nqn.2016-06.io.spdk:cnode2 00:08:34.290 traddr: 10.0.0.2 00:08:34.290 eflags: none 00:08:34.290 sectype: none 00:08:34.290 =====Discovery Log Entry 3====== 00:08:34.290 trtype: tcp 00:08:34.290 adrfam: ipv4 00:08:34.290 subtype: nvme subsystem 00:08:34.290 treq: not required 00:08:34.290 portid: 0 00:08:34.290 trsvcid: 4420 00:08:34.290 subnqn: nqn.2016-06.io.spdk:cnode3 00:08:34.290 traddr: 10.0.0.2 00:08:34.290 eflags: none 00:08:34.290 sectype: none 00:08:34.290 =====Discovery Log Entry 4====== 00:08:34.290 trtype: tcp 00:08:34.290 adrfam: ipv4 00:08:34.290 subtype: nvme subsystem 00:08:34.290 treq: not required 00:08:34.290 portid: 0 00:08:34.290 trsvcid: 4420 00:08:34.290 subnqn: nqn.2016-06.io.spdk:cnode4 00:08:34.290 traddr: 10.0.0.2 00:08:34.290 eflags: none 00:08:34.290 sectype: none 00:08:34.290 =====Discovery Log Entry 5====== 00:08:34.290 trtype: tcp 00:08:34.290 adrfam: ipv4 00:08:34.290 subtype: discovery subsystem referral 00:08:34.290 treq: not required 00:08:34.290 portid: 0 00:08:34.290 trsvcid: 4430 00:08:34.290 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:34.290 traddr: 10.0.0.2 00:08:34.290 eflags: none 00:08:34.290 sectype: none 00:08:34.290 03:40:53 -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:08:34.290 Perform nvmf subsystem discovery via RPC 00:08:34.290 03:40:53 -- 
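The six records above are the discovery page for 10.0.0.2:4420: the current discovery subsystem, the four cnode subsystems, and the 10.0.0.2:4430 referral added just before. When the same page has to be post-processed, the JSON output mode is easier to work with; a sketch using the filter referrals.sh applies further down (host NQN/ID are the ones generated for this run):

    # Print the subsystem NQN and address of every record that is not the
    # current discovery subsystem (same jq filter referrals.sh uses below).
    nvme discover -t tcp -a 10.0.0.2 -s 4420 -o json \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
        --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 |
      jq -r '.records[] | select(.subtype != "current discovery subsystem") | "\(.subnqn) \(.traddr)"'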
target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:08:34.290 03:40:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:34.290 03:40:53 -- common/autotest_common.sh@10 -- # set +x 00:08:34.290 [2024-07-14 03:40:53.221461] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:08:34.290 [ 00:08:34.290 { 00:08:34.290 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:08:34.290 "subtype": "Discovery", 00:08:34.290 "listen_addresses": [ 00:08:34.290 { 00:08:34.290 "transport": "TCP", 00:08:34.290 "trtype": "TCP", 00:08:34.290 "adrfam": "IPv4", 00:08:34.290 "traddr": "10.0.0.2", 00:08:34.290 "trsvcid": "4420" 00:08:34.290 } 00:08:34.290 ], 00:08:34.290 "allow_any_host": true, 00:08:34.290 "hosts": [] 00:08:34.290 }, 00:08:34.290 { 00:08:34.290 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:08:34.290 "subtype": "NVMe", 00:08:34.290 "listen_addresses": [ 00:08:34.290 { 00:08:34.290 "transport": "TCP", 00:08:34.290 "trtype": "TCP", 00:08:34.290 "adrfam": "IPv4", 00:08:34.290 "traddr": "10.0.0.2", 00:08:34.290 "trsvcid": "4420" 00:08:34.290 } 00:08:34.290 ], 00:08:34.290 "allow_any_host": true, 00:08:34.290 "hosts": [], 00:08:34.290 "serial_number": "SPDK00000000000001", 00:08:34.290 "model_number": "SPDK bdev Controller", 00:08:34.290 "max_namespaces": 32, 00:08:34.290 "min_cntlid": 1, 00:08:34.290 "max_cntlid": 65519, 00:08:34.290 "namespaces": [ 00:08:34.290 { 00:08:34.290 "nsid": 1, 00:08:34.290 "bdev_name": "Null1", 00:08:34.290 "name": "Null1", 00:08:34.290 "nguid": "60CFA89F414041238BFC3DBD442839A5", 00:08:34.290 "uuid": "60cfa89f-4140-4123-8bfc-3dbd442839a5" 00:08:34.290 } 00:08:34.290 ] 00:08:34.290 }, 00:08:34.290 { 00:08:34.550 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:08:34.550 "subtype": "NVMe", 00:08:34.550 "listen_addresses": [ 00:08:34.550 { 00:08:34.550 "transport": "TCP", 00:08:34.550 "trtype": "TCP", 00:08:34.550 "adrfam": "IPv4", 00:08:34.550 "traddr": "10.0.0.2", 00:08:34.550 "trsvcid": "4420" 00:08:34.550 } 00:08:34.550 ], 00:08:34.550 "allow_any_host": true, 00:08:34.550 "hosts": [], 00:08:34.550 "serial_number": "SPDK00000000000002", 00:08:34.550 "model_number": "SPDK bdev Controller", 00:08:34.550 "max_namespaces": 32, 00:08:34.550 "min_cntlid": 1, 00:08:34.550 "max_cntlid": 65519, 00:08:34.550 "namespaces": [ 00:08:34.550 { 00:08:34.550 "nsid": 1, 00:08:34.550 "bdev_name": "Null2", 00:08:34.550 "name": "Null2", 00:08:34.550 "nguid": "1691F3A093A249E99AD1E5112BED35C3", 00:08:34.550 "uuid": "1691f3a0-93a2-49e9-9ad1-e5112bed35c3" 00:08:34.550 } 00:08:34.550 ] 00:08:34.550 }, 00:08:34.550 { 00:08:34.550 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:08:34.550 "subtype": "NVMe", 00:08:34.550 "listen_addresses": [ 00:08:34.550 { 00:08:34.550 "transport": "TCP", 00:08:34.550 "trtype": "TCP", 00:08:34.550 "adrfam": "IPv4", 00:08:34.550 "traddr": "10.0.0.2", 00:08:34.550 "trsvcid": "4420" 00:08:34.550 } 00:08:34.550 ], 00:08:34.550 "allow_any_host": true, 00:08:34.550 "hosts": [], 00:08:34.550 "serial_number": "SPDK00000000000003", 00:08:34.550 "model_number": "SPDK bdev Controller", 00:08:34.550 "max_namespaces": 32, 00:08:34.550 "min_cntlid": 1, 00:08:34.550 "max_cntlid": 65519, 00:08:34.550 "namespaces": [ 00:08:34.550 { 00:08:34.550 "nsid": 1, 00:08:34.550 "bdev_name": "Null3", 00:08:34.550 "name": "Null3", 00:08:34.550 "nguid": "F067B9DAD517429685E32B6118FE0CD7", 00:08:34.550 "uuid": "f067b9da-d517-4296-85e3-2b6118fe0cd7" 00:08:34.550 } 00:08:34.550 ] 
00:08:34.550 }, 00:08:34.550 { 00:08:34.550 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:08:34.550 "subtype": "NVMe", 00:08:34.550 "listen_addresses": [ 00:08:34.550 { 00:08:34.550 "transport": "TCP", 00:08:34.550 "trtype": "TCP", 00:08:34.550 "adrfam": "IPv4", 00:08:34.550 "traddr": "10.0.0.2", 00:08:34.550 "trsvcid": "4420" 00:08:34.550 } 00:08:34.550 ], 00:08:34.550 "allow_any_host": true, 00:08:34.550 "hosts": [], 00:08:34.550 "serial_number": "SPDK00000000000004", 00:08:34.550 "model_number": "SPDK bdev Controller", 00:08:34.550 "max_namespaces": 32, 00:08:34.550 "min_cntlid": 1, 00:08:34.550 "max_cntlid": 65519, 00:08:34.550 "namespaces": [ 00:08:34.550 { 00:08:34.550 "nsid": 1, 00:08:34.550 "bdev_name": "Null4", 00:08:34.550 "name": "Null4", 00:08:34.550 "nguid": "934D74D7FCD24E49ABC61B521E9B95A8", 00:08:34.550 "uuid": "934d74d7-fcd2-4e49-abc6-1b521e9b95a8" 00:08:34.550 } 00:08:34.550 ] 00:08:34.550 } 00:08:34.550 ] 00:08:34.550 03:40:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:34.550 03:40:53 -- target/discovery.sh@42 -- # seq 1 4 00:08:34.550 03:40:53 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:34.550 03:40:53 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:34.550 03:40:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:34.550 03:40:53 -- common/autotest_common.sh@10 -- # set +x 00:08:34.550 03:40:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:34.550 03:40:53 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:08:34.550 03:40:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:34.550 03:40:53 -- common/autotest_common.sh@10 -- # set +x 00:08:34.550 03:40:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:34.550 03:40:53 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:34.550 03:40:53 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:08:34.550 03:40:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:34.550 03:40:53 -- common/autotest_common.sh@10 -- # set +x 00:08:34.550 03:40:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:34.550 03:40:53 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:08:34.550 03:40:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:34.550 03:40:53 -- common/autotest_common.sh@10 -- # set +x 00:08:34.550 03:40:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:34.550 03:40:53 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:34.550 03:40:53 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:08:34.550 03:40:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:34.550 03:40:53 -- common/autotest_common.sh@10 -- # set +x 00:08:34.550 03:40:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:34.550 03:40:53 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:08:34.550 03:40:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:34.550 03:40:53 -- common/autotest_common.sh@10 -- # set +x 00:08:34.550 03:40:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:34.550 03:40:53 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:34.550 03:40:53 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:08:34.550 03:40:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:34.550 03:40:53 -- common/autotest_common.sh@10 -- # set +x 00:08:34.550 03:40:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 
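nvmf_get_subsystems returns the same picture over RPC: the discovery subsystem plus cnode1..cnode4, each carrying a single Null namespace, and the teardown that follows simply walks that list back. A sketch of one iteration with the rpc.py calls the script uses (default RPC socket assumed):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    rpc="$SPDK/scripts/rpc.py"

    # List the NQNs currently exported, then remove cnode1 and its backing bdev.
    "$rpc" nvmf_get_subsystems | jq -r '.[].nqn'
    "$rpc" nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    "$rpc" bdev_null_delete Null1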
00:08:34.550 03:40:53 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:08:34.550 03:40:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:34.550 03:40:53 -- common/autotest_common.sh@10 -- # set +x 00:08:34.550 03:40:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:34.550 03:40:53 -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:08:34.550 03:40:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:34.550 03:40:53 -- common/autotest_common.sh@10 -- # set +x 00:08:34.550 03:40:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:34.550 03:40:53 -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:08:34.550 03:40:53 -- target/discovery.sh@49 -- # jq -r '.[].name' 00:08:34.550 03:40:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:34.550 03:40:53 -- common/autotest_common.sh@10 -- # set +x 00:08:34.550 03:40:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:34.550 03:40:53 -- target/discovery.sh@49 -- # check_bdevs= 00:08:34.550 03:40:53 -- target/discovery.sh@50 -- # '[' -n '' ']' 00:08:34.550 03:40:53 -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:08:34.550 03:40:53 -- target/discovery.sh@57 -- # nvmftestfini 00:08:34.550 03:40:53 -- nvmf/common.sh@476 -- # nvmfcleanup 00:08:34.550 03:40:53 -- nvmf/common.sh@116 -- # sync 00:08:34.550 03:40:53 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:08:34.550 03:40:53 -- nvmf/common.sh@119 -- # set +e 00:08:34.550 03:40:53 -- nvmf/common.sh@120 -- # for i in {1..20} 00:08:34.550 03:40:53 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:08:34.550 rmmod nvme_tcp 00:08:34.550 rmmod nvme_fabrics 00:08:34.550 rmmod nvme_keyring 00:08:34.550 03:40:53 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:08:34.550 03:40:53 -- nvmf/common.sh@123 -- # set -e 00:08:34.550 03:40:53 -- nvmf/common.sh@124 -- # return 0 00:08:34.550 03:40:53 -- nvmf/common.sh@477 -- # '[' -n 2286774 ']' 00:08:34.550 03:40:53 -- nvmf/common.sh@478 -- # killprocess 2286774 00:08:34.550 03:40:53 -- common/autotest_common.sh@926 -- # '[' -z 2286774 ']' 00:08:34.550 03:40:53 -- common/autotest_common.sh@930 -- # kill -0 2286774 00:08:34.550 03:40:53 -- common/autotest_common.sh@931 -- # uname 00:08:34.550 03:40:53 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:34.550 03:40:53 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2286774 00:08:34.550 03:40:53 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:08:34.550 03:40:53 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:08:34.550 03:40:53 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2286774' 00:08:34.550 killing process with pid 2286774 00:08:34.550 03:40:53 -- common/autotest_common.sh@945 -- # kill 2286774 00:08:34.550 [2024-07-14 03:40:53.423436] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:08:34.550 03:40:53 -- common/autotest_common.sh@950 -- # wait 2286774 00:08:34.808 03:40:53 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:08:34.808 03:40:53 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:08:34.808 03:40:53 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:08:34.808 03:40:53 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:34.808 03:40:53 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:08:34.808 03:40:53 -- nvmf/common.sh@616 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:08:34.808 03:40:53 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:34.808 03:40:53 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:37.344 03:40:55 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:08:37.344 00:08:37.344 real 0m5.973s 00:08:37.344 user 0m7.059s 00:08:37.344 sys 0m1.801s 00:08:37.344 03:40:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:37.344 03:40:55 -- common/autotest_common.sh@10 -- # set +x 00:08:37.344 ************************************ 00:08:37.344 END TEST nvmf_discovery 00:08:37.344 ************************************ 00:08:37.344 03:40:55 -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:08:37.344 03:40:55 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:08:37.344 03:40:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:37.344 03:40:55 -- common/autotest_common.sh@10 -- # set +x 00:08:37.344 ************************************ 00:08:37.344 START TEST nvmf_referrals 00:08:37.344 ************************************ 00:08:37.344 03:40:55 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:08:37.344 * Looking for test storage... 00:08:37.344 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:37.344 03:40:55 -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:37.344 03:40:55 -- nvmf/common.sh@7 -- # uname -s 00:08:37.344 03:40:55 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:37.344 03:40:55 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:37.344 03:40:55 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:37.344 03:40:55 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:37.344 03:40:55 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:37.344 03:40:55 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:37.344 03:40:55 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:37.344 03:40:55 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:37.344 03:40:55 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:37.344 03:40:55 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:37.344 03:40:55 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:37.344 03:40:55 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:37.344 03:40:55 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:37.344 03:40:55 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:37.344 03:40:55 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:37.344 03:40:55 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:37.344 03:40:55 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:37.344 03:40:55 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:37.344 03:40:55 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:37.344 03:40:55 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:37.344 03:40:55 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:37.344 03:40:55 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:37.344 03:40:55 -- paths/export.sh@5 -- # export PATH 00:08:37.344 03:40:55 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:37.344 03:40:55 -- nvmf/common.sh@46 -- # : 0 00:08:37.344 03:40:55 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:37.344 03:40:55 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:37.344 03:40:55 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:37.344 03:40:55 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:37.344 03:40:55 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:37.344 03:40:55 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:08:37.344 03:40:55 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:37.344 03:40:55 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:37.344 03:40:55 -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:08:37.344 03:40:55 -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:08:37.344 03:40:55 -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:08:37.344 03:40:55 -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:08:37.344 03:40:55 -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:08:37.344 03:40:55 -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:08:37.344 03:40:55 -- target/referrals.sh@37 -- # nvmftestinit 00:08:37.344 03:40:55 -- nvmf/common.sh@429 -- # '[' 
-z tcp ']' 00:08:37.344 03:40:55 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:37.344 03:40:55 -- nvmf/common.sh@436 -- # prepare_net_devs 00:08:37.344 03:40:55 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:08:37.344 03:40:55 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:08:37.344 03:40:55 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:37.344 03:40:55 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:37.344 03:40:55 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:37.344 03:40:55 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:08:37.344 03:40:55 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:08:37.344 03:40:55 -- nvmf/common.sh@284 -- # xtrace_disable 00:08:37.344 03:40:55 -- common/autotest_common.sh@10 -- # set +x 00:08:39.245 03:40:57 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:08:39.245 03:40:57 -- nvmf/common.sh@290 -- # pci_devs=() 00:08:39.245 03:40:57 -- nvmf/common.sh@290 -- # local -a pci_devs 00:08:39.245 03:40:57 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:08:39.245 03:40:57 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:08:39.245 03:40:57 -- nvmf/common.sh@292 -- # pci_drivers=() 00:08:39.245 03:40:57 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:08:39.245 03:40:57 -- nvmf/common.sh@294 -- # net_devs=() 00:08:39.245 03:40:57 -- nvmf/common.sh@294 -- # local -ga net_devs 00:08:39.245 03:40:57 -- nvmf/common.sh@295 -- # e810=() 00:08:39.245 03:40:57 -- nvmf/common.sh@295 -- # local -ga e810 00:08:39.245 03:40:57 -- nvmf/common.sh@296 -- # x722=() 00:08:39.245 03:40:57 -- nvmf/common.sh@296 -- # local -ga x722 00:08:39.245 03:40:57 -- nvmf/common.sh@297 -- # mlx=() 00:08:39.245 03:40:57 -- nvmf/common.sh@297 -- # local -ga mlx 00:08:39.245 03:40:57 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:39.245 03:40:57 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:39.245 03:40:57 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:39.245 03:40:57 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:39.245 03:40:57 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:39.245 03:40:57 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:39.245 03:40:57 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:39.245 03:40:57 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:39.245 03:40:57 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:39.245 03:40:57 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:39.245 03:40:57 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:39.245 03:40:57 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:08:39.245 03:40:57 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:08:39.245 03:40:57 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:08:39.245 03:40:57 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:08:39.245 03:40:57 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:08:39.245 03:40:57 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:08:39.245 03:40:57 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:08:39.245 03:40:57 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:39.245 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:39.245 03:40:57 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:08:39.245 03:40:57 -- 
nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:08:39.245 03:40:57 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:39.245 03:40:57 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:39.245 03:40:57 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:08:39.245 03:40:57 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:08:39.245 03:40:57 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:39.245 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:39.245 03:40:57 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:08:39.245 03:40:57 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:08:39.245 03:40:57 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:39.245 03:40:57 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:39.245 03:40:57 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:08:39.245 03:40:57 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:08:39.245 03:40:57 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:08:39.245 03:40:57 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:08:39.245 03:40:57 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:08:39.245 03:40:57 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:39.245 03:40:57 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:08:39.245 03:40:57 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:39.245 03:40:57 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:39.245 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:39.245 03:40:57 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:08:39.245 03:40:57 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:08:39.245 03:40:57 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:39.245 03:40:57 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:08:39.245 03:40:57 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:39.245 03:40:57 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:39.245 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:39.245 03:40:57 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:08:39.245 03:40:57 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:08:39.245 03:40:57 -- nvmf/common.sh@402 -- # is_hw=yes 00:08:39.245 03:40:57 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:08:39.245 03:40:57 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:08:39.245 03:40:57 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:08:39.245 03:40:57 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:39.245 03:40:57 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:39.245 03:40:57 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:39.245 03:40:57 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:08:39.245 03:40:57 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:39.245 03:40:57 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:39.245 03:40:57 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:08:39.245 03:40:57 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:39.245 03:40:57 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:39.245 03:40:57 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:08:39.245 03:40:57 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:08:39.245 03:40:57 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:08:39.245 03:40:57 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 
00:08:39.245 03:40:57 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:39.246 03:40:57 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:39.246 03:40:57 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:08:39.246 03:40:57 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:39.246 03:40:57 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:39.246 03:40:57 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:39.246 03:40:57 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:08:39.246 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:39.246 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.160 ms 00:08:39.246 00:08:39.246 --- 10.0.0.2 ping statistics --- 00:08:39.246 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:39.246 rtt min/avg/max/mdev = 0.160/0.160/0.160/0.000 ms 00:08:39.246 03:40:57 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:39.246 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:39.246 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.141 ms 00:08:39.246 00:08:39.246 --- 10.0.0.1 ping statistics --- 00:08:39.246 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:39.246 rtt min/avg/max/mdev = 0.141/0.141/0.141/0.000 ms 00:08:39.246 03:40:57 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:39.246 03:40:57 -- nvmf/common.sh@410 -- # return 0 00:08:39.246 03:40:57 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:08:39.246 03:40:57 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:39.246 03:40:57 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:08:39.246 03:40:57 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:08:39.246 03:40:57 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:39.246 03:40:57 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:08:39.246 03:40:57 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:08:39.246 03:40:57 -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:08:39.246 03:40:57 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:39.246 03:40:57 -- common/autotest_common.sh@712 -- # xtrace_disable 00:08:39.246 03:40:57 -- common/autotest_common.sh@10 -- # set +x 00:08:39.246 03:40:57 -- nvmf/common.sh@469 -- # nvmfpid=2289013 00:08:39.246 03:40:57 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:39.246 03:40:57 -- nvmf/common.sh@470 -- # waitforlisten 2289013 00:08:39.246 03:40:57 -- common/autotest_common.sh@819 -- # '[' -z 2289013 ']' 00:08:39.246 03:40:57 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:39.246 03:40:57 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:39.246 03:40:57 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:39.246 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:39.246 03:40:57 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:39.246 03:40:57 -- common/autotest_common.sh@10 -- # set +x 00:08:39.246 [2024-07-14 03:40:58.031788] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
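Before the referrals target comes up, nvmf_tcp_init rebuilds the same two-namespace topology used by the discovery test: cvl_0_0 (target side, 10.0.0.2) moves into cvl_0_0_ns_spdk, cvl_0_1 (initiator side, 10.0.0.1) stays in the root namespace, and the two pings above confirm reachability in both directions. Condensed from the commands in this run (interface names are specific to this host):

    # Target port in its own namespace, initiator port in the root namespace.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up

    # Open the NVMe/TCP port and verify reachability both ways.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1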
00:08:39.246 [2024-07-14 03:40:58.031854] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:39.246 EAL: No free 2048 kB hugepages reported on node 1 00:08:39.246 [2024-07-14 03:40:58.096802] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:39.504 [2024-07-14 03:40:58.189991] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:39.504 [2024-07-14 03:40:58.190136] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:39.504 [2024-07-14 03:40:58.190154] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:39.504 [2024-07-14 03:40:58.190167] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:39.504 [2024-07-14 03:40:58.190219] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:39.504 [2024-07-14 03:40:58.190272] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:39.504 [2024-07-14 03:40:58.190302] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:39.504 [2024-07-14 03:40:58.190303] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:40.069 03:40:59 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:40.069 03:40:59 -- common/autotest_common.sh@852 -- # return 0 00:08:40.069 03:40:59 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:40.069 03:40:59 -- common/autotest_common.sh@718 -- # xtrace_disable 00:08:40.069 03:40:59 -- common/autotest_common.sh@10 -- # set +x 00:08:40.352 03:40:59 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:40.352 03:40:59 -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:40.352 03:40:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:40.352 03:40:59 -- common/autotest_common.sh@10 -- # set +x 00:08:40.352 [2024-07-14 03:40:59.031499] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:40.352 03:40:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:40.352 03:40:59 -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:08:40.352 03:40:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:40.352 03:40:59 -- common/autotest_common.sh@10 -- # set +x 00:08:40.352 [2024-07-14 03:40:59.043681] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:08:40.352 03:40:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:40.352 03:40:59 -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:08:40.352 03:40:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:40.352 03:40:59 -- common/autotest_common.sh@10 -- # set +x 00:08:40.352 03:40:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:40.352 03:40:59 -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:08:40.352 03:40:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:40.352 03:40:59 -- common/autotest_common.sh@10 -- # set +x 00:08:40.352 03:40:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:40.352 03:40:59 -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 
-s 4430 00:08:40.352 03:40:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:40.352 03:40:59 -- common/autotest_common.sh@10 -- # set +x 00:08:40.352 03:40:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:40.352 03:40:59 -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:40.352 03:40:59 -- target/referrals.sh@48 -- # jq length 00:08:40.352 03:40:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:40.352 03:40:59 -- common/autotest_common.sh@10 -- # set +x 00:08:40.352 03:40:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:40.352 03:40:59 -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:08:40.352 03:40:59 -- target/referrals.sh@49 -- # get_referral_ips rpc 00:08:40.352 03:40:59 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:40.352 03:40:59 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:40.352 03:40:59 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:40.352 03:40:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:40.352 03:40:59 -- common/autotest_common.sh@10 -- # set +x 00:08:40.352 03:40:59 -- target/referrals.sh@21 -- # sort 00:08:40.352 03:40:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:40.352 03:40:59 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:40.352 03:40:59 -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:40.352 03:40:59 -- target/referrals.sh@50 -- # get_referral_ips nvme 00:08:40.352 03:40:59 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:40.352 03:40:59 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:40.352 03:40:59 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:40.352 03:40:59 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:40.352 03:40:59 -- target/referrals.sh@26 -- # sort 00:08:40.609 03:40:59 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:40.609 03:40:59 -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:40.609 03:40:59 -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:08:40.609 03:40:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:40.609 03:40:59 -- common/autotest_common.sh@10 -- # set +x 00:08:40.609 03:40:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:40.609 03:40:59 -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:08:40.609 03:40:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:40.609 03:40:59 -- common/autotest_common.sh@10 -- # set +x 00:08:40.609 03:40:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:40.609 03:40:59 -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:08:40.609 03:40:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:40.609 03:40:59 -- common/autotest_common.sh@10 -- # set +x 00:08:40.609 03:40:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:40.609 03:40:59 -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:40.609 03:40:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:40.609 03:40:59 -- 
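referrals.sh drives the discovery service on port 8009: it registers 127.0.0.2/3/4:4430 as referrals and then checks that the RPC view and the on-the-wire discovery page report the same three addresses before removing them again. A sketch of that round trip with the same calls (host NQN/ID as generated for this run):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    # Discovery listener on 8009 plus three referral entries on port 4430.
    "$rpc" nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery
    for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
        "$rpc" nvmf_discovery_add_referral -t tcp -a "$ip" -s 4430
    done

    # The RPC view of the referrals ...
    "$rpc" nvmf_discovery_get_referrals | jq -r '.[].address.traddr' | sort
    # ... should list the same addresses an initiator sees in the discovery log.
    nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
        --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 |
      jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' | sort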
target/referrals.sh@56 -- # jq length 00:08:40.609 03:40:59 -- common/autotest_common.sh@10 -- # set +x 00:08:40.609 03:40:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:40.609 03:40:59 -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:08:40.609 03:40:59 -- target/referrals.sh@57 -- # get_referral_ips nvme 00:08:40.609 03:40:59 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:40.609 03:40:59 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:40.609 03:40:59 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:40.609 03:40:59 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:40.609 03:40:59 -- target/referrals.sh@26 -- # sort 00:08:40.609 03:40:59 -- target/referrals.sh@26 -- # echo 00:08:40.609 03:40:59 -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:08:40.609 03:40:59 -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:08:40.609 03:40:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:40.609 03:40:59 -- common/autotest_common.sh@10 -- # set +x 00:08:40.609 03:40:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:40.609 03:40:59 -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:40.609 03:40:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:40.609 03:40:59 -- common/autotest_common.sh@10 -- # set +x 00:08:40.609 03:40:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:40.609 03:40:59 -- target/referrals.sh@65 -- # get_referral_ips rpc 00:08:40.609 03:40:59 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:40.609 03:40:59 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:40.609 03:40:59 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:40.609 03:40:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:40.866 03:40:59 -- common/autotest_common.sh@10 -- # set +x 00:08:40.866 03:40:59 -- target/referrals.sh@21 -- # sort 00:08:40.867 03:40:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:40.867 03:40:59 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:08:40.867 03:40:59 -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:40.867 03:40:59 -- target/referrals.sh@66 -- # get_referral_ips nvme 00:08:40.867 03:40:59 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:40.867 03:40:59 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:40.867 03:40:59 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:40.867 03:40:59 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:40.867 03:40:59 -- target/referrals.sh@26 -- # sort 00:08:40.867 03:40:59 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:08:40.867 03:40:59 -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:40.867 03:40:59 -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:08:40.867 03:40:59 -- target/referrals.sh@67 -- # jq -r .subnqn 00:08:40.867 03:40:59 -- target/referrals.sh@31 -- # 
local 'subtype=nvme subsystem' 00:08:40.867 03:40:59 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:40.867 03:40:59 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:41.124 03:40:59 -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:08:41.124 03:40:59 -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:08:41.124 03:40:59 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:41.124 03:40:59 -- target/referrals.sh@68 -- # jq -r .subnqn 00:08:41.124 03:40:59 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:41.124 03:40:59 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:41.124 03:41:00 -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:41.124 03:41:00 -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:41.124 03:41:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:41.124 03:41:00 -- common/autotest_common.sh@10 -- # set +x 00:08:41.124 03:41:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:41.124 03:41:00 -- target/referrals.sh@73 -- # get_referral_ips rpc 00:08:41.124 03:41:00 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:41.124 03:41:00 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:41.124 03:41:00 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:41.124 03:41:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:41.124 03:41:00 -- common/autotest_common.sh@10 -- # set +x 00:08:41.124 03:41:00 -- target/referrals.sh@21 -- # sort 00:08:41.124 03:41:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:41.124 03:41:00 -- target/referrals.sh@21 -- # echo 127.0.0.2 00:08:41.124 03:41:00 -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:41.124 03:41:00 -- target/referrals.sh@74 -- # get_referral_ips nvme 00:08:41.124 03:41:00 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:41.124 03:41:00 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:41.124 03:41:00 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:41.124 03:41:00 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:41.125 03:41:00 -- target/referrals.sh@26 -- # sort 00:08:41.382 03:41:00 -- target/referrals.sh@26 -- # echo 127.0.0.2 00:08:41.382 03:41:00 -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:41.382 03:41:00 -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:08:41.382 03:41:00 -- target/referrals.sh@75 -- # jq -r .subnqn 00:08:41.382 03:41:00 -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:41.382 03:41:00 -- target/referrals.sh@33 -- # nvme discover 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:41.382 03:41:00 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:41.382 03:41:00 -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:08:41.382 03:41:00 -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:08:41.382 03:41:00 -- target/referrals.sh@76 -- # jq -r .subnqn 00:08:41.382 03:41:00 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:41.382 03:41:00 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:41.382 03:41:00 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:41.382 03:41:00 -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:41.382 03:41:00 -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:08:41.382 03:41:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:41.382 03:41:00 -- common/autotest_common.sh@10 -- # set +x 00:08:41.382 03:41:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:41.382 03:41:00 -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:41.382 03:41:00 -- target/referrals.sh@82 -- # jq length 00:08:41.382 03:41:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:41.382 03:41:00 -- common/autotest_common.sh@10 -- # set +x 00:08:41.383 03:41:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:41.641 03:41:00 -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:08:41.641 03:41:00 -- target/referrals.sh@83 -- # get_referral_ips nvme 00:08:41.641 03:41:00 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:41.641 03:41:00 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:41.641 03:41:00 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:41.641 03:41:00 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:41.641 03:41:00 -- target/referrals.sh@26 -- # sort 00:08:41.641 03:41:00 -- target/referrals.sh@26 -- # echo 00:08:41.641 03:41:00 -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:08:41.641 03:41:00 -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:08:41.641 03:41:00 -- target/referrals.sh@86 -- # nvmftestfini 00:08:41.641 03:41:00 -- nvmf/common.sh@476 -- # nvmfcleanup 00:08:41.641 03:41:00 -- nvmf/common.sh@116 -- # sync 00:08:41.641 03:41:00 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:08:41.641 03:41:00 -- nvmf/common.sh@119 -- # set +e 00:08:41.641 03:41:00 -- nvmf/common.sh@120 -- # for i in {1..20} 00:08:41.641 03:41:00 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:08:41.641 rmmod nvme_tcp 00:08:41.641 rmmod nvme_fabrics 00:08:41.641 rmmod nvme_keyring 00:08:41.641 03:41:00 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:08:41.641 03:41:00 -- nvmf/common.sh@123 -- # set -e 00:08:41.641 03:41:00 -- nvmf/common.sh@124 -- # return 0 00:08:41.641 03:41:00 -- nvmf/common.sh@477 
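The second half of the test exercises the optional subsystem NQN on a referral: registering 127.0.0.2:4430 with -n nqn.2016-06.io.spdk:cnode1 makes it appear in the discovery log as an "nvme subsystem" record, while -n nqn.2014-08.org.nvmexpress.discovery (or no -n at all) yields a "discovery subsystem referral" record, which is exactly what the jq checks above verify. A condensed sketch:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    host=(--hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
          --hostid=5b23e107-7094-e311-b1cb-001e67a97d55)

    # A referral that names an NVMe subsystem shows up as an "nvme subsystem" record ...
    "$rpc" nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1
    nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json "${host[@]}" |
      jq -r '.records[] | select(.subtype == "nvme subsystem") | .subnqn'

    # ... and is removed by matching the same NQN.
    "$rpc" nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1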
-- # '[' -n 2289013 ']' 00:08:41.641 03:41:00 -- nvmf/common.sh@478 -- # killprocess 2289013 00:08:41.641 03:41:00 -- common/autotest_common.sh@926 -- # '[' -z 2289013 ']' 00:08:41.641 03:41:00 -- common/autotest_common.sh@930 -- # kill -0 2289013 00:08:41.641 03:41:00 -- common/autotest_common.sh@931 -- # uname 00:08:41.641 03:41:00 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:41.641 03:41:00 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2289013 00:08:41.641 03:41:00 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:08:41.641 03:41:00 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:08:41.641 03:41:00 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2289013' 00:08:41.641 killing process with pid 2289013 00:08:41.641 03:41:00 -- common/autotest_common.sh@945 -- # kill 2289013 00:08:41.641 03:41:00 -- common/autotest_common.sh@950 -- # wait 2289013 00:08:41.900 03:41:00 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:08:41.900 03:41:00 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:08:41.900 03:41:00 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:08:41.900 03:41:00 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:41.900 03:41:00 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:08:41.900 03:41:00 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:41.900 03:41:00 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:41.900 03:41:00 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:44.434 03:41:02 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:08:44.434 00:08:44.434 real 0m7.032s 00:08:44.434 user 0m11.568s 00:08:44.434 sys 0m2.223s 00:08:44.434 03:41:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:44.434 03:41:02 -- common/autotest_common.sh@10 -- # set +x 00:08:44.434 ************************************ 00:08:44.434 END TEST nvmf_referrals 00:08:44.434 ************************************ 00:08:44.434 03:41:02 -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:08:44.434 03:41:02 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:08:44.434 03:41:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:44.434 03:41:02 -- common/autotest_common.sh@10 -- # set +x 00:08:44.434 ************************************ 00:08:44.434 START TEST nvmf_connect_disconnect 00:08:44.434 ************************************ 00:08:44.434 03:41:02 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:08:44.434 * Looking for test storage... 
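Both tests finish with the same nvmftestfini sequence seen above: unload the host-side NVMe/TCP modules, kill the nvmf_tgt process, drop the test namespace, and flush the initiator address. A condensed sketch (the explicit ip netns delete is an assumption about what the _remove_spdk_ns helper does; the log only shows the helper being invoked):

    # Host side: unload the kernel initiator modules pulled in for the test.
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics

    # Target side: stop nvmf_tgt, remove the namespace, flush the initiator address.
    kill "$nvmfpid" && wait "$nvmfpid"      # $nvmfpid as recorded at nvmfappstart
    ip netns delete cvl_0_0_ns_spdk         # assumption about _remove_spdk_ns
    ip -4 addr flush cvl_0_1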
00:08:44.434 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:44.434 03:41:02 -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:44.434 03:41:02 -- nvmf/common.sh@7 -- # uname -s 00:08:44.434 03:41:02 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:44.434 03:41:02 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:44.434 03:41:02 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:44.434 03:41:02 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:44.434 03:41:02 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:44.434 03:41:02 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:44.434 03:41:02 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:44.434 03:41:02 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:44.434 03:41:02 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:44.434 03:41:02 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:44.434 03:41:02 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:44.434 03:41:02 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:44.434 03:41:02 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:44.434 03:41:02 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:44.434 03:41:02 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:44.434 03:41:02 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:44.434 03:41:02 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:44.434 03:41:02 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:44.434 03:41:02 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:44.434 03:41:02 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:44.434 03:41:02 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:44.434 03:41:02 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:44.434 03:41:02 -- paths/export.sh@5 -- # export PATH 00:08:44.434 03:41:02 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:44.434 03:41:02 -- nvmf/common.sh@46 -- # : 0 00:08:44.434 03:41:02 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:44.434 03:41:02 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:44.434 03:41:02 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:44.434 03:41:02 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:44.434 03:41:02 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:44.434 03:41:02 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:08:44.434 03:41:02 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:44.434 03:41:02 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:44.434 03:41:02 -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:44.434 03:41:02 -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:44.434 03:41:02 -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:08:44.434 03:41:02 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:08:44.434 03:41:02 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:44.434 03:41:02 -- nvmf/common.sh@436 -- # prepare_net_devs 00:08:44.434 03:41:02 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:08:44.434 03:41:02 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:08:44.434 03:41:02 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:44.434 03:41:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:44.434 03:41:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:44.434 03:41:02 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:08:44.434 03:41:02 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:08:44.434 03:41:02 -- nvmf/common.sh@284 -- # xtrace_disable 00:08:44.434 03:41:02 -- common/autotest_common.sh@10 -- # set +x 00:08:45.809 03:41:04 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:08:45.809 03:41:04 -- nvmf/common.sh@290 -- # pci_devs=() 00:08:45.809 03:41:04 -- nvmf/common.sh@290 -- # local -a pci_devs 00:08:45.809 03:41:04 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:08:45.809 03:41:04 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:08:45.809 03:41:04 -- nvmf/common.sh@292 -- # pci_drivers=() 00:08:45.809 03:41:04 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:08:45.809 03:41:04 -- nvmf/common.sh@294 -- # net_devs=() 00:08:45.809 03:41:04 -- nvmf/common.sh@294 -- # local -ga net_devs 
00:08:45.809 03:41:04 -- nvmf/common.sh@295 -- # e810=() 00:08:45.809 03:41:04 -- nvmf/common.sh@295 -- # local -ga e810 00:08:45.809 03:41:04 -- nvmf/common.sh@296 -- # x722=() 00:08:45.809 03:41:04 -- nvmf/common.sh@296 -- # local -ga x722 00:08:45.809 03:41:04 -- nvmf/common.sh@297 -- # mlx=() 00:08:45.809 03:41:04 -- nvmf/common.sh@297 -- # local -ga mlx 00:08:45.809 03:41:04 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:45.809 03:41:04 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:45.809 03:41:04 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:45.809 03:41:04 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:45.809 03:41:04 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:45.809 03:41:04 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:45.809 03:41:04 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:45.809 03:41:04 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:45.809 03:41:04 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:45.809 03:41:04 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:45.809 03:41:04 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:45.809 03:41:04 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:08:45.809 03:41:04 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:08:45.809 03:41:04 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:08:45.809 03:41:04 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:08:45.809 03:41:04 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:08:45.809 03:41:04 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:08:45.809 03:41:04 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:08:45.809 03:41:04 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:45.809 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:45.809 03:41:04 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:08:45.809 03:41:04 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:08:45.809 03:41:04 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:45.809 03:41:04 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:45.809 03:41:04 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:08:45.809 03:41:04 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:08:45.809 03:41:04 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:45.809 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:45.809 03:41:04 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:08:45.809 03:41:04 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:08:45.809 03:41:04 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:45.809 03:41:04 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:45.809 03:41:04 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:08:45.809 03:41:04 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:08:45.809 03:41:04 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:08:45.809 03:41:04 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:08:45.809 03:41:04 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:08:45.809 03:41:04 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:45.809 03:41:04 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:08:45.809 03:41:04 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:45.809 03:41:04 -- nvmf/common.sh@388 -- # echo 'Found net devices 
under 0000:0a:00.0: cvl_0_0' 00:08:45.809 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:45.809 03:41:04 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:08:45.809 03:41:04 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:08:45.809 03:41:04 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:45.809 03:41:04 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:08:45.809 03:41:04 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:45.809 03:41:04 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:45.809 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:45.809 03:41:04 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:08:45.809 03:41:04 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:08:45.809 03:41:04 -- nvmf/common.sh@402 -- # is_hw=yes 00:08:45.809 03:41:04 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:08:45.809 03:41:04 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:08:45.809 03:41:04 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:08:45.809 03:41:04 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:45.809 03:41:04 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:45.809 03:41:04 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:45.809 03:41:04 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:08:45.809 03:41:04 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:45.809 03:41:04 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:45.809 03:41:04 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:08:45.809 03:41:04 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:45.809 03:41:04 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:45.809 03:41:04 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:08:46.069 03:41:04 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:08:46.069 03:41:04 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:08:46.069 03:41:04 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:46.069 03:41:04 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:46.069 03:41:04 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:46.069 03:41:04 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:08:46.069 03:41:04 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:46.069 03:41:04 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:46.069 03:41:04 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:46.069 03:41:04 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:08:46.069 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:46.069 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.278 ms 00:08:46.069 00:08:46.069 --- 10.0.0.2 ping statistics --- 00:08:46.069 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:46.069 rtt min/avg/max/mdev = 0.278/0.278/0.278/0.000 ms 00:08:46.069 03:41:04 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:46.069 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:46.069 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.195 ms 00:08:46.069 00:08:46.069 --- 10.0.0.1 ping statistics --- 00:08:46.069 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:46.069 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:08:46.069 03:41:04 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:46.069 03:41:04 -- nvmf/common.sh@410 -- # return 0 00:08:46.069 03:41:04 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:08:46.069 03:41:04 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:46.069 03:41:04 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:08:46.069 03:41:04 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:08:46.069 03:41:04 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:46.069 03:41:04 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:08:46.069 03:41:04 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:08:46.069 03:41:04 -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:08:46.069 03:41:04 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:46.069 03:41:04 -- common/autotest_common.sh@712 -- # xtrace_disable 00:08:46.069 03:41:04 -- common/autotest_common.sh@10 -- # set +x 00:08:46.069 03:41:04 -- nvmf/common.sh@469 -- # nvmfpid=2291334 00:08:46.069 03:41:04 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:46.069 03:41:04 -- nvmf/common.sh@470 -- # waitforlisten 2291334 00:08:46.069 03:41:04 -- common/autotest_common.sh@819 -- # '[' -z 2291334 ']' 00:08:46.069 03:41:04 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:46.069 03:41:04 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:46.069 03:41:04 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:46.069 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:46.069 03:41:04 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:46.069 03:41:04 -- common/autotest_common.sh@10 -- # set +x 00:08:46.069 [2024-07-14 03:41:04.967229] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:08:46.069 [2024-07-14 03:41:04.967342] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:46.069 EAL: No free 2048 kB hugepages reported on node 1 00:08:46.327 [2024-07-14 03:41:05.036477] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:46.327 [2024-07-14 03:41:05.124942] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:46.327 [2024-07-14 03:41:05.125115] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:46.327 [2024-07-14 03:41:05.125134] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:46.327 [2024-07-14 03:41:05.125155] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
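In condensed form, the network bring-up traced above (the nvmf_tcp_init path in test/nvmf/common.sh) is the sequence below. The cvl_0_0/cvl_0_1 interface names and 10.0.0.x addresses are specific to this run, and the helper internals are only sketched, not reproduced exactly:

    # target port moves into its own namespace, initiator port stays in the root namespace
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # let NVMe/TCP port 4420 through
    ping -c 1 10.0.0.2                                                 # initiator -> target reachability
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                   # target -> initiator reachability
    modprobe nvme-tcp                                                  # host-side NVMe/TCP driver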
00:08:46.327 [2024-07-14 03:41:05.125206] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:46.327 [2024-07-14 03:41:05.125270] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:46.327 [2024-07-14 03:41:05.125303] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:46.327 [2024-07-14 03:41:05.125304] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:47.317 03:41:05 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:47.317 03:41:05 -- common/autotest_common.sh@852 -- # return 0 00:08:47.317 03:41:05 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:47.317 03:41:05 -- common/autotest_common.sh@718 -- # xtrace_disable 00:08:47.317 03:41:05 -- common/autotest_common.sh@10 -- # set +x 00:08:47.317 03:41:05 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:47.317 03:41:05 -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:08:47.317 03:41:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:47.317 03:41:05 -- common/autotest_common.sh@10 -- # set +x 00:08:47.317 [2024-07-14 03:41:05.957484] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:47.317 03:41:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:47.317 03:41:05 -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:08:47.317 03:41:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:47.317 03:41:05 -- common/autotest_common.sh@10 -- # set +x 00:08:47.317 03:41:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:47.317 03:41:05 -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:08:47.317 03:41:05 -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:47.317 03:41:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:47.317 03:41:05 -- common/autotest_common.sh@10 -- # set +x 00:08:47.317 03:41:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:47.317 03:41:06 -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:47.317 03:41:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:47.317 03:41:06 -- common/autotest_common.sh@10 -- # set +x 00:08:47.317 03:41:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:47.317 03:41:06 -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:47.317 03:41:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:47.317 03:41:06 -- common/autotest_common.sh@10 -- # set +x 00:08:47.317 [2024-07-14 03:41:06.014527] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:47.317 03:41:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:47.317 03:41:06 -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:08:47.317 03:41:06 -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:08:47.317 03:41:06 -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:08:47.317 03:41:06 -- target/connect_disconnect.sh@34 -- # set +x 00:08:49.846 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:51.745 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:54.272 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:56.795 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 
00:08:58.697 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:01.232 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:03.765 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:05.669 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:08.256 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:10.785 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:12.694 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:15.234 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:17.770 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:19.679 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:22.214 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:24.118 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:26.658 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:29.234 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:31.141 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:33.678 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:36.208 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:38.115 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:40.651 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:43.186 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:45.089 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:47.625 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:49.552 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:52.083 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:54.631 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:57.182 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:59.089 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:01.625 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:04.163 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:06.074 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:08.665 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:10.567 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:13.100 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:15.646 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:18.176 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:20.086 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:22.622 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:25.152 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:27.057 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:29.632 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:31.530 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:34.068 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:36.588 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:38.487 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:41.013 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:43.540 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:46.063 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:47.959 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:50.523 NQN:nqn.2016-06.io.spdk:cnode1 
disconnected 1 controller(s) 00:10:52.420 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:54.956 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:57.483 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:00.012 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:01.922 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:04.458 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:06.991 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:08.889 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:11.447 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:13.974 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:15.873 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:18.412 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:20.948 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:23.484 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:25.385 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:27.920 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:30.479 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:32.379 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:34.905 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:37.432 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:39.331 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:41.878 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:43.771 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:46.292 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:48.817 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:50.755 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:53.283 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:55.807 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:57.705 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:00.233 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:02.137 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:04.661 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:07.252 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:09.151 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:11.679 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:14.215 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:16.131 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:18.665 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:21.208 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:23.111 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:25.640 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:28.180 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:30.086 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:32.623 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:35.198 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:37.103 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:39.639 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:39.639 03:44:58 -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 
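The hundred "disconnected 1 controller(s)" lines above are the body of the connect/disconnect loop. Roughly, the target-side RPC setup traced earlier plus that loop reduce to the sketch below; the nvme connect/disconnect argument forms are an assumption inferred from NVME_CONNECT='nvme connect -i 8' and the nvme-cli disconnect output, not copied verbatim from connect_disconnect.sh:

    # target side (RPCs issued to the nvmf_tgt running inside cvl_0_0_ns_spdk)
    rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0
    rpc_cmd bdev_malloc_create 64 512                                  # creates Malloc0
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # initiator side: 100 attach/detach cycles against the same subsystem
    for i in $(seq 1 100); do
        nvme connect -i 8 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
        nvme disconnect -n nqn.2016-06.io.spdk:cnode1   # prints "NQN:... disconnected 1 controller(s)"
    done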
00:12:39.639 03:44:58 -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:12:39.639 03:44:58 -- nvmf/common.sh@476 -- # nvmfcleanup 00:12:39.639 03:44:58 -- nvmf/common.sh@116 -- # sync 00:12:39.639 03:44:58 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:12:39.639 03:44:58 -- nvmf/common.sh@119 -- # set +e 00:12:39.639 03:44:58 -- nvmf/common.sh@120 -- # for i in {1..20} 00:12:39.639 03:44:58 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:12:39.639 rmmod nvme_tcp 00:12:39.639 rmmod nvme_fabrics 00:12:39.639 rmmod nvme_keyring 00:12:39.639 03:44:58 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:12:39.639 03:44:58 -- nvmf/common.sh@123 -- # set -e 00:12:39.639 03:44:58 -- nvmf/common.sh@124 -- # return 0 00:12:39.639 03:44:58 -- nvmf/common.sh@477 -- # '[' -n 2291334 ']' 00:12:39.639 03:44:58 -- nvmf/common.sh@478 -- # killprocess 2291334 00:12:39.639 03:44:58 -- common/autotest_common.sh@926 -- # '[' -z 2291334 ']' 00:12:39.639 03:44:58 -- common/autotest_common.sh@930 -- # kill -0 2291334 00:12:39.639 03:44:58 -- common/autotest_common.sh@931 -- # uname 00:12:39.639 03:44:58 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:12:39.639 03:44:58 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2291334 00:12:39.639 03:44:58 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:12:39.639 03:44:58 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:12:39.639 03:44:58 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2291334' 00:12:39.639 killing process with pid 2291334 00:12:39.639 03:44:58 -- common/autotest_common.sh@945 -- # kill 2291334 00:12:39.639 03:44:58 -- common/autotest_common.sh@950 -- # wait 2291334 00:12:39.898 03:44:58 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:12:39.898 03:44:58 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:12:39.898 03:44:58 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:12:39.898 03:44:58 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:39.898 03:44:58 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:12:39.898 03:44:58 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:39.898 03:44:58 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:39.898 03:44:58 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:41.802 03:45:00 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:12:41.802 00:12:41.802 real 3m57.870s 00:12:41.802 user 15m6.530s 00:12:41.802 sys 0m35.005s 00:12:41.802 03:45:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:41.802 03:45:00 -- common/autotest_common.sh@10 -- # set +x 00:12:41.802 ************************************ 00:12:41.802 END TEST nvmf_connect_disconnect 00:12:41.802 ************************************ 00:12:41.802 03:45:00 -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:41.802 03:45:00 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:12:41.802 03:45:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:41.802 03:45:00 -- common/autotest_common.sh@10 -- # set +x 00:12:41.802 ************************************ 00:12:41.802 START TEST nvmf_multitarget 00:12:41.802 ************************************ 00:12:41.802 03:45:00 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:41.802 * Looking for test storage... 
00:12:41.802 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:41.802 03:45:00 -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:41.802 03:45:00 -- nvmf/common.sh@7 -- # uname -s 00:12:41.802 03:45:00 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:41.802 03:45:00 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:41.802 03:45:00 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:41.802 03:45:00 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:41.802 03:45:00 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:41.802 03:45:00 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:42.061 03:45:00 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:42.061 03:45:00 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:42.061 03:45:00 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:42.061 03:45:00 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:42.061 03:45:00 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:42.061 03:45:00 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:42.061 03:45:00 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:42.061 03:45:00 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:42.061 03:45:00 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:42.061 03:45:00 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:42.061 03:45:00 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:42.061 03:45:00 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:42.061 03:45:00 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:42.061 03:45:00 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:42.061 03:45:00 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:42.061 03:45:00 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:42.061 03:45:00 -- paths/export.sh@5 -- # export PATH 00:12:42.061 03:45:00 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:42.061 03:45:00 -- nvmf/common.sh@46 -- # : 0 00:12:42.061 03:45:00 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:12:42.061 03:45:00 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:12:42.061 03:45:00 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:12:42.061 03:45:00 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:42.061 03:45:00 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:42.061 03:45:00 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:12:42.061 03:45:00 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:12:42.061 03:45:00 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:12:42.061 03:45:00 -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:12:42.061 03:45:00 -- target/multitarget.sh@15 -- # nvmftestinit 00:12:42.061 03:45:00 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:12:42.061 03:45:00 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:42.061 03:45:00 -- nvmf/common.sh@436 -- # prepare_net_devs 00:12:42.061 03:45:00 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:12:42.061 03:45:00 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:12:42.061 03:45:00 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:42.061 03:45:00 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:42.061 03:45:00 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:42.061 03:45:00 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:12:42.061 03:45:00 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:12:42.061 03:45:00 -- nvmf/common.sh@284 -- # xtrace_disable 00:12:42.061 03:45:00 -- common/autotest_common.sh@10 -- # set +x 00:12:43.970 03:45:02 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:12:43.970 03:45:02 -- nvmf/common.sh@290 -- # pci_devs=() 00:12:43.970 03:45:02 -- nvmf/common.sh@290 -- # local -a pci_devs 00:12:43.970 03:45:02 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:12:43.970 03:45:02 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:12:43.970 03:45:02 -- nvmf/common.sh@292 -- # pci_drivers=() 00:12:43.970 03:45:02 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:12:43.970 03:45:02 -- nvmf/common.sh@294 -- # net_devs=() 00:12:43.970 03:45:02 -- nvmf/common.sh@294 -- # local -ga net_devs 00:12:43.970 03:45:02 -- 
nvmf/common.sh@295 -- # e810=() 00:12:43.970 03:45:02 -- nvmf/common.sh@295 -- # local -ga e810 00:12:43.970 03:45:02 -- nvmf/common.sh@296 -- # x722=() 00:12:43.970 03:45:02 -- nvmf/common.sh@296 -- # local -ga x722 00:12:43.970 03:45:02 -- nvmf/common.sh@297 -- # mlx=() 00:12:43.970 03:45:02 -- nvmf/common.sh@297 -- # local -ga mlx 00:12:43.970 03:45:02 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:43.970 03:45:02 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:43.970 03:45:02 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:43.970 03:45:02 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:43.970 03:45:02 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:43.970 03:45:02 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:43.970 03:45:02 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:43.970 03:45:02 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:43.970 03:45:02 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:43.970 03:45:02 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:43.970 03:45:02 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:43.970 03:45:02 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:12:43.970 03:45:02 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:12:43.970 03:45:02 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:12:43.970 03:45:02 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:12:43.971 03:45:02 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:12:43.971 03:45:02 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:12:43.971 03:45:02 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:12:43.971 03:45:02 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:43.971 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:43.971 03:45:02 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:12:43.971 03:45:02 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:12:43.971 03:45:02 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:43.971 03:45:02 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:43.971 03:45:02 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:12:43.971 03:45:02 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:12:43.971 03:45:02 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:43.971 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:43.971 03:45:02 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:12:43.971 03:45:02 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:12:43.971 03:45:02 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:43.971 03:45:02 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:43.971 03:45:02 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:12:43.971 03:45:02 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:12:43.971 03:45:02 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:12:43.971 03:45:02 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:12:43.971 03:45:02 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:12:43.971 03:45:02 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:43.971 03:45:02 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:12:43.971 03:45:02 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:43.971 03:45:02 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 
00:12:43.971 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:43.971 03:45:02 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:12:43.971 03:45:02 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:12:43.971 03:45:02 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:43.971 03:45:02 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:12:43.971 03:45:02 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:43.971 03:45:02 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:43.971 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:43.971 03:45:02 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:12:43.971 03:45:02 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:12:43.971 03:45:02 -- nvmf/common.sh@402 -- # is_hw=yes 00:12:43.971 03:45:02 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:12:43.971 03:45:02 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:12:43.971 03:45:02 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:12:43.971 03:45:02 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:43.971 03:45:02 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:43.971 03:45:02 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:43.971 03:45:02 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:12:43.971 03:45:02 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:43.971 03:45:02 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:43.971 03:45:02 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:12:43.971 03:45:02 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:43.971 03:45:02 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:43.971 03:45:02 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:12:43.971 03:45:02 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:12:43.971 03:45:02 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:12:43.971 03:45:02 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:43.971 03:45:02 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:43.971 03:45:02 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:43.971 03:45:02 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:12:43.971 03:45:02 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:43.971 03:45:02 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:43.971 03:45:02 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:43.971 03:45:02 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:12:43.971 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:43.971 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.230 ms 00:12:43.971 00:12:43.971 --- 10.0.0.2 ping statistics --- 00:12:43.971 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:43.971 rtt min/avg/max/mdev = 0.230/0.230/0.230/0.000 ms 00:12:43.971 03:45:02 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:43.971 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:43.971 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.176 ms 00:12:43.971 00:12:43.971 --- 10.0.0.1 ping statistics --- 00:12:43.971 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:43.971 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms 00:12:43.971 03:45:02 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:43.971 03:45:02 -- nvmf/common.sh@410 -- # return 0 00:12:43.971 03:45:02 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:12:43.971 03:45:02 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:43.971 03:45:02 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:12:43.971 03:45:02 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:12:43.971 03:45:02 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:43.971 03:45:02 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:12:43.971 03:45:02 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:12:43.971 03:45:02 -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:12:43.971 03:45:02 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:12:43.971 03:45:02 -- common/autotest_common.sh@712 -- # xtrace_disable 00:12:43.971 03:45:02 -- common/autotest_common.sh@10 -- # set +x 00:12:43.971 03:45:02 -- nvmf/common.sh@469 -- # nvmfpid=2323614 00:12:43.971 03:45:02 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:43.971 03:45:02 -- nvmf/common.sh@470 -- # waitforlisten 2323614 00:12:43.971 03:45:02 -- common/autotest_common.sh@819 -- # '[' -z 2323614 ']' 00:12:43.971 03:45:02 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:43.971 03:45:02 -- common/autotest_common.sh@824 -- # local max_retries=100 00:12:43.971 03:45:02 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:43.971 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:43.971 03:45:02 -- common/autotest_common.sh@828 -- # xtrace_disable 00:12:43.971 03:45:02 -- common/autotest_common.sh@10 -- # set +x 00:12:44.230 [2024-07-14 03:45:02.936438] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:12:44.230 [2024-07-14 03:45:02.936523] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:44.230 EAL: No free 2048 kB hugepages reported on node 1 00:12:44.230 [2024-07-14 03:45:03.001125] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:44.230 [2024-07-14 03:45:03.084024] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:44.230 [2024-07-14 03:45:03.084193] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:44.230 [2024-07-14 03:45:03.084211] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:44.230 [2024-07-14 03:45:03.084223] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:44.230 [2024-07-14 03:45:03.084274] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:44.230 [2024-07-14 03:45:03.084334] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:44.230 [2024-07-14 03:45:03.084400] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:44.230 [2024-07-14 03:45:03.084402] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:45.167 03:45:03 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:12:45.167 03:45:03 -- common/autotest_common.sh@852 -- # return 0 00:12:45.167 03:45:03 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:12:45.167 03:45:03 -- common/autotest_common.sh@718 -- # xtrace_disable 00:12:45.167 03:45:03 -- common/autotest_common.sh@10 -- # set +x 00:12:45.167 03:45:03 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:45.167 03:45:03 -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:45.167 03:45:03 -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:45.167 03:45:03 -- target/multitarget.sh@21 -- # jq length 00:12:45.167 03:45:04 -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:12:45.167 03:45:04 -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:12:45.424 "nvmf_tgt_1" 00:12:45.424 03:45:04 -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:12:45.424 "nvmf_tgt_2" 00:12:45.424 03:45:04 -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:45.424 03:45:04 -- target/multitarget.sh@28 -- # jq length 00:12:45.682 03:45:04 -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:12:45.682 03:45:04 -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:12:45.682 true 00:12:45.682 03:45:04 -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:12:45.682 true 00:12:45.682 03:45:04 -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:45.682 03:45:04 -- target/multitarget.sh@35 -- # jq length 00:12:45.941 03:45:04 -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:12:45.941 03:45:04 -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:12:45.941 03:45:04 -- target/multitarget.sh@41 -- # nvmftestfini 00:12:45.941 03:45:04 -- nvmf/common.sh@476 -- # nvmfcleanup 00:12:45.941 03:45:04 -- nvmf/common.sh@116 -- # sync 00:12:45.941 03:45:04 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:12:45.941 03:45:04 -- nvmf/common.sh@119 -- # set +e 00:12:45.941 03:45:04 -- nvmf/common.sh@120 -- # for i in {1..20} 00:12:45.941 03:45:04 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:12:45.941 rmmod nvme_tcp 00:12:45.941 rmmod nvme_fabrics 00:12:45.941 rmmod nvme_keyring 00:12:45.941 03:45:04 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:12:45.941 03:45:04 -- nvmf/common.sh@123 -- # set -e 00:12:45.941 03:45:04 -- nvmf/common.sh@124 -- # return 0 
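Stripped of the xtrace noise, the multitarget checks in this pass are a handful of multitarget_rpc.py calls with jq length assertions; the counts below are the ones asserted in the trace, and the assertion form is a simplified sketch of the script's checks:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py
    [ "$($rpc nvmf_get_targets | jq length)" -eq 1 ]      # only the default target to start
    $rpc nvmf_create_target -n nvmf_tgt_1 -s 32
    $rpc nvmf_create_target -n nvmf_tgt_2 -s 32
    [ "$($rpc nvmf_get_targets | jq length)" -eq 3 ]      # default + nvmf_tgt_1 + nvmf_tgt_2
    $rpc nvmf_delete_target -n nvmf_tgt_1
    $rpc nvmf_delete_target -n nvmf_tgt_2
    [ "$($rpc nvmf_get_targets | jq length)" -eq 1 ]      # back to just the default target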
00:12:45.941 03:45:04 -- nvmf/common.sh@477 -- # '[' -n 2323614 ']' 00:12:45.941 03:45:04 -- nvmf/common.sh@478 -- # killprocess 2323614 00:12:45.941 03:45:04 -- common/autotest_common.sh@926 -- # '[' -z 2323614 ']' 00:12:45.941 03:45:04 -- common/autotest_common.sh@930 -- # kill -0 2323614 00:12:45.941 03:45:04 -- common/autotest_common.sh@931 -- # uname 00:12:45.941 03:45:04 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:12:45.941 03:45:04 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2323614 00:12:45.941 03:45:04 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:12:45.941 03:45:04 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:12:45.941 03:45:04 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2323614' 00:12:45.941 killing process with pid 2323614 00:12:45.941 03:45:04 -- common/autotest_common.sh@945 -- # kill 2323614 00:12:45.941 03:45:04 -- common/autotest_common.sh@950 -- # wait 2323614 00:12:46.200 03:45:05 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:12:46.200 03:45:05 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:12:46.200 03:45:05 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:12:46.200 03:45:05 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:46.200 03:45:05 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:12:46.200 03:45:05 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:46.200 03:45:05 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:46.200 03:45:05 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:48.733 03:45:07 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:12:48.733 00:12:48.733 real 0m6.384s 00:12:48.733 user 0m9.379s 00:12:48.733 sys 0m1.965s 00:12:48.733 03:45:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:48.733 03:45:07 -- common/autotest_common.sh@10 -- # set +x 00:12:48.733 ************************************ 00:12:48.733 END TEST nvmf_multitarget 00:12:48.733 ************************************ 00:12:48.733 03:45:07 -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:48.733 03:45:07 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:12:48.733 03:45:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:48.733 03:45:07 -- common/autotest_common.sh@10 -- # set +x 00:12:48.733 ************************************ 00:12:48.733 START TEST nvmf_rpc 00:12:48.733 ************************************ 00:12:48.733 03:45:07 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:48.733 * Looking for test storage... 
00:12:48.733 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:48.733 03:45:07 -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:48.733 03:45:07 -- nvmf/common.sh@7 -- # uname -s 00:12:48.733 03:45:07 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:48.733 03:45:07 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:48.733 03:45:07 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:48.733 03:45:07 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:48.733 03:45:07 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:48.733 03:45:07 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:48.733 03:45:07 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:48.733 03:45:07 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:48.733 03:45:07 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:48.733 03:45:07 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:48.733 03:45:07 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:48.733 03:45:07 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:48.733 03:45:07 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:48.733 03:45:07 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:48.733 03:45:07 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:48.733 03:45:07 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:48.733 03:45:07 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:48.733 03:45:07 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:48.733 03:45:07 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:48.733 03:45:07 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:48.733 03:45:07 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:48.733 03:45:07 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:48.733 03:45:07 -- paths/export.sh@5 -- # export PATH 00:12:48.733 03:45:07 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:48.733 03:45:07 -- nvmf/common.sh@46 -- # : 0 00:12:48.733 03:45:07 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:12:48.733 03:45:07 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:12:48.733 03:45:07 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:12:48.733 03:45:07 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:48.733 03:45:07 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:48.733 03:45:07 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:12:48.733 03:45:07 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:12:48.733 03:45:07 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:12:48.733 03:45:07 -- target/rpc.sh@11 -- # loops=5 00:12:48.733 03:45:07 -- target/rpc.sh@23 -- # nvmftestinit 00:12:48.733 03:45:07 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:12:48.733 03:45:07 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:48.733 03:45:07 -- nvmf/common.sh@436 -- # prepare_net_devs 00:12:48.733 03:45:07 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:12:48.733 03:45:07 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:12:48.733 03:45:07 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:48.733 03:45:07 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:48.733 03:45:07 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:48.733 03:45:07 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:12:48.733 03:45:07 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:12:48.733 03:45:07 -- nvmf/common.sh@284 -- # xtrace_disable 00:12:48.733 03:45:07 -- common/autotest_common.sh@10 -- # set +x 00:12:50.636 03:45:09 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:12:50.636 03:45:09 -- nvmf/common.sh@290 -- # pci_devs=() 00:12:50.636 03:45:09 -- nvmf/common.sh@290 -- # local -a pci_devs 00:12:50.636 03:45:09 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:12:50.636 03:45:09 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:12:50.636 03:45:09 -- nvmf/common.sh@292 -- # pci_drivers=() 00:12:50.637 03:45:09 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:12:50.637 03:45:09 -- nvmf/common.sh@294 -- # net_devs=() 00:12:50.637 03:45:09 -- nvmf/common.sh@294 -- # local -ga net_devs 00:12:50.637 03:45:09 -- nvmf/common.sh@295 -- # e810=() 00:12:50.637 03:45:09 -- nvmf/common.sh@295 -- # local -ga e810 00:12:50.637 
03:45:09 -- nvmf/common.sh@296 -- # x722=() 00:12:50.637 03:45:09 -- nvmf/common.sh@296 -- # local -ga x722 00:12:50.637 03:45:09 -- nvmf/common.sh@297 -- # mlx=() 00:12:50.637 03:45:09 -- nvmf/common.sh@297 -- # local -ga mlx 00:12:50.637 03:45:09 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:50.637 03:45:09 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:50.637 03:45:09 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:50.637 03:45:09 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:50.637 03:45:09 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:50.637 03:45:09 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:50.637 03:45:09 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:50.637 03:45:09 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:50.637 03:45:09 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:50.637 03:45:09 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:50.637 03:45:09 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:50.637 03:45:09 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:12:50.637 03:45:09 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:12:50.637 03:45:09 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:12:50.637 03:45:09 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:12:50.637 03:45:09 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:12:50.637 03:45:09 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:12:50.637 03:45:09 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:12:50.637 03:45:09 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:50.637 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:50.637 03:45:09 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:12:50.637 03:45:09 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:12:50.637 03:45:09 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:50.637 03:45:09 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:50.637 03:45:09 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:12:50.637 03:45:09 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:12:50.637 03:45:09 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:50.637 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:50.637 03:45:09 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:12:50.637 03:45:09 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:12:50.637 03:45:09 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:50.637 03:45:09 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:50.637 03:45:09 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:12:50.637 03:45:09 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:12:50.637 03:45:09 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:12:50.637 03:45:09 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:12:50.637 03:45:09 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:12:50.637 03:45:09 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:50.637 03:45:09 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:12:50.637 03:45:09 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:50.637 03:45:09 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:50.637 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:50.637 03:45:09 -- nvmf/common.sh@389 -- # 
net_devs+=("${pci_net_devs[@]}") 00:12:50.637 03:45:09 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:12:50.637 03:45:09 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:50.637 03:45:09 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:12:50.637 03:45:09 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:50.637 03:45:09 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:50.637 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:50.637 03:45:09 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:12:50.637 03:45:09 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:12:50.637 03:45:09 -- nvmf/common.sh@402 -- # is_hw=yes 00:12:50.637 03:45:09 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:12:50.637 03:45:09 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:12:50.637 03:45:09 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:12:50.637 03:45:09 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:50.637 03:45:09 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:50.637 03:45:09 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:50.637 03:45:09 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:12:50.637 03:45:09 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:50.637 03:45:09 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:50.637 03:45:09 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:12:50.637 03:45:09 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:50.637 03:45:09 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:50.637 03:45:09 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:12:50.637 03:45:09 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:12:50.637 03:45:09 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:12:50.637 03:45:09 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:50.637 03:45:09 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:50.637 03:45:09 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:50.637 03:45:09 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:12:50.637 03:45:09 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:50.637 03:45:09 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:50.637 03:45:09 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:50.637 03:45:09 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:12:50.637 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:50.637 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.219 ms 00:12:50.637 00:12:50.637 --- 10.0.0.2 ping statistics --- 00:12:50.637 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:50.637 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:12:50.637 03:45:09 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:50.637 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:50.637 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.141 ms 00:12:50.637 00:12:50.637 --- 10.0.0.1 ping statistics --- 00:12:50.637 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:50.637 rtt min/avg/max/mdev = 0.141/0.141/0.141/0.000 ms 00:12:50.637 03:45:09 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:50.637 03:45:09 -- nvmf/common.sh@410 -- # return 0 00:12:50.637 03:45:09 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:12:50.637 03:45:09 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:50.637 03:45:09 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:12:50.637 03:45:09 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:12:50.637 03:45:09 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:50.637 03:45:09 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:12:50.637 03:45:09 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:12:50.637 03:45:09 -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:12:50.637 03:45:09 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:12:50.637 03:45:09 -- common/autotest_common.sh@712 -- # xtrace_disable 00:12:50.637 03:45:09 -- common/autotest_common.sh@10 -- # set +x 00:12:50.637 03:45:09 -- nvmf/common.sh@469 -- # nvmfpid=2326358 00:12:50.637 03:45:09 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:50.637 03:45:09 -- nvmf/common.sh@470 -- # waitforlisten 2326358 00:12:50.637 03:45:09 -- common/autotest_common.sh@819 -- # '[' -z 2326358 ']' 00:12:50.637 03:45:09 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:50.637 03:45:09 -- common/autotest_common.sh@824 -- # local max_retries=100 00:12:50.637 03:45:09 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:50.637 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:50.637 03:45:09 -- common/autotest_common.sh@828 -- # xtrace_disable 00:12:50.637 03:45:09 -- common/autotest_common.sh@10 -- # set +x 00:12:50.637 [2024-07-14 03:45:09.322668] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:12:50.637 [2024-07-14 03:45:09.322743] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:50.637 EAL: No free 2048 kB hugepages reported on node 1 00:12:50.637 [2024-07-14 03:45:09.393394] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:50.637 [2024-07-14 03:45:09.487103] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:50.637 [2024-07-14 03:45:09.487260] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:50.637 [2024-07-14 03:45:09.487279] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:50.637 [2024-07-14 03:45:09.487293] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
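For reference, the TCP test bed traced above (nvmf_tcp_init plus nvmfappstart) can be reproduced with roughly the following commands. This is a minimal sketch, not the harness itself: the interface names cvl_0_0/cvl_0_1 and the nvmf_tgt binary path are the ones detected and used in this particular run, and the socket wait loop is a simplified stand-in for waitforlisten.

# Move one NIC port into its own namespace and address both ends (sketch of nvmf_tcp_init).
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side lives in the namespace
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # let NVMe/TCP traffic reach the initiator port
ping -c 1 10.0.0.2                                                 # sanity-check both directions, as above
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
modprobe nvme-tcp
# Start the SPDK target inside the namespace, then wait for its RPC socket.
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
until [ -S /var/tmp/spdk.sock ]; do sleep 0.5; done                # simplified stand-in for waitforlisten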
00:12:50.637 [2024-07-14 03:45:09.487383] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:50.637 [2024-07-14 03:45:09.487420] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:50.637 [2024-07-14 03:45:09.487474] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:50.637 [2024-07-14 03:45:09.487477] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:51.599 03:45:10 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:12:51.599 03:45:10 -- common/autotest_common.sh@852 -- # return 0 00:12:51.599 03:45:10 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:12:51.599 03:45:10 -- common/autotest_common.sh@718 -- # xtrace_disable 00:12:51.599 03:45:10 -- common/autotest_common.sh@10 -- # set +x 00:12:51.599 03:45:10 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:51.599 03:45:10 -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:12:51.599 03:45:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:51.599 03:45:10 -- common/autotest_common.sh@10 -- # set +x 00:12:51.599 03:45:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:51.599 03:45:10 -- target/rpc.sh@26 -- # stats='{ 00:12:51.599 "tick_rate": 2700000000, 00:12:51.599 "poll_groups": [ 00:12:51.599 { 00:12:51.599 "name": "nvmf_tgt_poll_group_0", 00:12:51.599 "admin_qpairs": 0, 00:12:51.599 "io_qpairs": 0, 00:12:51.599 "current_admin_qpairs": 0, 00:12:51.599 "current_io_qpairs": 0, 00:12:51.599 "pending_bdev_io": 0, 00:12:51.599 "completed_nvme_io": 0, 00:12:51.599 "transports": [] 00:12:51.599 }, 00:12:51.599 { 00:12:51.599 "name": "nvmf_tgt_poll_group_1", 00:12:51.599 "admin_qpairs": 0, 00:12:51.599 "io_qpairs": 0, 00:12:51.599 "current_admin_qpairs": 0, 00:12:51.599 "current_io_qpairs": 0, 00:12:51.599 "pending_bdev_io": 0, 00:12:51.599 "completed_nvme_io": 0, 00:12:51.599 "transports": [] 00:12:51.599 }, 00:12:51.599 { 00:12:51.599 "name": "nvmf_tgt_poll_group_2", 00:12:51.599 "admin_qpairs": 0, 00:12:51.599 "io_qpairs": 0, 00:12:51.599 "current_admin_qpairs": 0, 00:12:51.599 "current_io_qpairs": 0, 00:12:51.599 "pending_bdev_io": 0, 00:12:51.599 "completed_nvme_io": 0, 00:12:51.599 "transports": [] 00:12:51.599 }, 00:12:51.599 { 00:12:51.599 "name": "nvmf_tgt_poll_group_3", 00:12:51.599 "admin_qpairs": 0, 00:12:51.599 "io_qpairs": 0, 00:12:51.599 "current_admin_qpairs": 0, 00:12:51.599 "current_io_qpairs": 0, 00:12:51.599 "pending_bdev_io": 0, 00:12:51.599 "completed_nvme_io": 0, 00:12:51.599 "transports": [] 00:12:51.599 } 00:12:51.599 ] 00:12:51.599 }' 00:12:51.599 03:45:10 -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:12:51.599 03:45:10 -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:12:51.599 03:45:10 -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:12:51.599 03:45:10 -- target/rpc.sh@15 -- # wc -l 00:12:51.599 03:45:10 -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:12:51.599 03:45:10 -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:12:51.599 03:45:10 -- target/rpc.sh@29 -- # [[ null == null ]] 00:12:51.599 03:45:10 -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:51.599 03:45:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:51.599 03:45:10 -- common/autotest_common.sh@10 -- # set +x 00:12:51.599 [2024-07-14 03:45:10.461072] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:51.599 03:45:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:51.599 03:45:10 -- 
target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:12:51.599 03:45:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:51.599 03:45:10 -- common/autotest_common.sh@10 -- # set +x 00:12:51.599 03:45:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:51.599 03:45:10 -- target/rpc.sh@33 -- # stats='{ 00:12:51.599 "tick_rate": 2700000000, 00:12:51.599 "poll_groups": [ 00:12:51.599 { 00:12:51.599 "name": "nvmf_tgt_poll_group_0", 00:12:51.599 "admin_qpairs": 0, 00:12:51.599 "io_qpairs": 0, 00:12:51.599 "current_admin_qpairs": 0, 00:12:51.599 "current_io_qpairs": 0, 00:12:51.599 "pending_bdev_io": 0, 00:12:51.599 "completed_nvme_io": 0, 00:12:51.599 "transports": [ 00:12:51.599 { 00:12:51.599 "trtype": "TCP" 00:12:51.599 } 00:12:51.599 ] 00:12:51.599 }, 00:12:51.599 { 00:12:51.599 "name": "nvmf_tgt_poll_group_1", 00:12:51.599 "admin_qpairs": 0, 00:12:51.599 "io_qpairs": 0, 00:12:51.599 "current_admin_qpairs": 0, 00:12:51.599 "current_io_qpairs": 0, 00:12:51.599 "pending_bdev_io": 0, 00:12:51.599 "completed_nvme_io": 0, 00:12:51.599 "transports": [ 00:12:51.599 { 00:12:51.599 "trtype": "TCP" 00:12:51.599 } 00:12:51.599 ] 00:12:51.599 }, 00:12:51.599 { 00:12:51.599 "name": "nvmf_tgt_poll_group_2", 00:12:51.599 "admin_qpairs": 0, 00:12:51.599 "io_qpairs": 0, 00:12:51.599 "current_admin_qpairs": 0, 00:12:51.599 "current_io_qpairs": 0, 00:12:51.599 "pending_bdev_io": 0, 00:12:51.599 "completed_nvme_io": 0, 00:12:51.599 "transports": [ 00:12:51.600 { 00:12:51.600 "trtype": "TCP" 00:12:51.600 } 00:12:51.600 ] 00:12:51.600 }, 00:12:51.600 { 00:12:51.600 "name": "nvmf_tgt_poll_group_3", 00:12:51.600 "admin_qpairs": 0, 00:12:51.600 "io_qpairs": 0, 00:12:51.600 "current_admin_qpairs": 0, 00:12:51.600 "current_io_qpairs": 0, 00:12:51.600 "pending_bdev_io": 0, 00:12:51.600 "completed_nvme_io": 0, 00:12:51.600 "transports": [ 00:12:51.600 { 00:12:51.600 "trtype": "TCP" 00:12:51.600 } 00:12:51.600 ] 00:12:51.600 } 00:12:51.600 ] 00:12:51.600 }' 00:12:51.600 03:45:10 -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:12:51.600 03:45:10 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:51.600 03:45:10 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:51.600 03:45:10 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:51.864 03:45:10 -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:12:51.864 03:45:10 -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:12:51.864 03:45:10 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:51.864 03:45:10 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:51.864 03:45:10 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:51.864 03:45:10 -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:12:51.864 03:45:10 -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:12:51.864 03:45:10 -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:12:51.864 03:45:10 -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:12:51.864 03:45:10 -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:12:51.864 03:45:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:51.864 03:45:10 -- common/autotest_common.sh@10 -- # set +x 00:12:51.864 Malloc1 00:12:51.864 03:45:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:51.864 03:45:10 -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:51.864 03:45:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:51.864 03:45:10 -- common/autotest_common.sh@10 -- # set +x 00:12:51.864 
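The jcount/jsum checks traced here are small jq pipelines over the nvmf_get_stats output. A sketch of the same aggregation, assuming scripts/rpc.py is called directly in place of the harness's rpc_cmd wrapper:

stats=$(./scripts/rpc.py nvmf_get_stats)
# Count poll groups (one per core in the 0xF mask, so 4 is expected here).
echo "$stats" | jq '.poll_groups[].name' | wc -l
# Sum a per-poll-group counter across all groups, e.g. admin/io queue pairs.
echo "$stats" | jq '.poll_groups[].admin_qpairs' | awk '{s+=$1} END {print s}'
echo "$stats" | jq '.poll_groups[].io_qpairs'    | awk '{s+=$1} END {print s}'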
03:45:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:51.864 03:45:10 -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:51.864 03:45:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:51.864 03:45:10 -- common/autotest_common.sh@10 -- # set +x 00:12:51.864 03:45:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:51.864 03:45:10 -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:12:51.864 03:45:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:51.864 03:45:10 -- common/autotest_common.sh@10 -- # set +x 00:12:51.864 03:45:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:51.864 03:45:10 -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:51.864 03:45:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:51.864 03:45:10 -- common/autotest_common.sh@10 -- # set +x 00:12:51.864 [2024-07-14 03:45:10.622934] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:51.864 03:45:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:51.864 03:45:10 -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:12:51.864 03:45:10 -- common/autotest_common.sh@640 -- # local es=0 00:12:51.864 03:45:10 -- common/autotest_common.sh@642 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:12:51.864 03:45:10 -- common/autotest_common.sh@628 -- # local arg=nvme 00:12:51.864 03:45:10 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:12:51.864 03:45:10 -- common/autotest_common.sh@632 -- # type -t nvme 00:12:51.864 03:45:10 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:12:51.864 03:45:10 -- common/autotest_common.sh@634 -- # type -P nvme 00:12:51.864 03:45:10 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:12:51.864 03:45:10 -- common/autotest_common.sh@634 -- # arg=/usr/sbin/nvme 00:12:51.864 03:45:10 -- common/autotest_common.sh@634 -- # [[ -x /usr/sbin/nvme ]] 00:12:51.864 03:45:10 -- common/autotest_common.sh@643 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:12:51.864 [2024-07-14 03:45:10.645517] ctrlr.c: 715:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:12:51.864 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:51.864 could not add new controller: failed to write to nvme-fabrics device 00:12:51.864 03:45:10 -- common/autotest_common.sh@643 -- # es=1 00:12:51.865 03:45:10 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:12:51.865 03:45:10 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:12:51.865 03:45:10 -- common/autotest_common.sh@667 -- # 
(( !es == 0 )) 00:12:51.865 03:45:10 -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:51.865 03:45:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:51.865 03:45:10 -- common/autotest_common.sh@10 -- # set +x 00:12:51.865 03:45:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:51.865 03:45:10 -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:52.431 03:45:11 -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:12:52.431 03:45:11 -- common/autotest_common.sh@1177 -- # local i=0 00:12:52.431 03:45:11 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:12:52.431 03:45:11 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:12:52.431 03:45:11 -- common/autotest_common.sh@1184 -- # sleep 2 00:12:54.970 03:45:13 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:12:54.970 03:45:13 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:12:54.970 03:45:13 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:12:54.970 03:45:13 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:12:54.970 03:45:13 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:12:54.970 03:45:13 -- common/autotest_common.sh@1187 -- # return 0 00:12:54.970 03:45:13 -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:54.970 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:54.970 03:45:13 -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:54.970 03:45:13 -- common/autotest_common.sh@1198 -- # local i=0 00:12:54.970 03:45:13 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:12:54.970 03:45:13 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:54.970 03:45:13 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:12:54.970 03:45:13 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:54.970 03:45:13 -- common/autotest_common.sh@1210 -- # return 0 00:12:54.970 03:45:13 -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:54.970 03:45:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:54.970 03:45:13 -- common/autotest_common.sh@10 -- # set +x 00:12:54.970 03:45:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:54.970 03:45:13 -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:54.970 03:45:13 -- common/autotest_common.sh@640 -- # local es=0 00:12:54.970 03:45:13 -- common/autotest_common.sh@642 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:54.970 03:45:13 -- common/autotest_common.sh@628 -- # local arg=nvme 00:12:54.970 03:45:13 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:12:54.970 03:45:13 -- common/autotest_common.sh@632 -- # type -t nvme 00:12:54.970 03:45:13 -- 
common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:12:54.970 03:45:13 -- common/autotest_common.sh@634 -- # type -P nvme 00:12:54.970 03:45:13 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:12:54.970 03:45:13 -- common/autotest_common.sh@634 -- # arg=/usr/sbin/nvme 00:12:54.970 03:45:13 -- common/autotest_common.sh@634 -- # [[ -x /usr/sbin/nvme ]] 00:12:54.970 03:45:13 -- common/autotest_common.sh@643 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:54.970 [2024-07-14 03:45:13.527903] ctrlr.c: 715:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:12:54.970 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:54.970 could not add new controller: failed to write to nvme-fabrics device 00:12:54.970 03:45:13 -- common/autotest_common.sh@643 -- # es=1 00:12:54.970 03:45:13 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:12:54.970 03:45:13 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:12:54.970 03:45:13 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:12:54.970 03:45:13 -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:12:54.970 03:45:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:54.970 03:45:13 -- common/autotest_common.sh@10 -- # set +x 00:12:54.970 03:45:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:54.970 03:45:13 -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:55.536 03:45:14 -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:12:55.536 03:45:14 -- common/autotest_common.sh@1177 -- # local i=0 00:12:55.536 03:45:14 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:12:55.536 03:45:14 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:12:55.536 03:45:14 -- common/autotest_common.sh@1184 -- # sleep 2 00:12:57.441 03:45:16 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:12:57.441 03:45:16 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:12:57.441 03:45:16 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:12:57.441 03:45:16 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:12:57.441 03:45:16 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:12:57.441 03:45:16 -- common/autotest_common.sh@1187 -- # return 0 00:12:57.441 03:45:16 -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:57.441 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:57.441 03:45:16 -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:57.441 03:45:16 -- common/autotest_common.sh@1198 -- # local i=0 00:12:57.441 03:45:16 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:12:57.441 03:45:16 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:57.441 03:45:16 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:12:57.441 03:45:16 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:57.441 03:45:16 -- common/autotest_common.sh@1210 -- # return 0 00:12:57.441 03:45:16 -- 
target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:57.441 03:45:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:57.441 03:45:16 -- common/autotest_common.sh@10 -- # set +x 00:12:57.441 03:45:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:57.442 03:45:16 -- target/rpc.sh@81 -- # seq 1 5 00:12:57.442 03:45:16 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:57.442 03:45:16 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:57.442 03:45:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:57.442 03:45:16 -- common/autotest_common.sh@10 -- # set +x 00:12:57.442 03:45:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:57.442 03:45:16 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:57.442 03:45:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:57.442 03:45:16 -- common/autotest_common.sh@10 -- # set +x 00:12:57.442 [2024-07-14 03:45:16.314923] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:57.442 03:45:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:57.442 03:45:16 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:57.442 03:45:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:57.442 03:45:16 -- common/autotest_common.sh@10 -- # set +x 00:12:57.442 03:45:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:57.442 03:45:16 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:57.442 03:45:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:57.442 03:45:16 -- common/autotest_common.sh@10 -- # set +x 00:12:57.442 03:45:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:57.442 03:45:16 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:58.382 03:45:17 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:58.382 03:45:17 -- common/autotest_common.sh@1177 -- # local i=0 00:12:58.382 03:45:17 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:12:58.382 03:45:17 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:12:58.382 03:45:17 -- common/autotest_common.sh@1184 -- # sleep 2 00:13:00.286 03:45:19 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:13:00.286 03:45:19 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:13:00.286 03:45:19 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:13:00.286 03:45:19 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:13:00.287 03:45:19 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:13:00.287 03:45:19 -- common/autotest_common.sh@1187 -- # return 0 00:13:00.287 03:45:19 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:00.287 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:00.287 03:45:19 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:00.287 03:45:19 -- common/autotest_common.sh@1198 -- # local i=0 00:13:00.287 03:45:19 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:13:00.287 03:45:19 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 
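The access-control portion of the test (target/rpc.sh@58 through @73 above) boils down to the following sequence. This is a sketch using scripts/rpc.py and nvme-cli directly; the host NQN is generated the same way the harness does it, and deriving the host ID by stripping the nqn prefix is one illustrative way to match the values seen in this run.

HOSTNQN=$(nvme gen-hostnqn)          # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
HOSTID=${HOSTNQN##*:}                # keep just the uuid part

# With allow_any_host disabled and no explicit host entry, the connect is rejected
# ("Subsystem ... does not allow host ..."), which is exactly what the NOT wrapper asserts above.
nvme connect --hostnqn="$HOSTNQN" --hostid="$HOSTID" -t tcp \
     -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 && echo "unexpected success"

# Whitelisting the host, or re-enabling allow_any_host, makes the same connect succeed.
./scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 "$HOSTNQN"
# ./scripts/rpc.py nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1   # the alternative path tested above
nvme connect --hostnqn="$HOSTNQN" --hostid="$HOSTID" -t tcp \
     -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
nvme disconnect -n nqn.2016-06.io.spdk:cnode1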
00:13:00.287 03:45:19 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:13:00.287 03:45:19 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:00.287 03:45:19 -- common/autotest_common.sh@1210 -- # return 0 00:13:00.287 03:45:19 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:00.287 03:45:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:00.287 03:45:19 -- common/autotest_common.sh@10 -- # set +x 00:13:00.287 03:45:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:00.287 03:45:19 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:00.287 03:45:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:00.287 03:45:19 -- common/autotest_common.sh@10 -- # set +x 00:13:00.287 03:45:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:00.287 03:45:19 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:00.287 03:45:19 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:00.287 03:45:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:00.287 03:45:19 -- common/autotest_common.sh@10 -- # set +x 00:13:00.287 03:45:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:00.287 03:45:19 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:00.287 03:45:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:00.287 03:45:19 -- common/autotest_common.sh@10 -- # set +x 00:13:00.287 [2024-07-14 03:45:19.131506] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:00.287 03:45:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:00.287 03:45:19 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:00.287 03:45:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:00.287 03:45:19 -- common/autotest_common.sh@10 -- # set +x 00:13:00.287 03:45:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:00.287 03:45:19 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:00.287 03:45:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:00.287 03:45:19 -- common/autotest_common.sh@10 -- # set +x 00:13:00.287 03:45:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:00.287 03:45:19 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:01.223 03:45:19 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:01.223 03:45:19 -- common/autotest_common.sh@1177 -- # local i=0 00:13:01.223 03:45:19 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:13:01.223 03:45:19 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:13:01.223 03:45:19 -- common/autotest_common.sh@1184 -- # sleep 2 00:13:03.129 03:45:21 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:13:03.129 03:45:21 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:13:03.129 03:45:21 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:13:03.129 03:45:21 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:13:03.129 03:45:21 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:13:03.129 03:45:21 -- 
common/autotest_common.sh@1187 -- # return 0 00:13:03.129 03:45:21 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:03.129 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:03.129 03:45:21 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:03.129 03:45:21 -- common/autotest_common.sh@1198 -- # local i=0 00:13:03.129 03:45:21 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:13:03.129 03:45:21 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:03.129 03:45:21 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:13:03.129 03:45:21 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:03.129 03:45:21 -- common/autotest_common.sh@1210 -- # return 0 00:13:03.129 03:45:21 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:03.129 03:45:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:03.129 03:45:21 -- common/autotest_common.sh@10 -- # set +x 00:13:03.129 03:45:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:03.129 03:45:21 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:03.129 03:45:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:03.129 03:45:21 -- common/autotest_common.sh@10 -- # set +x 00:13:03.129 03:45:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:03.129 03:45:21 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:03.129 03:45:21 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:03.129 03:45:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:03.129 03:45:21 -- common/autotest_common.sh@10 -- # set +x 00:13:03.129 03:45:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:03.129 03:45:21 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:03.130 03:45:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:03.130 03:45:21 -- common/autotest_common.sh@10 -- # set +x 00:13:03.130 [2024-07-14 03:45:21.934600] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:03.130 03:45:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:03.130 03:45:21 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:03.130 03:45:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:03.130 03:45:21 -- common/autotest_common.sh@10 -- # set +x 00:13:03.130 03:45:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:03.130 03:45:21 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:03.130 03:45:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:03.130 03:45:21 -- common/autotest_common.sh@10 -- # set +x 00:13:03.130 03:45:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:03.130 03:45:21 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:03.699 03:45:22 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:03.699 03:45:22 -- common/autotest_common.sh@1177 -- # local i=0 00:13:03.699 03:45:22 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:13:03.699 03:45:22 -- common/autotest_common.sh@1179 -- 
# [[ -n '' ]] 00:13:03.699 03:45:22 -- common/autotest_common.sh@1184 -- # sleep 2 00:13:06.235 03:45:24 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:13:06.235 03:45:24 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:13:06.235 03:45:24 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:13:06.235 03:45:24 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:13:06.235 03:45:24 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:13:06.235 03:45:24 -- common/autotest_common.sh@1187 -- # return 0 00:13:06.235 03:45:24 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:06.235 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:06.235 03:45:24 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:06.235 03:45:24 -- common/autotest_common.sh@1198 -- # local i=0 00:13:06.235 03:45:24 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:13:06.235 03:45:24 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:06.235 03:45:24 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:13:06.235 03:45:24 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:06.235 03:45:24 -- common/autotest_common.sh@1210 -- # return 0 00:13:06.235 03:45:24 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:06.235 03:45:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:06.235 03:45:24 -- common/autotest_common.sh@10 -- # set +x 00:13:06.235 03:45:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:06.235 03:45:24 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:06.235 03:45:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:06.235 03:45:24 -- common/autotest_common.sh@10 -- # set +x 00:13:06.235 03:45:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:06.235 03:45:24 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:06.235 03:45:24 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:06.235 03:45:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:06.235 03:45:24 -- common/autotest_common.sh@10 -- # set +x 00:13:06.235 03:45:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:06.235 03:45:24 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:06.235 03:45:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:06.235 03:45:24 -- common/autotest_common.sh@10 -- # set +x 00:13:06.235 [2024-07-14 03:45:24.706469] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:06.235 03:45:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:06.235 03:45:24 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:06.235 03:45:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:06.235 03:45:24 -- common/autotest_common.sh@10 -- # set +x 00:13:06.235 03:45:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:06.235 03:45:24 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:06.235 03:45:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:06.235 03:45:24 -- common/autotest_common.sh@10 -- # set +x 00:13:06.235 03:45:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:06.235 
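The waitforserial / waitforserial_disconnect helpers traced throughout this loop just poll lsblk for the subsystem's serial number. A simplified sketch of that logic; the function names below are illustrative only, and the real helpers in autotest_common.sh add retry limits and device counting:

wait_for_serial() {
    local serial=$1
    # Wait until a block device advertising this serial shows up.
    until [ "$(lsblk -l -o NAME,SERIAL | grep -c "$serial")" -ge 1 ]; do
        sleep 2
    done
}

wait_for_serial_gone() {
    local serial=$1
    # Wait until no block device with this serial is listed any more.
    while lsblk -l -o NAME,SERIAL | grep -q -w "$serial"; do
        sleep 1
    done
}

wait_for_serial SPDKISFASTANDAWESOME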
03:45:24 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:06.495 03:45:25 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:06.495 03:45:25 -- common/autotest_common.sh@1177 -- # local i=0 00:13:06.495 03:45:25 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:13:06.495 03:45:25 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:13:06.495 03:45:25 -- common/autotest_common.sh@1184 -- # sleep 2 00:13:08.403 03:45:27 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:13:08.403 03:45:27 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:13:08.403 03:45:27 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:13:08.663 03:45:27 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:13:08.663 03:45:27 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:13:08.663 03:45:27 -- common/autotest_common.sh@1187 -- # return 0 00:13:08.663 03:45:27 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:08.663 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:08.663 03:45:27 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:08.663 03:45:27 -- common/autotest_common.sh@1198 -- # local i=0 00:13:08.663 03:45:27 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:13:08.663 03:45:27 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:08.663 03:45:27 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:13:08.663 03:45:27 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:08.663 03:45:27 -- common/autotest_common.sh@1210 -- # return 0 00:13:08.663 03:45:27 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:08.663 03:45:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:08.663 03:45:27 -- common/autotest_common.sh@10 -- # set +x 00:13:08.663 03:45:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:08.663 03:45:27 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:08.663 03:45:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:08.663 03:45:27 -- common/autotest_common.sh@10 -- # set +x 00:13:08.663 03:45:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:08.663 03:45:27 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:08.663 03:45:27 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:08.663 03:45:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:08.663 03:45:27 -- common/autotest_common.sh@10 -- # set +x 00:13:08.663 03:45:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:08.663 03:45:27 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:08.663 03:45:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:08.663 03:45:27 -- common/autotest_common.sh@10 -- # set +x 00:13:08.663 [2024-07-14 03:45:27.484475] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:08.663 03:45:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:08.663 03:45:27 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:08.663 
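Each pass of this loop (target/rpc.sh@81 through @94) provisions a fresh subsystem, exercises it over the fabric, and tears it down again. One iteration, sketched with scripts/rpc.py and nvme-cli; nsid 5 and the Malloc1 bdev match the trace above, and HOSTNQN/HOSTID are derived as in the earlier sketch:

NQN=nqn.2016-06.io.spdk:cnode1
./scripts/rpc.py nvmf_create_subsystem "$NQN" -s SPDKISFASTANDAWESOME
./scripts/rpc.py nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
./scripts/rpc.py nvmf_subsystem_add_ns "$NQN" Malloc1 -n 5        # attach the malloc bdev as namespace 5
./scripts/rpc.py nvmf_subsystem_allow_any_host "$NQN"
nvme connect --hostnqn="$HOSTNQN" --hostid="$HOSTID" -t tcp -n "$NQN" -a 10.0.0.2 -s 4420
# ... wait for the serial to appear, then detach ...
nvme disconnect -n "$NQN"
./scripts/rpc.py nvmf_subsystem_remove_ns "$NQN" 5
./scripts/rpc.py nvmf_delete_subsystem "$NQN"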
03:45:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:08.663 03:45:27 -- common/autotest_common.sh@10 -- # set +x 00:13:08.663 03:45:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:08.663 03:45:27 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:08.663 03:45:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:08.663 03:45:27 -- common/autotest_common.sh@10 -- # set +x 00:13:08.663 03:45:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:08.663 03:45:27 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:09.598 03:45:28 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:09.598 03:45:28 -- common/autotest_common.sh@1177 -- # local i=0 00:13:09.598 03:45:28 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:13:09.598 03:45:28 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:13:09.598 03:45:28 -- common/autotest_common.sh@1184 -- # sleep 2 00:13:11.501 03:45:30 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:13:11.501 03:45:30 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:13:11.501 03:45:30 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:13:11.501 03:45:30 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:13:11.501 03:45:30 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:13:11.501 03:45:30 -- common/autotest_common.sh@1187 -- # return 0 00:13:11.501 03:45:30 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:11.501 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:11.501 03:45:30 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:11.501 03:45:30 -- common/autotest_common.sh@1198 -- # local i=0 00:13:11.501 03:45:30 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:13:11.501 03:45:30 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:11.501 03:45:30 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:13:11.501 03:45:30 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:11.501 03:45:30 -- common/autotest_common.sh@1210 -- # return 0 00:13:11.501 03:45:30 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:11.501 03:45:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:11.501 03:45:30 -- common/autotest_common.sh@10 -- # set +x 00:13:11.501 03:45:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:11.501 03:45:30 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:11.501 03:45:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:11.501 03:45:30 -- common/autotest_common.sh@10 -- # set +x 00:13:11.501 03:45:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:11.501 03:45:30 -- target/rpc.sh@99 -- # seq 1 5 00:13:11.501 03:45:30 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:11.501 03:45:30 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:11.501 03:45:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:11.501 03:45:30 -- common/autotest_common.sh@10 -- # set +x 00:13:11.501 03:45:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:11.501 03:45:30 
-- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:11.501 03:45:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:11.501 03:45:30 -- common/autotest_common.sh@10 -- # set +x 00:13:11.501 [2024-07-14 03:45:30.309770] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:11.501 03:45:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:11.501 03:45:30 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:11.501 03:45:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:11.501 03:45:30 -- common/autotest_common.sh@10 -- # set +x 00:13:11.501 03:45:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:11.501 03:45:30 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:11.501 03:45:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:11.501 03:45:30 -- common/autotest_common.sh@10 -- # set +x 00:13:11.501 03:45:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:11.501 03:45:30 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:11.501 03:45:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:11.501 03:45:30 -- common/autotest_common.sh@10 -- # set +x 00:13:11.501 03:45:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:11.501 03:45:30 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:11.501 03:45:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:11.501 03:45:30 -- common/autotest_common.sh@10 -- # set +x 00:13:11.501 03:45:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:11.501 03:45:30 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:11.501 03:45:30 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:11.501 03:45:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:11.501 03:45:30 -- common/autotest_common.sh@10 -- # set +x 00:13:11.501 03:45:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:11.501 03:45:30 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:11.501 03:45:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:11.501 03:45:30 -- common/autotest_common.sh@10 -- # set +x 00:13:11.501 [2024-07-14 03:45:30.357840] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:11.501 03:45:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:11.501 03:45:30 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:11.501 03:45:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:11.501 03:45:30 -- common/autotest_common.sh@10 -- # set +x 00:13:11.501 03:45:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:11.501 03:45:30 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:11.501 03:45:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:11.501 03:45:30 -- common/autotest_common.sh@10 -- # set +x 00:13:11.501 03:45:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:11.501 03:45:30 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:11.501 03:45:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:11.501 03:45:30 -- 
common/autotest_common.sh@10 -- # set +x 00:13:11.501 03:45:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:11.501 03:45:30 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:11.501 03:45:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:11.501 03:45:30 -- common/autotest_common.sh@10 -- # set +x 00:13:11.501 03:45:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:11.501 03:45:30 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:11.501 03:45:30 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:11.501 03:45:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:11.501 03:45:30 -- common/autotest_common.sh@10 -- # set +x 00:13:11.501 03:45:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:11.501 03:45:30 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:11.501 03:45:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:11.501 03:45:30 -- common/autotest_common.sh@10 -- # set +x 00:13:11.501 [2024-07-14 03:45:30.406036] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:11.501 03:45:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:11.501 03:45:30 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:11.501 03:45:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:11.501 03:45:30 -- common/autotest_common.sh@10 -- # set +x 00:13:11.501 03:45:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:11.501 03:45:30 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:11.501 03:45:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:11.501 03:45:30 -- common/autotest_common.sh@10 -- # set +x 00:13:11.501 03:45:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:11.501 03:45:30 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:11.501 03:45:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:11.501 03:45:30 -- common/autotest_common.sh@10 -- # set +x 00:13:11.501 03:45:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:11.501 03:45:30 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:11.501 03:45:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:11.501 03:45:30 -- common/autotest_common.sh@10 -- # set +x 00:13:11.767 03:45:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:11.767 03:45:30 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:11.767 03:45:30 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:11.767 03:45:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:11.767 03:45:30 -- common/autotest_common.sh@10 -- # set +x 00:13:11.767 03:45:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:11.767 03:45:30 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:11.767 03:45:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:11.767 03:45:30 -- common/autotest_common.sh@10 -- # set +x 00:13:11.767 [2024-07-14 03:45:30.454209] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:11.767 03:45:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:11.767 
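The final loop (target/rpc.sh@99 through @107) repeats the same provisioning purely over RPC, with no fabric connection, to exercise subsystem create/delete churn. One iteration, sketched the same way; here the namespace is added without -n, so it receives nsid 1, which is what the matching remove_ns call uses:

NQN=nqn.2016-06.io.spdk:cnode1
./scripts/rpc.py nvmf_create_subsystem "$NQN" -s SPDKISFASTANDAWESOME
./scripts/rpc.py nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
./scripts/rpc.py nvmf_subsystem_add_ns "$NQN" Malloc1             # auto-assigned nsid 1
./scripts/rpc.py nvmf_subsystem_allow_any_host "$NQN"
./scripts/rpc.py nvmf_subsystem_remove_ns "$NQN" 1
./scripts/rpc.py nvmf_delete_subsystem "$NQN"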
03:45:30 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:11.767 03:45:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:11.767 03:45:30 -- common/autotest_common.sh@10 -- # set +x 00:13:11.767 03:45:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:11.767 03:45:30 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:11.767 03:45:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:11.767 03:45:30 -- common/autotest_common.sh@10 -- # set +x 00:13:11.767 03:45:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:11.767 03:45:30 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:11.767 03:45:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:11.767 03:45:30 -- common/autotest_common.sh@10 -- # set +x 00:13:11.767 03:45:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:11.767 03:45:30 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:11.767 03:45:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:11.767 03:45:30 -- common/autotest_common.sh@10 -- # set +x 00:13:11.767 03:45:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:11.767 03:45:30 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:11.767 03:45:30 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:11.767 03:45:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:11.767 03:45:30 -- common/autotest_common.sh@10 -- # set +x 00:13:11.767 03:45:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:11.767 03:45:30 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:11.767 03:45:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:11.767 03:45:30 -- common/autotest_common.sh@10 -- # set +x 00:13:11.767 [2024-07-14 03:45:30.502371] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:11.767 03:45:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:11.767 03:45:30 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:11.767 03:45:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:11.767 03:45:30 -- common/autotest_common.sh@10 -- # set +x 00:13:11.767 03:45:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:11.767 03:45:30 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:11.767 03:45:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:11.767 03:45:30 -- common/autotest_common.sh@10 -- # set +x 00:13:11.767 03:45:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:11.767 03:45:30 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:11.767 03:45:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:11.767 03:45:30 -- common/autotest_common.sh@10 -- # set +x 00:13:11.767 03:45:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:11.767 03:45:30 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:11.767 03:45:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:11.767 03:45:30 -- common/autotest_common.sh@10 -- # set +x 00:13:11.767 03:45:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:11.767 03:45:30 -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 
00:13:11.767 03:45:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:11.767 03:45:30 -- common/autotest_common.sh@10 -- # set +x 00:13:11.767 03:45:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:11.767 03:45:30 -- target/rpc.sh@110 -- # stats='{ 00:13:11.767 "tick_rate": 2700000000, 00:13:11.767 "poll_groups": [ 00:13:11.767 { 00:13:11.767 "name": "nvmf_tgt_poll_group_0", 00:13:11.767 "admin_qpairs": 2, 00:13:11.767 "io_qpairs": 84, 00:13:11.767 "current_admin_qpairs": 0, 00:13:11.767 "current_io_qpairs": 0, 00:13:11.767 "pending_bdev_io": 0, 00:13:11.767 "completed_nvme_io": 232, 00:13:11.767 "transports": [ 00:13:11.767 { 00:13:11.767 "trtype": "TCP" 00:13:11.767 } 00:13:11.767 ] 00:13:11.767 }, 00:13:11.767 { 00:13:11.767 "name": "nvmf_tgt_poll_group_1", 00:13:11.767 "admin_qpairs": 2, 00:13:11.767 "io_qpairs": 84, 00:13:11.767 "current_admin_qpairs": 0, 00:13:11.767 "current_io_qpairs": 0, 00:13:11.767 "pending_bdev_io": 0, 00:13:11.767 "completed_nvme_io": 184, 00:13:11.767 "transports": [ 00:13:11.767 { 00:13:11.767 "trtype": "TCP" 00:13:11.767 } 00:13:11.767 ] 00:13:11.767 }, 00:13:11.767 { 00:13:11.767 "name": "nvmf_tgt_poll_group_2", 00:13:11.767 "admin_qpairs": 1, 00:13:11.767 "io_qpairs": 84, 00:13:11.767 "current_admin_qpairs": 0, 00:13:11.767 "current_io_qpairs": 0, 00:13:11.767 "pending_bdev_io": 0, 00:13:11.767 "completed_nvme_io": 86, 00:13:11.767 "transports": [ 00:13:11.767 { 00:13:11.767 "trtype": "TCP" 00:13:11.767 } 00:13:11.767 ] 00:13:11.767 }, 00:13:11.767 { 00:13:11.767 "name": "nvmf_tgt_poll_group_3", 00:13:11.767 "admin_qpairs": 2, 00:13:11.767 "io_qpairs": 84, 00:13:11.767 "current_admin_qpairs": 0, 00:13:11.767 "current_io_qpairs": 0, 00:13:11.767 "pending_bdev_io": 0, 00:13:11.767 "completed_nvme_io": 184, 00:13:11.767 "transports": [ 00:13:11.767 { 00:13:11.767 "trtype": "TCP" 00:13:11.767 } 00:13:11.767 ] 00:13:11.767 } 00:13:11.767 ] 00:13:11.767 }' 00:13:11.767 03:45:30 -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:13:11.767 03:45:30 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:13:11.767 03:45:30 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:13:11.767 03:45:30 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:11.767 03:45:30 -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:13:11.767 03:45:30 -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:13:11.767 03:45:30 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:13:11.767 03:45:30 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:13:11.767 03:45:30 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:11.767 03:45:30 -- target/rpc.sh@113 -- # (( 336 > 0 )) 00:13:11.767 03:45:30 -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:13:11.767 03:45:30 -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:13:11.767 03:45:30 -- target/rpc.sh@123 -- # nvmftestfini 00:13:11.767 03:45:30 -- nvmf/common.sh@476 -- # nvmfcleanup 00:13:11.767 03:45:30 -- nvmf/common.sh@116 -- # sync 00:13:11.767 03:45:30 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:13:11.767 03:45:30 -- nvmf/common.sh@119 -- # set +e 00:13:11.767 03:45:30 -- nvmf/common.sh@120 -- # for i in {1..20} 00:13:11.767 03:45:30 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:13:11.767 rmmod nvme_tcp 00:13:11.767 rmmod nvme_fabrics 00:13:11.767 rmmod nvme_keyring 00:13:11.767 03:45:30 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:13:11.767 03:45:30 -- nvmf/common.sh@123 -- # set -e 00:13:11.767 03:45:30 -- 
nvmf/common.sh@124 -- # return 0 00:13:11.767 03:45:30 -- nvmf/common.sh@477 -- # '[' -n 2326358 ']' 00:13:11.767 03:45:30 -- nvmf/common.sh@478 -- # killprocess 2326358 00:13:11.767 03:45:30 -- common/autotest_common.sh@926 -- # '[' -z 2326358 ']' 00:13:11.767 03:45:30 -- common/autotest_common.sh@930 -- # kill -0 2326358 00:13:11.767 03:45:30 -- common/autotest_common.sh@931 -- # uname 00:13:11.767 03:45:30 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:11.767 03:45:30 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2326358 00:13:12.072 03:45:30 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:13:12.072 03:45:30 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:13:12.072 03:45:30 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2326358' 00:13:12.072 killing process with pid 2326358 00:13:12.072 03:45:30 -- common/autotest_common.sh@945 -- # kill 2326358 00:13:12.072 03:45:30 -- common/autotest_common.sh@950 -- # wait 2326358 00:13:12.072 03:45:30 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:13:12.072 03:45:30 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:13:12.072 03:45:30 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:13:12.072 03:45:30 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:12.072 03:45:30 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:13:12.072 03:45:30 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:12.072 03:45:30 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:12.072 03:45:30 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:14.613 03:45:33 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:13:14.613 00:13:14.613 real 0m25.920s 00:13:14.613 user 1m25.346s 00:13:14.613 sys 0m4.034s 00:13:14.613 03:45:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:14.613 03:45:33 -- common/autotest_common.sh@10 -- # set +x 00:13:14.613 ************************************ 00:13:14.613 END TEST nvmf_rpc 00:13:14.613 ************************************ 00:13:14.613 03:45:33 -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:13:14.613 03:45:33 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:13:14.613 03:45:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:14.613 03:45:33 -- common/autotest_common.sh@10 -- # set +x 00:13:14.613 ************************************ 00:13:14.613 START TEST nvmf_invalid 00:13:14.613 ************************************ 00:13:14.613 03:45:33 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:13:14.613 * Looking for test storage... 
00:13:14.613 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:14.613 03:45:33 -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:14.613 03:45:33 -- nvmf/common.sh@7 -- # uname -s 00:13:14.613 03:45:33 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:14.613 03:45:33 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:14.613 03:45:33 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:14.613 03:45:33 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:14.613 03:45:33 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:14.613 03:45:33 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:14.613 03:45:33 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:14.613 03:45:33 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:14.613 03:45:33 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:14.613 03:45:33 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:14.613 03:45:33 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:14.613 03:45:33 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:14.613 03:45:33 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:14.613 03:45:33 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:14.613 03:45:33 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:14.613 03:45:33 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:14.613 03:45:33 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:14.613 03:45:33 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:14.613 03:45:33 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:14.613 03:45:33 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:14.613 03:45:33 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:14.613 03:45:33 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:14.613 03:45:33 -- paths/export.sh@5 -- # export PATH 00:13:14.613 03:45:33 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:14.613 03:45:33 -- nvmf/common.sh@46 -- # : 0 00:13:14.613 03:45:33 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:14.613 03:45:33 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:14.613 03:45:33 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:14.613 03:45:33 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:14.613 03:45:33 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:14.613 03:45:33 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:13:14.613 03:45:33 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:14.613 03:45:33 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:14.613 03:45:33 -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:13:14.613 03:45:33 -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:14.613 03:45:33 -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:13:14.613 03:45:33 -- target/invalid.sh@14 -- # target=foobar 00:13:14.613 03:45:33 -- target/invalid.sh@16 -- # RANDOM=0 00:13:14.613 03:45:33 -- target/invalid.sh@34 -- # nvmftestinit 00:13:14.613 03:45:33 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:14.613 03:45:33 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:14.613 03:45:33 -- nvmf/common.sh@436 -- # prepare_net_devs 00:13:14.613 03:45:33 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:14.613 03:45:33 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:14.613 03:45:33 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:14.613 03:45:33 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:14.613 03:45:33 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:14.613 03:45:33 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:13:14.613 03:45:33 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:13:14.613 03:45:33 -- nvmf/common.sh@284 -- # xtrace_disable 00:13:14.613 03:45:33 -- common/autotest_common.sh@10 -- # set +x 00:13:16.514 03:45:35 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:13:16.514 03:45:35 -- nvmf/common.sh@290 -- # pci_devs=() 00:13:16.514 03:45:35 -- nvmf/common.sh@290 -- # local -a pci_devs 00:13:16.514 03:45:35 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:13:16.514 03:45:35 -- 
nvmf/common.sh@291 -- # local -a pci_net_devs 00:13:16.514 03:45:35 -- nvmf/common.sh@292 -- # pci_drivers=() 00:13:16.514 03:45:35 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:13:16.514 03:45:35 -- nvmf/common.sh@294 -- # net_devs=() 00:13:16.514 03:45:35 -- nvmf/common.sh@294 -- # local -ga net_devs 00:13:16.514 03:45:35 -- nvmf/common.sh@295 -- # e810=() 00:13:16.514 03:45:35 -- nvmf/common.sh@295 -- # local -ga e810 00:13:16.514 03:45:35 -- nvmf/common.sh@296 -- # x722=() 00:13:16.514 03:45:35 -- nvmf/common.sh@296 -- # local -ga x722 00:13:16.514 03:45:35 -- nvmf/common.sh@297 -- # mlx=() 00:13:16.514 03:45:35 -- nvmf/common.sh@297 -- # local -ga mlx 00:13:16.514 03:45:35 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:16.514 03:45:35 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:16.514 03:45:35 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:16.514 03:45:35 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:16.514 03:45:35 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:16.514 03:45:35 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:16.514 03:45:35 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:16.514 03:45:35 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:16.514 03:45:35 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:16.514 03:45:35 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:16.514 03:45:35 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:16.514 03:45:35 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:13:16.514 03:45:35 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:13:16.514 03:45:35 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:13:16.514 03:45:35 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:13:16.514 03:45:35 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:13:16.514 03:45:35 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:13:16.514 03:45:35 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:16.514 03:45:35 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:16.514 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:16.514 03:45:35 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:16.514 03:45:35 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:16.514 03:45:35 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:16.514 03:45:35 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:16.514 03:45:35 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:13:16.514 03:45:35 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:16.514 03:45:35 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:16.514 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:16.514 03:45:35 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:16.514 03:45:35 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:16.514 03:45:35 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:16.514 03:45:35 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:16.514 03:45:35 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:13:16.514 03:45:35 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:13:16.514 03:45:35 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:13:16.514 03:45:35 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:13:16.514 03:45:35 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:16.514 
03:45:35 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:16.514 03:45:35 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:16.514 03:45:35 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:16.514 03:45:35 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:16.514 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:16.514 03:45:35 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:13:16.514 03:45:35 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:16.514 03:45:35 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:16.514 03:45:35 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:16.514 03:45:35 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:16.514 03:45:35 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:16.514 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:16.514 03:45:35 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:13:16.514 03:45:35 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:13:16.514 03:45:35 -- nvmf/common.sh@402 -- # is_hw=yes 00:13:16.514 03:45:35 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:13:16.514 03:45:35 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:13:16.514 03:45:35 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:13:16.514 03:45:35 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:16.514 03:45:35 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:16.514 03:45:35 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:16.514 03:45:35 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:13:16.514 03:45:35 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:16.514 03:45:35 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:16.514 03:45:35 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:13:16.514 03:45:35 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:16.514 03:45:35 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:16.514 03:45:35 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:13:16.514 03:45:35 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:13:16.514 03:45:35 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:13:16.514 03:45:35 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:16.514 03:45:35 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:16.514 03:45:35 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:16.514 03:45:35 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:13:16.514 03:45:35 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:16.514 03:45:35 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:16.514 03:45:35 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:16.514 03:45:35 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:13:16.514 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:16.514 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.231 ms 00:13:16.514 00:13:16.514 --- 10.0.0.2 ping statistics --- 00:13:16.514 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:16.514 rtt min/avg/max/mdev = 0.231/0.231/0.231/0.000 ms 00:13:16.514 03:45:35 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:16.514 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:16.514 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.221 ms 00:13:16.514 00:13:16.514 --- 10.0.0.1 ping statistics --- 00:13:16.514 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:16.514 rtt min/avg/max/mdev = 0.221/0.221/0.221/0.000 ms 00:13:16.514 03:45:35 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:16.514 03:45:35 -- nvmf/common.sh@410 -- # return 0 00:13:16.514 03:45:35 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:16.514 03:45:35 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:16.514 03:45:35 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:16.514 03:45:35 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:16.514 03:45:35 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:16.514 03:45:35 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:16.515 03:45:35 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:16.515 03:45:35 -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:13:16.515 03:45:35 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:16.515 03:45:35 -- common/autotest_common.sh@712 -- # xtrace_disable 00:13:16.515 03:45:35 -- common/autotest_common.sh@10 -- # set +x 00:13:16.515 03:45:35 -- nvmf/common.sh@469 -- # nvmfpid=2331074 00:13:16.515 03:45:35 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:16.515 03:45:35 -- nvmf/common.sh@470 -- # waitforlisten 2331074 00:13:16.515 03:45:35 -- common/autotest_common.sh@819 -- # '[' -z 2331074 ']' 00:13:16.515 03:45:35 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:16.515 03:45:35 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:16.515 03:45:35 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:16.515 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:16.515 03:45:35 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:16.515 03:45:35 -- common/autotest_common.sh@10 -- # set +x 00:13:16.773 [2024-07-14 03:45:35.454129] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:13:16.773 [2024-07-14 03:45:35.454227] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:16.773 EAL: No free 2048 kB hugepages reported on node 1 00:13:16.773 [2024-07-14 03:45:35.532361] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:16.773 [2024-07-14 03:45:35.625935] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:16.773 [2024-07-14 03:45:35.626094] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:16.773 [2024-07-14 03:45:35.626113] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:16.773 [2024-07-14 03:45:35.626127] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
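Condensed, the nvmftestinit sequence just traced moves one NIC port into a private network namespace, addresses both ends on 10.0.0.0/24, opens TCP port 4420, and verifies reachability in both directions before launching the target inside the namespace. A sketch of those steps, assuming root privileges and two ports named as in this run (cvl_0_0/cvl_0_1 come from this host's E810 NICs; substitute your own interfaces):

    #!/usr/bin/env bash
    # Sketch of the target-namespace wiring performed by nvmftestinit above.
    tgt_if=cvl_0_0; ini_if=cvl_0_1; ns=cvl_0_0_ns_spdk
    ip netns add "$ns"
    ip link set "$tgt_if" netns "$ns"
    ip addr add 10.0.0.1/24 dev "$ini_if"
    ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$tgt_if"
    ip link set "$ini_if" up
    ip netns exec "$ns" ip link set "$tgt_if" up
    ip netns exec "$ns" ip link set lo up
    iptables -I INPUT 1 -i "$ini_if" -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                        # host side -> target namespace
    ip netns exec "$ns" ping -c 1 10.0.0.1    # target namespace -> host side
    # The target is then started inside the namespace, as in the trace:
    # ip netns exec "$ns" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF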
00:13:16.773 [2024-07-14 03:45:35.626207] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:16.773 [2024-07-14 03:45:35.626267] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:16.773 [2024-07-14 03:45:35.626293] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:16.773 [2024-07-14 03:45:35.626297] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:17.709 03:45:36 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:17.709 03:45:36 -- common/autotest_common.sh@852 -- # return 0 00:13:17.709 03:45:36 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:17.709 03:45:36 -- common/autotest_common.sh@718 -- # xtrace_disable 00:13:17.709 03:45:36 -- common/autotest_common.sh@10 -- # set +x 00:13:17.709 03:45:36 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:17.709 03:45:36 -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:13:17.709 03:45:36 -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode13395 00:13:17.967 [2024-07-14 03:45:36.673166] nvmf_rpc.c: 401:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:13:17.967 03:45:36 -- target/invalid.sh@40 -- # out='request: 00:13:17.967 { 00:13:17.967 "nqn": "nqn.2016-06.io.spdk:cnode13395", 00:13:17.967 "tgt_name": "foobar", 00:13:17.967 "method": "nvmf_create_subsystem", 00:13:17.967 "req_id": 1 00:13:17.967 } 00:13:17.967 Got JSON-RPC error response 00:13:17.967 response: 00:13:17.967 { 00:13:17.967 "code": -32603, 00:13:17.967 "message": "Unable to find target foobar" 00:13:17.967 }' 00:13:17.967 03:45:36 -- target/invalid.sh@41 -- # [[ request: 00:13:17.967 { 00:13:17.967 "nqn": "nqn.2016-06.io.spdk:cnode13395", 00:13:17.967 "tgt_name": "foobar", 00:13:17.967 "method": "nvmf_create_subsystem", 00:13:17.967 "req_id": 1 00:13:17.967 } 00:13:17.967 Got JSON-RPC error response 00:13:17.967 response: 00:13:17.967 { 00:13:17.967 "code": -32603, 00:13:17.967 "message": "Unable to find target foobar" 00:13:17.967 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:13:17.967 03:45:36 -- target/invalid.sh@45 -- # echo -e '\x1f' 00:13:17.967 03:45:36 -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode25725 00:13:18.226 [2024-07-14 03:45:36.934036] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode25725: invalid serial number 'SPDKISFASTANDAWESOME' 00:13:18.226 03:45:36 -- target/invalid.sh@45 -- # out='request: 00:13:18.226 { 00:13:18.226 "nqn": "nqn.2016-06.io.spdk:cnode25725", 00:13:18.226 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:13:18.226 "method": "nvmf_create_subsystem", 00:13:18.226 "req_id": 1 00:13:18.226 } 00:13:18.226 Got JSON-RPC error response 00:13:18.226 response: 00:13:18.226 { 00:13:18.226 "code": -32602, 00:13:18.226 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:13:18.226 }' 00:13:18.226 03:45:36 -- target/invalid.sh@46 -- # [[ request: 00:13:18.226 { 00:13:18.226 "nqn": "nqn.2016-06.io.spdk:cnode25725", 00:13:18.226 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:13:18.226 "method": "nvmf_create_subsystem", 00:13:18.226 "req_id": 1 00:13:18.226 } 00:13:18.226 Got JSON-RPC error response 00:13:18.226 response: 00:13:18.226 { 
00:13:18.226 "code": -32602, 00:13:18.226 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:13:18.226 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:13:18.226 03:45:36 -- target/invalid.sh@50 -- # echo -e '\x1f' 00:13:18.226 03:45:36 -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode15951 00:13:18.485 [2024-07-14 03:45:37.170781] nvmf_rpc.c: 427:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode15951: invalid model number 'SPDK_Controller' 00:13:18.485 03:45:37 -- target/invalid.sh@50 -- # out='request: 00:13:18.485 { 00:13:18.485 "nqn": "nqn.2016-06.io.spdk:cnode15951", 00:13:18.485 "model_number": "SPDK_Controller\u001f", 00:13:18.485 "method": "nvmf_create_subsystem", 00:13:18.485 "req_id": 1 00:13:18.485 } 00:13:18.485 Got JSON-RPC error response 00:13:18.485 response: 00:13:18.485 { 00:13:18.485 "code": -32602, 00:13:18.485 "message": "Invalid MN SPDK_Controller\u001f" 00:13:18.485 }' 00:13:18.485 03:45:37 -- target/invalid.sh@51 -- # [[ request: 00:13:18.485 { 00:13:18.485 "nqn": "nqn.2016-06.io.spdk:cnode15951", 00:13:18.485 "model_number": "SPDK_Controller\u001f", 00:13:18.485 "method": "nvmf_create_subsystem", 00:13:18.485 "req_id": 1 00:13:18.485 } 00:13:18.485 Got JSON-RPC error response 00:13:18.485 response: 00:13:18.485 { 00:13:18.485 "code": -32602, 00:13:18.485 "message": "Invalid MN SPDK_Controller\u001f" 00:13:18.485 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:13:18.485 03:45:37 -- target/invalid.sh@54 -- # gen_random_s 21 00:13:18.485 03:45:37 -- target/invalid.sh@19 -- # local length=21 ll 00:13:18.485 03:45:37 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:13:18.485 03:45:37 -- target/invalid.sh@21 -- # local chars 00:13:18.485 03:45:37 -- target/invalid.sh@22 -- # local string 00:13:18.485 03:45:37 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:13:18.485 03:45:37 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:18.485 03:45:37 -- target/invalid.sh@25 -- # printf %x 112 00:13:18.485 03:45:37 -- target/invalid.sh@25 -- # echo -e '\x70' 00:13:18.485 03:45:37 -- target/invalid.sh@25 -- # string+=p 00:13:18.485 03:45:37 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:18.485 03:45:37 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:18.485 03:45:37 -- target/invalid.sh@25 -- # printf %x 102 00:13:18.485 03:45:37 -- target/invalid.sh@25 -- # echo -e '\x66' 00:13:18.485 03:45:37 -- target/invalid.sh@25 -- # string+=f 00:13:18.485 03:45:37 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:18.485 03:45:37 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:18.485 03:45:37 -- target/invalid.sh@25 -- # printf %x 79 00:13:18.485 03:45:37 -- target/invalid.sh@25 -- # echo -e '\x4f' 00:13:18.485 03:45:37 -- target/invalid.sh@25 -- # string+=O 00:13:18.485 03:45:37 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:18.485 03:45:37 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:18.485 03:45:37 -- target/invalid.sh@25 -- # printf %x 70 00:13:18.485 03:45:37 -- 
target/invalid.sh@25 -- # echo -e '\x46' 00:13:18.485 03:45:37 -- target/invalid.sh@25 -- # string+=F 00:13:18.485 03:45:37 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:18.485 03:45:37 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:18.485 03:45:37 -- target/invalid.sh@25 -- # printf %x 120 00:13:18.485 03:45:37 -- target/invalid.sh@25 -- # echo -e '\x78' 00:13:18.485 03:45:37 -- target/invalid.sh@25 -- # string+=x 00:13:18.485 03:45:37 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:18.485 03:45:37 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:18.485 03:45:37 -- target/invalid.sh@25 -- # printf %x 36 00:13:18.485 03:45:37 -- target/invalid.sh@25 -- # echo -e '\x24' 00:13:18.485 03:45:37 -- target/invalid.sh@25 -- # string+='$' 00:13:18.485 03:45:37 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:18.485 03:45:37 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:18.485 03:45:37 -- target/invalid.sh@25 -- # printf %x 56 00:13:18.485 03:45:37 -- target/invalid.sh@25 -- # echo -e '\x38' 00:13:18.485 03:45:37 -- target/invalid.sh@25 -- # string+=8 00:13:18.485 03:45:37 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:18.485 03:45:37 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:18.485 03:45:37 -- target/invalid.sh@25 -- # printf %x 116 00:13:18.485 03:45:37 -- target/invalid.sh@25 -- # echo -e '\x74' 00:13:18.485 03:45:37 -- target/invalid.sh@25 -- # string+=t 00:13:18.485 03:45:37 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:18.485 03:45:37 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:18.485 03:45:37 -- target/invalid.sh@25 -- # printf %x 102 00:13:18.485 03:45:37 -- target/invalid.sh@25 -- # echo -e '\x66' 00:13:18.485 03:45:37 -- target/invalid.sh@25 -- # string+=f 00:13:18.485 03:45:37 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:18.485 03:45:37 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:18.485 03:45:37 -- target/invalid.sh@25 -- # printf %x 53 00:13:18.485 03:45:37 -- target/invalid.sh@25 -- # echo -e '\x35' 00:13:18.485 03:45:37 -- target/invalid.sh@25 -- # string+=5 00:13:18.485 03:45:37 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:18.485 03:45:37 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:18.485 03:45:37 -- target/invalid.sh@25 -- # printf %x 51 00:13:18.485 03:45:37 -- target/invalid.sh@25 -- # echo -e '\x33' 00:13:18.485 03:45:37 -- target/invalid.sh@25 -- # string+=3 00:13:18.485 03:45:37 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:18.485 03:45:37 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:18.485 03:45:37 -- target/invalid.sh@25 -- # printf %x 103 00:13:18.485 03:45:37 -- target/invalid.sh@25 -- # echo -e '\x67' 00:13:18.485 03:45:37 -- target/invalid.sh@25 -- # string+=g 00:13:18.485 03:45:37 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:18.485 03:45:37 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:18.485 03:45:37 -- target/invalid.sh@25 -- # printf %x 86 00:13:18.485 03:45:37 -- target/invalid.sh@25 -- # echo -e '\x56' 00:13:18.485 03:45:37 -- target/invalid.sh@25 -- # string+=V 00:13:18.485 03:45:37 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:18.485 03:45:37 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:18.485 03:45:37 -- target/invalid.sh@25 -- # printf %x 73 00:13:18.485 03:45:37 -- target/invalid.sh@25 -- # echo -e '\x49' 00:13:18.485 03:45:37 -- target/invalid.sh@25 -- # string+=I 00:13:18.485 03:45:37 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:18.485 03:45:37 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:18.485 03:45:37 -- target/invalid.sh@25 -- # printf %x 68 00:13:18.485 03:45:37 -- 
target/invalid.sh@25 -- # echo -e '\x44' 00:13:18.485 03:45:37 -- target/invalid.sh@25 -- # string+=D 00:13:18.485 03:45:37 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:18.485 03:45:37 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:18.485 03:45:37 -- target/invalid.sh@25 -- # printf %x 105 00:13:18.485 03:45:37 -- target/invalid.sh@25 -- # echo -e '\x69' 00:13:18.485 03:45:37 -- target/invalid.sh@25 -- # string+=i 00:13:18.485 03:45:37 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:18.485 03:45:37 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:18.485 03:45:37 -- target/invalid.sh@25 -- # printf %x 38 00:13:18.485 03:45:37 -- target/invalid.sh@25 -- # echo -e '\x26' 00:13:18.485 03:45:37 -- target/invalid.sh@25 -- # string+='&' 00:13:18.485 03:45:37 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:18.485 03:45:37 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:18.485 03:45:37 -- target/invalid.sh@25 -- # printf %x 44 00:13:18.485 03:45:37 -- target/invalid.sh@25 -- # echo -e '\x2c' 00:13:18.485 03:45:37 -- target/invalid.sh@25 -- # string+=, 00:13:18.485 03:45:37 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:18.485 03:45:37 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:18.485 03:45:37 -- target/invalid.sh@25 -- # printf %x 37 00:13:18.485 03:45:37 -- target/invalid.sh@25 -- # echo -e '\x25' 00:13:18.485 03:45:37 -- target/invalid.sh@25 -- # string+=% 00:13:18.485 03:45:37 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:18.485 03:45:37 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:18.485 03:45:37 -- target/invalid.sh@25 -- # printf %x 118 00:13:18.485 03:45:37 -- target/invalid.sh@25 -- # echo -e '\x76' 00:13:18.485 03:45:37 -- target/invalid.sh@25 -- # string+=v 00:13:18.485 03:45:37 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:18.485 03:45:37 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:18.485 03:45:37 -- target/invalid.sh@25 -- # printf %x 69 00:13:18.485 03:45:37 -- target/invalid.sh@25 -- # echo -e '\x45' 00:13:18.485 03:45:37 -- target/invalid.sh@25 -- # string+=E 00:13:18.485 03:45:37 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:18.485 03:45:37 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:18.485 03:45:37 -- target/invalid.sh@28 -- # [[ p == \- ]] 00:13:18.485 03:45:37 -- target/invalid.sh@31 -- # echo 'pfOFx$8tf53gVIDi&,%vE' 00:13:18.485 03:45:37 -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'pfOFx$8tf53gVIDi&,%vE' nqn.2016-06.io.spdk:cnode14269 00:13:18.744 [2024-07-14 03:45:37.483782] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode14269: invalid serial number 'pfOFx$8tf53gVIDi&,%vE' 00:13:18.744 03:45:37 -- target/invalid.sh@54 -- # out='request: 00:13:18.744 { 00:13:18.744 "nqn": "nqn.2016-06.io.spdk:cnode14269", 00:13:18.744 "serial_number": "pfOFx$8tf53gVIDi&,%vE", 00:13:18.744 "method": "nvmf_create_subsystem", 00:13:18.744 "req_id": 1 00:13:18.744 } 00:13:18.744 Got JSON-RPC error response 00:13:18.744 response: 00:13:18.744 { 00:13:18.744 "code": -32602, 00:13:18.744 "message": "Invalid SN pfOFx$8tf53gVIDi&,%vE" 00:13:18.744 }' 00:13:18.744 03:45:37 -- target/invalid.sh@55 -- # [[ request: 00:13:18.744 { 00:13:18.744 "nqn": "nqn.2016-06.io.spdk:cnode14269", 00:13:18.744 "serial_number": "pfOFx$8tf53gVIDi&,%vE", 00:13:18.744 "method": "nvmf_create_subsystem", 00:13:18.744 "req_id": 1 00:13:18.744 } 00:13:18.744 Got JSON-RPC error response 00:13:18.744 response: 00:13:18.744 { 00:13:18.744 "code": -32602, 00:13:18.744 
"message": "Invalid SN pfOFx$8tf53gVIDi&,%vE" 00:13:18.744 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:13:18.744 03:45:37 -- target/invalid.sh@58 -- # gen_random_s 41 00:13:18.745 03:45:37 -- target/invalid.sh@19 -- # local length=41 ll 00:13:18.745 03:45:37 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:13:18.745 03:45:37 -- target/invalid.sh@21 -- # local chars 00:13:18.745 03:45:37 -- target/invalid.sh@22 -- # local string 00:13:18.745 03:45:37 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:13:18.745 03:45:37 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:18.745 03:45:37 -- target/invalid.sh@25 -- # printf %x 127 00:13:18.745 03:45:37 -- target/invalid.sh@25 -- # echo -e '\x7f' 00:13:18.745 03:45:37 -- target/invalid.sh@25 -- # string+=$'\177' 00:13:18.745 03:45:37 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:18.745 03:45:37 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:18.745 03:45:37 -- target/invalid.sh@25 -- # printf %x 102 00:13:18.745 03:45:37 -- target/invalid.sh@25 -- # echo -e '\x66' 00:13:18.745 03:45:37 -- target/invalid.sh@25 -- # string+=f 00:13:18.745 03:45:37 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:18.745 03:45:37 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:18.745 03:45:37 -- target/invalid.sh@25 -- # printf %x 45 00:13:18.745 03:45:37 -- target/invalid.sh@25 -- # echo -e '\x2d' 00:13:18.745 03:45:37 -- target/invalid.sh@25 -- # string+=- 00:13:18.745 03:45:37 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:18.745 03:45:37 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:18.745 03:45:37 -- target/invalid.sh@25 -- # printf %x 112 00:13:18.745 03:45:37 -- target/invalid.sh@25 -- # echo -e '\x70' 00:13:18.745 03:45:37 -- target/invalid.sh@25 -- # string+=p 00:13:18.745 03:45:37 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:18.745 03:45:37 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:18.745 03:45:37 -- target/invalid.sh@25 -- # printf %x 117 00:13:18.745 03:45:37 -- target/invalid.sh@25 -- # echo -e '\x75' 00:13:18.745 03:45:37 -- target/invalid.sh@25 -- # string+=u 00:13:18.745 03:45:37 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:18.745 03:45:37 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:18.745 03:45:37 -- target/invalid.sh@25 -- # printf %x 61 00:13:18.745 03:45:37 -- target/invalid.sh@25 -- # echo -e '\x3d' 00:13:18.745 03:45:37 -- target/invalid.sh@25 -- # string+== 00:13:18.745 03:45:37 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:18.745 03:45:37 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:18.745 03:45:37 -- target/invalid.sh@25 -- # printf %x 55 00:13:18.745 03:45:37 -- target/invalid.sh@25 -- # echo -e '\x37' 00:13:18.745 03:45:37 -- target/invalid.sh@25 -- # string+=7 00:13:18.745 03:45:37 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:18.745 03:45:37 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:18.745 03:45:37 -- target/invalid.sh@25 -- # printf %x 45 00:13:18.745 03:45:37 -- target/invalid.sh@25 -- # echo -e '\x2d' 00:13:18.745 03:45:37 -- target/invalid.sh@25 -- # string+=- 00:13:18.745 03:45:37 -- target/invalid.sh@24 
-- # (( ll++ )) 00:13:18.745 03:45:37 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:18.745 03:45:37 -- target/invalid.sh@25 -- # printf %x 81 00:13:18.745 03:45:37 -- target/invalid.sh@25 -- # echo -e '\x51' 00:13:18.745 03:45:37 -- target/invalid.sh@25 -- # string+=Q 00:13:18.745 03:45:37 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:18.745 03:45:37 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:18.745 03:45:37 -- target/invalid.sh@25 -- # printf %x 98 00:13:18.745 03:45:37 -- target/invalid.sh@25 -- # echo -e '\x62' 00:13:18.745 03:45:37 -- target/invalid.sh@25 -- # string+=b 00:13:18.745 03:45:37 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:18.745 03:45:37 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:18.745 03:45:37 -- target/invalid.sh@25 -- # printf %x 44 00:13:18.745 03:45:37 -- target/invalid.sh@25 -- # echo -e '\x2c' 00:13:18.745 03:45:37 -- target/invalid.sh@25 -- # string+=, 00:13:18.745 03:45:37 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:18.745 03:45:37 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:18.745 03:45:37 -- target/invalid.sh@25 -- # printf %x 96 00:13:18.745 03:45:37 -- target/invalid.sh@25 -- # echo -e '\x60' 00:13:18.745 03:45:37 -- target/invalid.sh@25 -- # string+='`' 00:13:18.745 03:45:37 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:18.745 03:45:37 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:18.745 03:45:37 -- target/invalid.sh@25 -- # printf %x 66 00:13:18.745 03:45:37 -- target/invalid.sh@25 -- # echo -e '\x42' 00:13:18.745 03:45:37 -- target/invalid.sh@25 -- # string+=B 00:13:18.745 03:45:37 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:18.745 03:45:37 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:18.745 03:45:37 -- target/invalid.sh@25 -- # printf %x 41 00:13:18.745 03:45:37 -- target/invalid.sh@25 -- # echo -e '\x29' 00:13:18.745 03:45:37 -- target/invalid.sh@25 -- # string+=')' 00:13:18.745 03:45:37 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:18.745 03:45:37 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:18.745 03:45:37 -- target/invalid.sh@25 -- # printf %x 116 00:13:18.745 03:45:37 -- target/invalid.sh@25 -- # echo -e '\x74' 00:13:18.745 03:45:37 -- target/invalid.sh@25 -- # string+=t 00:13:18.745 03:45:37 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:18.745 03:45:37 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:18.745 03:45:37 -- target/invalid.sh@25 -- # printf %x 106 00:13:18.745 03:45:37 -- target/invalid.sh@25 -- # echo -e '\x6a' 00:13:18.745 03:45:37 -- target/invalid.sh@25 -- # string+=j 00:13:18.745 03:45:37 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:18.745 03:45:37 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:18.745 03:45:37 -- target/invalid.sh@25 -- # printf %x 104 00:13:18.745 03:45:37 -- target/invalid.sh@25 -- # echo -e '\x68' 00:13:18.745 03:45:37 -- target/invalid.sh@25 -- # string+=h 00:13:18.745 03:45:37 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:18.745 03:45:37 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:18.745 03:45:37 -- target/invalid.sh@25 -- # printf %x 105 00:13:18.745 03:45:37 -- target/invalid.sh@25 -- # echo -e '\x69' 00:13:18.745 03:45:37 -- target/invalid.sh@25 -- # string+=i 00:13:18.745 03:45:37 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:18.745 03:45:37 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:18.745 03:45:37 -- target/invalid.sh@25 -- # printf %x 106 00:13:18.745 03:45:37 -- target/invalid.sh@25 -- # echo -e '\x6a' 00:13:18.745 03:45:37 -- target/invalid.sh@25 -- # string+=j 00:13:18.745 03:45:37 -- target/invalid.sh@24 -- 
# (( ll++ )) 00:13:18.745 03:45:37 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:18.745 03:45:37 -- target/invalid.sh@25 -- # printf %x 65 00:13:18.745 03:45:37 -- target/invalid.sh@25 -- # echo -e '\x41' 00:13:18.745 03:45:37 -- target/invalid.sh@25 -- # string+=A 00:13:18.745 03:45:37 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:18.745 03:45:37 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:18.745 03:45:37 -- target/invalid.sh@25 -- # printf %x 48 00:13:18.745 03:45:37 -- target/invalid.sh@25 -- # echo -e '\x30' 00:13:18.745 03:45:37 -- target/invalid.sh@25 -- # string+=0 00:13:18.745 03:45:37 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:18.745 03:45:37 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:18.745 03:45:37 -- target/invalid.sh@25 -- # printf %x 95 00:13:18.745 03:45:37 -- target/invalid.sh@25 -- # echo -e '\x5f' 00:13:18.745 03:45:37 -- target/invalid.sh@25 -- # string+=_ 00:13:18.745 03:45:37 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:18.745 03:45:37 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:18.745 03:45:37 -- target/invalid.sh@25 -- # printf %x 114 00:13:18.745 03:45:37 -- target/invalid.sh@25 -- # echo -e '\x72' 00:13:18.745 03:45:37 -- target/invalid.sh@25 -- # string+=r 00:13:18.745 03:45:37 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:18.745 03:45:37 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:18.745 03:45:37 -- target/invalid.sh@25 -- # printf %x 78 00:13:18.745 03:45:37 -- target/invalid.sh@25 -- # echo -e '\x4e' 00:13:18.745 03:45:37 -- target/invalid.sh@25 -- # string+=N 00:13:18.745 03:45:37 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:18.745 03:45:37 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:18.745 03:45:37 -- target/invalid.sh@25 -- # printf %x 57 00:13:18.745 03:45:37 -- target/invalid.sh@25 -- # echo -e '\x39' 00:13:18.745 03:45:37 -- target/invalid.sh@25 -- # string+=9 00:13:18.745 03:45:37 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:18.745 03:45:37 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:18.745 03:45:37 -- target/invalid.sh@25 -- # printf %x 47 00:13:18.745 03:45:37 -- target/invalid.sh@25 -- # echo -e '\x2f' 00:13:18.745 03:45:37 -- target/invalid.sh@25 -- # string+=/ 00:13:18.745 03:45:37 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:18.745 03:45:37 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:18.745 03:45:37 -- target/invalid.sh@25 -- # printf %x 34 00:13:18.745 03:45:37 -- target/invalid.sh@25 -- # echo -e '\x22' 00:13:18.745 03:45:37 -- target/invalid.sh@25 -- # string+='"' 00:13:18.745 03:45:37 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:18.745 03:45:37 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:18.745 03:45:37 -- target/invalid.sh@25 -- # printf %x 41 00:13:18.745 03:45:37 -- target/invalid.sh@25 -- # echo -e '\x29' 00:13:18.745 03:45:37 -- target/invalid.sh@25 -- # string+=')' 00:13:18.745 03:45:37 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:18.745 03:45:37 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:18.745 03:45:37 -- target/invalid.sh@25 -- # printf %x 100 00:13:18.745 03:45:37 -- target/invalid.sh@25 -- # echo -e '\x64' 00:13:18.745 03:45:37 -- target/invalid.sh@25 -- # string+=d 00:13:18.745 03:45:37 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:18.745 03:45:37 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:18.745 03:45:37 -- target/invalid.sh@25 -- # printf %x 76 00:13:18.745 03:45:37 -- target/invalid.sh@25 -- # echo -e '\x4c' 00:13:18.745 03:45:37 -- target/invalid.sh@25 -- # string+=L 00:13:18.745 03:45:37 -- target/invalid.sh@24 -- # (( 
ll++ )) 00:13:18.745 03:45:37 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:18.745 03:45:37 -- target/invalid.sh@25 -- # printf %x 67 00:13:18.745 03:45:37 -- target/invalid.sh@25 -- # echo -e '\x43' 00:13:18.745 03:45:37 -- target/invalid.sh@25 -- # string+=C 00:13:18.745 03:45:37 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:18.745 03:45:37 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:18.745 03:45:37 -- target/invalid.sh@25 -- # printf %x 63 00:13:18.745 03:45:37 -- target/invalid.sh@25 -- # echo -e '\x3f' 00:13:18.745 03:45:37 -- target/invalid.sh@25 -- # string+='?' 00:13:18.745 03:45:37 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:18.745 03:45:37 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:18.745 03:45:37 -- target/invalid.sh@25 -- # printf %x 38 00:13:18.745 03:45:37 -- target/invalid.sh@25 -- # echo -e '\x26' 00:13:18.745 03:45:37 -- target/invalid.sh@25 -- # string+='&' 00:13:18.745 03:45:37 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:18.745 03:45:37 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:18.745 03:45:37 -- target/invalid.sh@25 -- # printf %x 89 00:13:18.745 03:45:37 -- target/invalid.sh@25 -- # echo -e '\x59' 00:13:18.745 03:45:37 -- target/invalid.sh@25 -- # string+=Y 00:13:18.745 03:45:37 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:18.746 03:45:37 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:18.746 03:45:37 -- target/invalid.sh@25 -- # printf %x 123 00:13:18.746 03:45:37 -- target/invalid.sh@25 -- # echo -e '\x7b' 00:13:18.746 03:45:37 -- target/invalid.sh@25 -- # string+='{' 00:13:18.746 03:45:37 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:18.746 03:45:37 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:18.746 03:45:37 -- target/invalid.sh@25 -- # printf %x 71 00:13:18.746 03:45:37 -- target/invalid.sh@25 -- # echo -e '\x47' 00:13:18.746 03:45:37 -- target/invalid.sh@25 -- # string+=G 00:13:18.746 03:45:37 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:18.746 03:45:37 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:18.746 03:45:37 -- target/invalid.sh@25 -- # printf %x 120 00:13:18.746 03:45:37 -- target/invalid.sh@25 -- # echo -e '\x78' 00:13:18.746 03:45:37 -- target/invalid.sh@25 -- # string+=x 00:13:18.746 03:45:37 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:18.746 03:45:37 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:18.746 03:45:37 -- target/invalid.sh@25 -- # printf %x 114 00:13:18.746 03:45:37 -- target/invalid.sh@25 -- # echo -e '\x72' 00:13:18.746 03:45:37 -- target/invalid.sh@25 -- # string+=r 00:13:18.746 03:45:37 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:18.746 03:45:37 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:18.746 03:45:37 -- target/invalid.sh@25 -- # printf %x 55 00:13:18.746 03:45:37 -- target/invalid.sh@25 -- # echo -e '\x37' 00:13:18.746 03:45:37 -- target/invalid.sh@25 -- # string+=7 00:13:18.746 03:45:37 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:18.746 03:45:37 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:18.746 03:45:37 -- target/invalid.sh@25 -- # printf %x 78 00:13:18.746 03:45:37 -- target/invalid.sh@25 -- # echo -e '\x4e' 00:13:18.746 03:45:37 -- target/invalid.sh@25 -- # string+=N 00:13:18.746 03:45:37 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:18.746 03:45:37 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:18.746 03:45:37 -- target/invalid.sh@25 -- # printf %x 65 00:13:18.746 03:45:37 -- target/invalid.sh@25 -- # echo -e '\x41' 00:13:18.746 03:45:37 -- target/invalid.sh@25 -- # string+=A 00:13:18.746 03:45:37 -- target/invalid.sh@24 -- # (( 
ll++ )) 00:13:18.746 03:45:37 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:18.746 03:45:37 -- target/invalid.sh@28 -- # [[  == \- ]] 00:13:18.746 03:45:37 -- target/invalid.sh@31 -- # echo 'f-pu=7-Qb,`B)tjhijA0_rN9/")dLC?&Y{Gxr7NA' 00:13:18.746 03:45:37 -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d 'f-pu=7-Qb,`B)tjhijA0_rN9/")dLC?&Y{Gxr7NA' nqn.2016-06.io.spdk:cnode27896 00:13:19.004 [2024-07-14 03:45:37.865046] nvmf_rpc.c: 427:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode27896: invalid model number 'f-pu=7-Qb,`B)tjhijA0_rN9/")dLC?&Y{Gxr7NA' 00:13:19.004 03:45:37 -- target/invalid.sh@58 -- # out='request: 00:13:19.004 { 00:13:19.004 "nqn": "nqn.2016-06.io.spdk:cnode27896", 00:13:19.004 "model_number": "\u007ff-pu=7-Qb,`B)tjhijA0_rN9/\")dLC?&Y{Gxr7NA", 00:13:19.004 "method": "nvmf_create_subsystem", 00:13:19.004 "req_id": 1 00:13:19.004 } 00:13:19.004 Got JSON-RPC error response 00:13:19.004 response: 00:13:19.004 { 00:13:19.004 "code": -32602, 00:13:19.004 "message": "Invalid MN \u007ff-pu=7-Qb,`B)tjhijA0_rN9/\")dLC?&Y{Gxr7NA" 00:13:19.004 }' 00:13:19.004 03:45:37 -- target/invalid.sh@59 -- # [[ request: 00:13:19.004 { 00:13:19.004 "nqn": "nqn.2016-06.io.spdk:cnode27896", 00:13:19.004 "model_number": "\u007ff-pu=7-Qb,`B)tjhijA0_rN9/\")dLC?&Y{Gxr7NA", 00:13:19.004 "method": "nvmf_create_subsystem", 00:13:19.004 "req_id": 1 00:13:19.004 } 00:13:19.004 Got JSON-RPC error response 00:13:19.004 response: 00:13:19.004 { 00:13:19.004 "code": -32602, 00:13:19.004 "message": "Invalid MN \u007ff-pu=7-Qb,`B)tjhijA0_rN9/\")dLC?&Y{Gxr7NA" 00:13:19.004 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:13:19.004 03:45:37 -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:13:19.262 [2024-07-14 03:45:38.097895] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:19.262 03:45:38 -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:13:19.520 03:45:38 -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:13:19.520 03:45:38 -- target/invalid.sh@67 -- # echo '' 00:13:19.520 03:45:38 -- target/invalid.sh@67 -- # head -n 1 00:13:19.520 03:45:38 -- target/invalid.sh@67 -- # IP= 00:13:19.520 03:45:38 -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:13:19.779 [2024-07-14 03:45:38.599553] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:13:19.779 03:45:38 -- target/invalid.sh@69 -- # out='request: 00:13:19.779 { 00:13:19.779 "nqn": "nqn.2016-06.io.spdk:cnode", 00:13:19.779 "listen_address": { 00:13:19.779 "trtype": "tcp", 00:13:19.779 "traddr": "", 00:13:19.779 "trsvcid": "4421" 00:13:19.779 }, 00:13:19.779 "method": "nvmf_subsystem_remove_listener", 00:13:19.779 "req_id": 1 00:13:19.779 } 00:13:19.779 Got JSON-RPC error response 00:13:19.779 response: 00:13:19.779 { 00:13:19.779 "code": -32602, 00:13:19.779 "message": "Invalid parameters" 00:13:19.779 }' 00:13:19.779 03:45:38 -- target/invalid.sh@70 -- # [[ request: 00:13:19.779 { 00:13:19.779 "nqn": "nqn.2016-06.io.spdk:cnode", 00:13:19.779 "listen_address": { 00:13:19.779 "trtype": "tcp", 00:13:19.779 "traddr": "", 00:13:19.779 "trsvcid": "4421" 00:13:19.779 }, 00:13:19.779 "method": 
"nvmf_subsystem_remove_listener", 00:13:19.779 "req_id": 1 00:13:19.779 } 00:13:19.779 Got JSON-RPC error response 00:13:19.779 response: 00:13:19.779 { 00:13:19.779 "code": -32602, 00:13:19.779 "message": "Invalid parameters" 00:13:19.779 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:13:19.779 03:45:38 -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode24351 -i 0 00:13:20.037 [2024-07-14 03:45:38.836382] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode24351: invalid cntlid range [0-65519] 00:13:20.037 03:45:38 -- target/invalid.sh@73 -- # out='request: 00:13:20.037 { 00:13:20.037 "nqn": "nqn.2016-06.io.spdk:cnode24351", 00:13:20.037 "min_cntlid": 0, 00:13:20.037 "method": "nvmf_create_subsystem", 00:13:20.037 "req_id": 1 00:13:20.037 } 00:13:20.037 Got JSON-RPC error response 00:13:20.037 response: 00:13:20.037 { 00:13:20.037 "code": -32602, 00:13:20.037 "message": "Invalid cntlid range [0-65519]" 00:13:20.037 }' 00:13:20.037 03:45:38 -- target/invalid.sh@74 -- # [[ request: 00:13:20.037 { 00:13:20.037 "nqn": "nqn.2016-06.io.spdk:cnode24351", 00:13:20.037 "min_cntlid": 0, 00:13:20.037 "method": "nvmf_create_subsystem", 00:13:20.037 "req_id": 1 00:13:20.037 } 00:13:20.037 Got JSON-RPC error response 00:13:20.037 response: 00:13:20.037 { 00:13:20.037 "code": -32602, 00:13:20.037 "message": "Invalid cntlid range [0-65519]" 00:13:20.037 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:20.037 03:45:38 -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode31460 -i 65520 00:13:20.295 [2024-07-14 03:45:39.065104] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode31460: invalid cntlid range [65520-65519] 00:13:20.295 03:45:39 -- target/invalid.sh@75 -- # out='request: 00:13:20.295 { 00:13:20.295 "nqn": "nqn.2016-06.io.spdk:cnode31460", 00:13:20.295 "min_cntlid": 65520, 00:13:20.295 "method": "nvmf_create_subsystem", 00:13:20.295 "req_id": 1 00:13:20.295 } 00:13:20.295 Got JSON-RPC error response 00:13:20.295 response: 00:13:20.295 { 00:13:20.295 "code": -32602, 00:13:20.295 "message": "Invalid cntlid range [65520-65519]" 00:13:20.295 }' 00:13:20.295 03:45:39 -- target/invalid.sh@76 -- # [[ request: 00:13:20.295 { 00:13:20.295 "nqn": "nqn.2016-06.io.spdk:cnode31460", 00:13:20.295 "min_cntlid": 65520, 00:13:20.295 "method": "nvmf_create_subsystem", 00:13:20.295 "req_id": 1 00:13:20.295 } 00:13:20.295 Got JSON-RPC error response 00:13:20.295 response: 00:13:20.295 { 00:13:20.295 "code": -32602, 00:13:20.295 "message": "Invalid cntlid range [65520-65519]" 00:13:20.295 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:20.295 03:45:39 -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode27773 -I 0 00:13:20.554 [2024-07-14 03:45:39.297886] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode27773: invalid cntlid range [1-0] 00:13:20.554 03:45:39 -- target/invalid.sh@77 -- # out='request: 00:13:20.554 { 00:13:20.554 "nqn": "nqn.2016-06.io.spdk:cnode27773", 00:13:20.554 "max_cntlid": 0, 00:13:20.554 "method": "nvmf_create_subsystem", 00:13:20.554 "req_id": 1 00:13:20.554 } 00:13:20.554 Got JSON-RPC error response 00:13:20.554 response: 00:13:20.554 { 00:13:20.554 "code": -32602, 00:13:20.554 "message": 
"Invalid cntlid range [1-0]" 00:13:20.554 }' 00:13:20.554 03:45:39 -- target/invalid.sh@78 -- # [[ request: 00:13:20.554 { 00:13:20.554 "nqn": "nqn.2016-06.io.spdk:cnode27773", 00:13:20.554 "max_cntlid": 0, 00:13:20.554 "method": "nvmf_create_subsystem", 00:13:20.554 "req_id": 1 00:13:20.554 } 00:13:20.554 Got JSON-RPC error response 00:13:20.554 response: 00:13:20.554 { 00:13:20.554 "code": -32602, 00:13:20.554 "message": "Invalid cntlid range [1-0]" 00:13:20.554 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:20.554 03:45:39 -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5630 -I 65520 00:13:20.812 [2024-07-14 03:45:39.554726] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode5630: invalid cntlid range [1-65520] 00:13:20.812 03:45:39 -- target/invalid.sh@79 -- # out='request: 00:13:20.812 { 00:13:20.812 "nqn": "nqn.2016-06.io.spdk:cnode5630", 00:13:20.812 "max_cntlid": 65520, 00:13:20.812 "method": "nvmf_create_subsystem", 00:13:20.812 "req_id": 1 00:13:20.812 } 00:13:20.812 Got JSON-RPC error response 00:13:20.812 response: 00:13:20.812 { 00:13:20.812 "code": -32602, 00:13:20.812 "message": "Invalid cntlid range [1-65520]" 00:13:20.812 }' 00:13:20.812 03:45:39 -- target/invalid.sh@80 -- # [[ request: 00:13:20.812 { 00:13:20.812 "nqn": "nqn.2016-06.io.spdk:cnode5630", 00:13:20.812 "max_cntlid": 65520, 00:13:20.812 "method": "nvmf_create_subsystem", 00:13:20.812 "req_id": 1 00:13:20.812 } 00:13:20.812 Got JSON-RPC error response 00:13:20.812 response: 00:13:20.812 { 00:13:20.812 "code": -32602, 00:13:20.812 "message": "Invalid cntlid range [1-65520]" 00:13:20.812 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:20.812 03:45:39 -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1572 -i 6 -I 5 00:13:21.071 [2024-07-14 03:45:39.799556] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1572: invalid cntlid range [6-5] 00:13:21.071 03:45:39 -- target/invalid.sh@83 -- # out='request: 00:13:21.071 { 00:13:21.071 "nqn": "nqn.2016-06.io.spdk:cnode1572", 00:13:21.071 "min_cntlid": 6, 00:13:21.071 "max_cntlid": 5, 00:13:21.071 "method": "nvmf_create_subsystem", 00:13:21.071 "req_id": 1 00:13:21.071 } 00:13:21.071 Got JSON-RPC error response 00:13:21.071 response: 00:13:21.071 { 00:13:21.071 "code": -32602, 00:13:21.071 "message": "Invalid cntlid range [6-5]" 00:13:21.071 }' 00:13:21.071 03:45:39 -- target/invalid.sh@84 -- # [[ request: 00:13:21.071 { 00:13:21.071 "nqn": "nqn.2016-06.io.spdk:cnode1572", 00:13:21.071 "min_cntlid": 6, 00:13:21.071 "max_cntlid": 5, 00:13:21.071 "method": "nvmf_create_subsystem", 00:13:21.071 "req_id": 1 00:13:21.071 } 00:13:21.071 Got JSON-RPC error response 00:13:21.071 response: 00:13:21.071 { 00:13:21.071 "code": -32602, 00:13:21.071 "message": "Invalid cntlid range [6-5]" 00:13:21.071 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:21.071 03:45:39 -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:13:21.071 03:45:39 -- target/invalid.sh@87 -- # out='request: 00:13:21.071 { 00:13:21.071 "name": "foobar", 00:13:21.071 "method": "nvmf_delete_target", 00:13:21.071 "req_id": 1 00:13:21.071 } 00:13:21.071 Got JSON-RPC error response 00:13:21.071 response: 00:13:21.071 { 00:13:21.071 "code": 
-32602, 00:13:21.071 "message": "The specified target doesn'\''t exist, cannot delete it." 00:13:21.071 }' 00:13:21.071 03:45:39 -- target/invalid.sh@88 -- # [[ request: 00:13:21.071 { 00:13:21.071 "name": "foobar", 00:13:21.071 "method": "nvmf_delete_target", 00:13:21.071 "req_id": 1 00:13:21.071 } 00:13:21.071 Got JSON-RPC error response 00:13:21.071 response: 00:13:21.071 { 00:13:21.071 "code": -32602, 00:13:21.071 "message": "The specified target doesn't exist, cannot delete it." 00:13:21.071 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:13:21.071 03:45:39 -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:13:21.071 03:45:39 -- target/invalid.sh@91 -- # nvmftestfini 00:13:21.071 03:45:39 -- nvmf/common.sh@476 -- # nvmfcleanup 00:13:21.071 03:45:39 -- nvmf/common.sh@116 -- # sync 00:13:21.071 03:45:39 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:13:21.071 03:45:39 -- nvmf/common.sh@119 -- # set +e 00:13:21.071 03:45:39 -- nvmf/common.sh@120 -- # for i in {1..20} 00:13:21.071 03:45:39 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:13:21.071 rmmod nvme_tcp 00:13:21.071 rmmod nvme_fabrics 00:13:21.071 rmmod nvme_keyring 00:13:21.071 03:45:39 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:13:21.071 03:45:39 -- nvmf/common.sh@123 -- # set -e 00:13:21.071 03:45:39 -- nvmf/common.sh@124 -- # return 0 00:13:21.071 03:45:39 -- nvmf/common.sh@477 -- # '[' -n 2331074 ']' 00:13:21.071 03:45:39 -- nvmf/common.sh@478 -- # killprocess 2331074 00:13:21.071 03:45:39 -- common/autotest_common.sh@926 -- # '[' -z 2331074 ']' 00:13:21.071 03:45:39 -- common/autotest_common.sh@930 -- # kill -0 2331074 00:13:21.071 03:45:39 -- common/autotest_common.sh@931 -- # uname 00:13:21.071 03:45:39 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:21.071 03:45:39 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2331074 00:13:21.329 03:45:40 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:13:21.329 03:45:40 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:13:21.329 03:45:40 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2331074' 00:13:21.329 killing process with pid 2331074 00:13:21.329 03:45:40 -- common/autotest_common.sh@945 -- # kill 2331074 00:13:21.329 03:45:40 -- common/autotest_common.sh@950 -- # wait 2331074 00:13:21.329 03:45:40 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:13:21.329 03:45:40 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:13:21.329 03:45:40 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:13:21.329 03:45:40 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:21.329 03:45:40 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:13:21.329 03:45:40 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:21.329 03:45:40 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:21.329 03:45:40 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:23.864 03:45:42 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:13:23.864 00:13:23.864 real 0m9.245s 00:13:23.864 user 0m22.141s 00:13:23.864 sys 0m2.564s 00:13:23.864 03:45:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:23.864 03:45:42 -- common/autotest_common.sh@10 -- # set +x 00:13:23.864 ************************************ 00:13:23.864 END TEST nvmf_invalid 00:13:23.864 ************************************ 00:13:23.864 03:45:42 -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:13:23.864 03:45:42 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:13:23.864 03:45:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:23.864 03:45:42 -- common/autotest_common.sh@10 -- # set +x 00:13:23.864 ************************************ 00:13:23.864 START TEST nvmf_abort 00:13:23.864 ************************************ 00:13:23.864 03:45:42 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:13:23.864 * Looking for test storage... 00:13:23.864 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:23.864 03:45:42 -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:23.864 03:45:42 -- nvmf/common.sh@7 -- # uname -s 00:13:23.864 03:45:42 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:23.864 03:45:42 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:23.864 03:45:42 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:23.864 03:45:42 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:23.864 03:45:42 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:23.864 03:45:42 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:23.864 03:45:42 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:23.864 03:45:42 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:23.864 03:45:42 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:23.864 03:45:42 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:23.864 03:45:42 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:23.864 03:45:42 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:23.864 03:45:42 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:23.864 03:45:42 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:23.864 03:45:42 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:23.864 03:45:42 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:23.864 03:45:42 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:23.864 03:45:42 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:23.864 03:45:42 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:23.864 03:45:42 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:23.864 03:45:42 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:23.864 03:45:42 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:23.864 03:45:42 -- paths/export.sh@5 -- # export PATH 00:13:23.864 03:45:42 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:23.864 03:45:42 -- nvmf/common.sh@46 -- # : 0 00:13:23.864 03:45:42 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:23.864 03:45:42 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:23.864 03:45:42 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:23.864 03:45:42 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:23.864 03:45:42 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:23.864 03:45:42 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:13:23.864 03:45:42 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:23.864 03:45:42 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:23.864 03:45:42 -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:23.864 03:45:42 -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:13:23.864 03:45:42 -- target/abort.sh@14 -- # nvmftestinit 00:13:23.864 03:45:42 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:23.864 03:45:42 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:23.865 03:45:42 -- nvmf/common.sh@436 -- # prepare_net_devs 00:13:23.865 03:45:42 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:23.865 03:45:42 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:23.865 03:45:42 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:23.865 03:45:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:23.865 03:45:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:23.865 03:45:42 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:13:23.865 03:45:42 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:13:23.865 03:45:42 -- nvmf/common.sh@284 -- # xtrace_disable 00:13:23.865 03:45:42 -- common/autotest_common.sh@10 -- # set +x 00:13:25.770 03:45:44 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 
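For context, the NVME_CONNECT and NVME_HOST variables traced a little earlier in this prologue come from nvmf/common.sh and describe how a kernel nvme-cli initiator would attach to the target; this particular run drives I/O with SPDK example apps instead, so they go unused here. A hedged sketch of how they are meant to be combined (the subsystem NQN and address below are taken from later in this log, not from common.sh itself):

    # Illustrative only -- how NVME_CONNECT/NVME_HOST would be consumed by nvme-cli.
    NVME_HOSTNQN=$(nvme gen-hostnqn)                  # nqn.2014-08.org.nvmexpress:uuid:...
    NVME_HOSTID=${NVME_HOSTNQN##*:}                   # one way to derive the bare UUID
    nvme connect -t tcp -a 10.0.0.2 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode0 \
        --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"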
00:13:25.770 03:45:44 -- nvmf/common.sh@290 -- # pci_devs=() 00:13:25.770 03:45:44 -- nvmf/common.sh@290 -- # local -a pci_devs 00:13:25.770 03:45:44 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:13:25.770 03:45:44 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:13:25.770 03:45:44 -- nvmf/common.sh@292 -- # pci_drivers=() 00:13:25.770 03:45:44 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:13:25.770 03:45:44 -- nvmf/common.sh@294 -- # net_devs=() 00:13:25.770 03:45:44 -- nvmf/common.sh@294 -- # local -ga net_devs 00:13:25.770 03:45:44 -- nvmf/common.sh@295 -- # e810=() 00:13:25.770 03:45:44 -- nvmf/common.sh@295 -- # local -ga e810 00:13:25.770 03:45:44 -- nvmf/common.sh@296 -- # x722=() 00:13:25.770 03:45:44 -- nvmf/common.sh@296 -- # local -ga x722 00:13:25.770 03:45:44 -- nvmf/common.sh@297 -- # mlx=() 00:13:25.770 03:45:44 -- nvmf/common.sh@297 -- # local -ga mlx 00:13:25.770 03:45:44 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:25.770 03:45:44 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:25.770 03:45:44 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:25.770 03:45:44 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:25.770 03:45:44 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:25.770 03:45:44 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:25.770 03:45:44 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:25.770 03:45:44 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:25.770 03:45:44 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:25.770 03:45:44 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:25.770 03:45:44 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:25.770 03:45:44 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:13:25.770 03:45:44 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:13:25.770 03:45:44 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:13:25.770 03:45:44 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:13:25.770 03:45:44 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:13:25.770 03:45:44 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:13:25.770 03:45:44 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:25.770 03:45:44 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:25.770 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:25.770 03:45:44 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:25.771 03:45:44 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:25.771 03:45:44 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:25.771 03:45:44 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:25.771 03:45:44 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:13:25.771 03:45:44 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:25.771 03:45:44 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:25.771 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:25.771 03:45:44 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:25.771 03:45:44 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:25.771 03:45:44 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:25.771 03:45:44 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:25.771 03:45:44 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:13:25.771 03:45:44 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 
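The device discovery above works by matching PCI vendor:device IDs against known NIC families; 0x8086:0x159b is the Intel E810 ("ice") function reported for both ports here. A simplified stand-in for that lookup, not the harness's actual helper:

    # List E810 functions and the kernel net devices behind them (assumes pciutils).
    for pci in $(lspci -Dn -d 8086:159b | awk '{print $1}'); do
        echo "$pci -> $(ls /sys/bus/pci/devices/$pci/net/ 2>/dev/null)"
    done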
00:13:25.771 03:45:44 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:13:25.771 03:45:44 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:13:25.771 03:45:44 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:25.771 03:45:44 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:25.771 03:45:44 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:25.771 03:45:44 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:25.771 03:45:44 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:25.771 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:25.771 03:45:44 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:13:25.771 03:45:44 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:25.771 03:45:44 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:25.771 03:45:44 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:25.771 03:45:44 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:25.771 03:45:44 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:25.771 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:25.771 03:45:44 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:13:25.771 03:45:44 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:13:25.771 03:45:44 -- nvmf/common.sh@402 -- # is_hw=yes 00:13:25.771 03:45:44 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:13:25.771 03:45:44 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:13:25.771 03:45:44 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:13:25.771 03:45:44 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:25.771 03:45:44 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:25.771 03:45:44 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:25.771 03:45:44 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:13:25.771 03:45:44 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:25.771 03:45:44 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:25.771 03:45:44 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:13:25.771 03:45:44 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:25.771 03:45:44 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:25.771 03:45:44 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:13:25.771 03:45:44 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:13:25.771 03:45:44 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:13:25.771 03:45:44 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:25.771 03:45:44 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:25.771 03:45:44 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:25.771 03:45:44 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:13:25.771 03:45:44 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:25.771 03:45:44 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:25.771 03:45:44 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:25.771 03:45:44 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:13:25.771 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:25.771 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.185 ms 00:13:25.771 00:13:25.771 --- 10.0.0.2 ping statistics --- 00:13:25.771 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:25.771 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms 00:13:25.771 03:45:44 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:25.771 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:25.771 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.115 ms 00:13:25.771 00:13:25.771 --- 10.0.0.1 ping statistics --- 00:13:25.771 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:25.771 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:13:25.771 03:45:44 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:25.771 03:45:44 -- nvmf/common.sh@410 -- # return 0 00:13:25.771 03:45:44 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:25.771 03:45:44 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:25.771 03:45:44 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:25.771 03:45:44 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:25.771 03:45:44 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:25.771 03:45:44 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:25.771 03:45:44 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:25.771 03:45:44 -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:13:25.771 03:45:44 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:25.771 03:45:44 -- common/autotest_common.sh@712 -- # xtrace_disable 00:13:25.771 03:45:44 -- common/autotest_common.sh@10 -- # set +x 00:13:25.771 03:45:44 -- nvmf/common.sh@469 -- # nvmfpid=2333742 00:13:25.771 03:45:44 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:25.771 03:45:44 -- nvmf/common.sh@470 -- # waitforlisten 2333742 00:13:25.771 03:45:44 -- common/autotest_common.sh@819 -- # '[' -z 2333742 ']' 00:13:25.771 03:45:44 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:25.771 03:45:44 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:25.771 03:45:44 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:25.771 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:25.771 03:45:44 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:25.771 03:45:44 -- common/autotest_common.sh@10 -- # set +x 00:13:25.771 [2024-07-14 03:45:44.495234] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:13:25.771 [2024-07-14 03:45:44.495316] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:25.771 EAL: No free 2048 kB hugepages reported on node 1 00:13:25.771 [2024-07-14 03:45:44.564855] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:25.771 [2024-07-14 03:45:44.657390] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:25.771 [2024-07-14 03:45:44.657557] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:25.771 [2024-07-14 03:45:44.657577] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:13:25.771 [2024-07-14 03:45:44.657592] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:25.771 [2024-07-14 03:45:44.657685] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:25.771 [2024-07-14 03:45:44.657881] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:25.771 [2024-07-14 03:45:44.657887] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:26.710 03:45:45 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:26.710 03:45:45 -- common/autotest_common.sh@852 -- # return 0 00:13:26.710 03:45:45 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:26.710 03:45:45 -- common/autotest_common.sh@718 -- # xtrace_disable 00:13:26.710 03:45:45 -- common/autotest_common.sh@10 -- # set +x 00:13:26.710 03:45:45 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:26.710 03:45:45 -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:13:26.710 03:45:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:26.710 03:45:45 -- common/autotest_common.sh@10 -- # set +x 00:13:26.710 [2024-07-14 03:45:45.446615] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:26.710 03:45:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:26.710 03:45:45 -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:13:26.710 03:45:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:26.710 03:45:45 -- common/autotest_common.sh@10 -- # set +x 00:13:26.710 Malloc0 00:13:26.710 03:45:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:26.710 03:45:45 -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:13:26.710 03:45:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:26.710 03:45:45 -- common/autotest_common.sh@10 -- # set +x 00:13:26.710 Delay0 00:13:26.710 03:45:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:26.710 03:45:45 -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:13:26.710 03:45:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:26.710 03:45:45 -- common/autotest_common.sh@10 -- # set +x 00:13:26.710 03:45:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:26.710 03:45:45 -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:13:26.710 03:45:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:26.710 03:45:45 -- common/autotest_common.sh@10 -- # set +x 00:13:26.710 03:45:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:26.710 03:45:45 -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:13:26.710 03:45:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:26.710 03:45:45 -- common/autotest_common.sh@10 -- # set +x 00:13:26.710 [2024-07-14 03:45:45.511510] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:26.710 03:45:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:26.710 03:45:45 -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:26.710 03:45:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:26.710 03:45:45 -- common/autotest_common.sh@10 -- # set +x 00:13:26.710 03:45:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 
]] 00:13:26.710 03:45:45 -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:13:26.710 EAL: No free 2048 kB hugepages reported on node 1 00:13:26.970 [2024-07-14 03:45:45.659036] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:13:28.873 Initializing NVMe Controllers 00:13:28.873 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:13:28.873 controller IO queue size 128 less than required 00:13:28.873 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:13:28.873 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:13:28.873 Initialization complete. Launching workers. 00:13:28.873 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 32287 00:13:28.873 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 32348, failed to submit 62 00:13:28.873 success 32287, unsuccess 61, failed 0 00:13:28.873 03:45:47 -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:13:28.873 03:45:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:28.873 03:45:47 -- common/autotest_common.sh@10 -- # set +x 00:13:28.873 03:45:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:28.873 03:45:47 -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:13:28.873 03:45:47 -- target/abort.sh@38 -- # nvmftestfini 00:13:28.873 03:45:47 -- nvmf/common.sh@476 -- # nvmfcleanup 00:13:28.873 03:45:47 -- nvmf/common.sh@116 -- # sync 00:13:28.873 03:45:47 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:13:28.873 03:45:47 -- nvmf/common.sh@119 -- # set +e 00:13:28.873 03:45:47 -- nvmf/common.sh@120 -- # for i in {1..20} 00:13:28.873 03:45:47 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:13:28.873 rmmod nvme_tcp 00:13:29.133 rmmod nvme_fabrics 00:13:29.133 rmmod nvme_keyring 00:13:29.133 03:45:47 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:13:29.133 03:45:47 -- nvmf/common.sh@123 -- # set -e 00:13:29.133 03:45:47 -- nvmf/common.sh@124 -- # return 0 00:13:29.133 03:45:47 -- nvmf/common.sh@477 -- # '[' -n 2333742 ']' 00:13:29.133 03:45:47 -- nvmf/common.sh@478 -- # killprocess 2333742 00:13:29.133 03:45:47 -- common/autotest_common.sh@926 -- # '[' -z 2333742 ']' 00:13:29.133 03:45:47 -- common/autotest_common.sh@930 -- # kill -0 2333742 00:13:29.133 03:45:47 -- common/autotest_common.sh@931 -- # uname 00:13:29.133 03:45:47 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:29.133 03:45:47 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2333742 00:13:29.133 03:45:47 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:13:29.133 03:45:47 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:13:29.133 03:45:47 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2333742' 00:13:29.133 killing process with pid 2333742 00:13:29.133 03:45:47 -- common/autotest_common.sh@945 -- # kill 2333742 00:13:29.133 03:45:47 -- common/autotest_common.sh@950 -- # wait 2333742 00:13:29.402 03:45:48 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:13:29.402 03:45:48 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:13:29.402 03:45:48 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:13:29.402 03:45:48 -- 
nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:29.402 03:45:48 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:13:29.402 03:45:48 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:29.402 03:45:48 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:29.402 03:45:48 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:31.343 03:45:50 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:13:31.343 00:13:31.343 real 0m7.844s 00:13:31.343 user 0m12.716s 00:13:31.343 sys 0m2.549s 00:13:31.343 03:45:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:31.343 03:45:50 -- common/autotest_common.sh@10 -- # set +x 00:13:31.343 ************************************ 00:13:31.343 END TEST nvmf_abort 00:13:31.343 ************************************ 00:13:31.343 03:45:50 -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:13:31.343 03:45:50 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:13:31.343 03:45:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:31.343 03:45:50 -- common/autotest_common.sh@10 -- # set +x 00:13:31.343 ************************************ 00:13:31.343 START TEST nvmf_ns_hotplug_stress 00:13:31.343 ************************************ 00:13:31.343 03:45:50 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:13:31.343 * Looking for test storage... 00:13:31.343 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:31.343 03:45:50 -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:31.343 03:45:50 -- nvmf/common.sh@7 -- # uname -s 00:13:31.343 03:45:50 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:31.343 03:45:50 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:31.343 03:45:50 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:31.343 03:45:50 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:31.343 03:45:50 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:31.343 03:45:50 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:31.343 03:45:50 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:31.343 03:45:50 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:31.343 03:45:50 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:31.343 03:45:50 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:31.343 03:45:50 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:31.343 03:45:50 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:31.343 03:45:50 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:31.343 03:45:50 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:31.343 03:45:50 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:31.343 03:45:50 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:31.343 03:45:50 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:31.343 03:45:50 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:31.343 03:45:50 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:31.343 03:45:50 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:31.343 03:45:50 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:31.343 03:45:50 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:31.343 03:45:50 -- paths/export.sh@5 -- # export PATH 00:13:31.343 03:45:50 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:31.343 03:45:50 -- nvmf/common.sh@46 -- # : 0 00:13:31.343 03:45:50 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:31.343 03:45:50 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:31.343 03:45:50 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:31.343 03:45:50 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:31.343 03:45:50 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:31.343 03:45:50 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:13:31.343 03:45:50 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:31.343 03:45:50 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:31.343 03:45:50 -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:31.343 03:45:50 -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:13:31.343 03:45:50 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:31.343 03:45:50 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:31.343 03:45:50 -- nvmf/common.sh@436 -- # prepare_net_devs 00:13:31.343 03:45:50 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:31.343 03:45:50 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:31.343 03:45:50 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:13:31.343 03:45:50 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:31.343 03:45:50 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:31.343 03:45:50 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:13:31.343 03:45:50 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:13:31.343 03:45:50 -- nvmf/common.sh@284 -- # xtrace_disable 00:13:31.343 03:45:50 -- common/autotest_common.sh@10 -- # set +x 00:13:33.874 03:45:52 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:13:33.874 03:45:52 -- nvmf/common.sh@290 -- # pci_devs=() 00:13:33.874 03:45:52 -- nvmf/common.sh@290 -- # local -a pci_devs 00:13:33.874 03:45:52 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:13:33.874 03:45:52 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:13:33.874 03:45:52 -- nvmf/common.sh@292 -- # pci_drivers=() 00:13:33.874 03:45:52 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:13:33.874 03:45:52 -- nvmf/common.sh@294 -- # net_devs=() 00:13:33.874 03:45:52 -- nvmf/common.sh@294 -- # local -ga net_devs 00:13:33.874 03:45:52 -- nvmf/common.sh@295 -- # e810=() 00:13:33.874 03:45:52 -- nvmf/common.sh@295 -- # local -ga e810 00:13:33.874 03:45:52 -- nvmf/common.sh@296 -- # x722=() 00:13:33.874 03:45:52 -- nvmf/common.sh@296 -- # local -ga x722 00:13:33.874 03:45:52 -- nvmf/common.sh@297 -- # mlx=() 00:13:33.874 03:45:52 -- nvmf/common.sh@297 -- # local -ga mlx 00:13:33.874 03:45:52 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:33.874 03:45:52 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:33.874 03:45:52 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:33.874 03:45:52 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:33.874 03:45:52 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:33.874 03:45:52 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:33.874 03:45:52 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:33.874 03:45:52 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:33.874 03:45:52 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:33.874 03:45:52 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:33.874 03:45:52 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:33.874 03:45:52 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:13:33.874 03:45:52 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:13:33.874 03:45:52 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:13:33.874 03:45:52 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:13:33.874 03:45:52 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:13:33.874 03:45:52 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:13:33.874 03:45:52 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:33.874 03:45:52 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:33.874 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:33.874 03:45:52 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:33.874 03:45:52 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:33.874 03:45:52 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:33.874 03:45:52 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:33.874 03:45:52 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:13:33.874 03:45:52 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:33.874 03:45:52 -- 
nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:33.874 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:33.874 03:45:52 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:33.874 03:45:52 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:33.874 03:45:52 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:33.874 03:45:52 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:33.874 03:45:52 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:13:33.874 03:45:52 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:13:33.874 03:45:52 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:13:33.874 03:45:52 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:13:33.874 03:45:52 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:33.874 03:45:52 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:33.874 03:45:52 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:33.874 03:45:52 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:33.874 03:45:52 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:33.874 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:33.874 03:45:52 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:13:33.874 03:45:52 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:33.874 03:45:52 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:33.874 03:45:52 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:33.874 03:45:52 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:33.874 03:45:52 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:33.874 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:33.874 03:45:52 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:13:33.874 03:45:52 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:13:33.874 03:45:52 -- nvmf/common.sh@402 -- # is_hw=yes 00:13:33.874 03:45:52 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:13:33.874 03:45:52 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:13:33.874 03:45:52 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:13:33.874 03:45:52 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:33.874 03:45:52 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:33.874 03:45:52 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:33.874 03:45:52 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:13:33.874 03:45:52 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:33.874 03:45:52 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:33.874 03:45:52 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:13:33.874 03:45:52 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:33.874 03:45:52 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:33.875 03:45:52 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:13:33.875 03:45:52 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:13:33.875 03:45:52 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:13:33.875 03:45:52 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:33.875 03:45:52 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:33.875 03:45:52 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:33.875 03:45:52 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:13:33.875 03:45:52 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 
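As with the abort test above, nvmf_tcp_init splits the two E810 ports into a minimal two-host topology: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace and becomes the target side (10.0.0.2), while cvl_0_1 stays in the root namespace as the initiator side (10.0.0.1). Condensed from the commands in this trace (the remaining steps continue just below):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator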
00:13:33.875 03:45:52 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:33.875 03:45:52 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:33.875 03:45:52 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:13:33.875 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:33.875 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.251 ms 00:13:33.875 00:13:33.875 --- 10.0.0.2 ping statistics --- 00:13:33.875 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:33.875 rtt min/avg/max/mdev = 0.251/0.251/0.251/0.000 ms 00:13:33.875 03:45:52 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:33.875 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:33.875 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.233 ms 00:13:33.875 00:13:33.875 --- 10.0.0.1 ping statistics --- 00:13:33.875 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:33.875 rtt min/avg/max/mdev = 0.233/0.233/0.233/0.000 ms 00:13:33.875 03:45:52 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:33.875 03:45:52 -- nvmf/common.sh@410 -- # return 0 00:13:33.875 03:45:52 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:33.875 03:45:52 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:33.875 03:45:52 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:33.875 03:45:52 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:33.875 03:45:52 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:33.875 03:45:52 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:33.875 03:45:52 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:33.875 03:45:52 -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:13:33.875 03:45:52 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:33.875 03:45:52 -- common/autotest_common.sh@712 -- # xtrace_disable 00:13:33.875 03:45:52 -- common/autotest_common.sh@10 -- # set +x 00:13:33.875 03:45:52 -- nvmf/common.sh@469 -- # nvmfpid=2336121 00:13:33.875 03:45:52 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:33.875 03:45:52 -- nvmf/common.sh@470 -- # waitforlisten 2336121 00:13:33.875 03:45:52 -- common/autotest_common.sh@819 -- # '[' -z 2336121 ']' 00:13:33.875 03:45:52 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:33.875 03:45:52 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:33.875 03:45:52 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:33.875 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:33.875 03:45:52 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:33.875 03:45:52 -- common/autotest_common.sh@10 -- # set +x 00:13:33.875 [2024-07-14 03:45:52.541284] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:13:33.875 [2024-07-14 03:45:52.541373] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:33.875 EAL: No free 2048 kB hugepages reported on node 1 00:13:33.875 [2024-07-14 03:45:52.622366] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:33.875 [2024-07-14 03:45:52.719195] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:33.875 [2024-07-14 03:45:52.719365] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:33.875 [2024-07-14 03:45:52.719384] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:33.875 [2024-07-14 03:45:52.719398] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:33.875 [2024-07-14 03:45:52.719456] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:33.875 [2024-07-14 03:45:52.719654] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:33.875 [2024-07-14 03:45:52.719658] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:34.809 03:45:53 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:34.809 03:45:53 -- common/autotest_common.sh@852 -- # return 0 00:13:34.809 03:45:53 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:34.809 03:45:53 -- common/autotest_common.sh@718 -- # xtrace_disable 00:13:34.809 03:45:53 -- common/autotest_common.sh@10 -- # set +x 00:13:34.809 03:45:53 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:34.809 03:45:53 -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:13:34.809 03:45:53 -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:35.067 [2024-07-14 03:45:53.756500] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:35.068 03:45:53 -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:35.326 03:45:54 -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:35.326 [2024-07-14 03:45:54.239192] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:35.326 03:45:54 -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:35.583 03:45:54 -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:13:35.840 Malloc0 00:13:36.098 03:45:54 -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:13:36.098 Delay0 00:13:36.098 03:45:55 -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:36.356 03:45:55 -- target/ns_hotplug_stress.sh@35 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:13:36.613 NULL1 00:13:36.613 03:45:55 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:13:36.871 03:45:55 -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=2336561 00:13:36.871 03:45:55 -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:13:36.871 03:45:55 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2336561 00:13:36.871 03:45:55 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:36.871 EAL: No free 2048 kB hugepages reported on node 1 00:13:38.245 Read completed with error (sct=0, sc=11) 00:13:38.245 03:45:56 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:38.245 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:38.245 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:38.245 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:38.245 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:38.245 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:38.245 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:38.245 03:45:57 -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:13:38.245 03:45:57 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:13:38.503 true 00:13:38.503 03:45:57 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2336561 00:13:38.503 03:45:57 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:39.436 03:45:58 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:39.694 03:45:58 -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:13:39.694 03:45:58 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:13:39.951 true 00:13:39.951 03:45:58 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2336561 00:13:39.951 03:45:58 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:40.208 03:45:58 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:40.465 03:45:59 -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:13:40.465 03:45:59 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:13:40.723 true 00:13:40.723 03:45:59 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2336561 00:13:40.723 03:45:59 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 
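The bdev stack that this stress test hot-plugs was assembled just above: a small malloc bdev wrapped by a delay bdev, plus a resizable null bdev. Restated from the trace (sizes in MiB, block size in bytes, delay parameters in microseconds, as I read these RPCs; the $rpc shorthand is mine):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc bdev_malloc_create 32 512 -b Malloc0            # 32 MiB backing bdev, 512 B blocks
    $rpc bdev_delay_create -b Malloc0 -d Delay0 \
         -r 1000000 -t 1000000 -w 1000000 -n 1000000     # ~1 s avg/p99 read and write latency
    $rpc bdev_null_create NULL1 1000 512                 # null bdev that the loop keeps resizing

The injected latency presumably keeps I/O in flight long enough that namespace add/remove genuinely races with outstanding requests.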
00:13:40.981 03:45:59 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:41.240 03:45:59 -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:13:41.240 03:45:59 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:13:41.240 true 00:13:41.240 03:46:00 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2336561 00:13:41.240 03:46:00 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:42.616 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:42.616 03:46:01 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:42.616 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:42.616 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:42.616 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:42.616 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:42.616 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:42.616 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:42.616 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:42.616 03:46:01 -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:13:42.616 03:46:01 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:13:42.874 true 00:13:42.874 03:46:01 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2336561 00:13:42.874 03:46:01 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:43.811 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:43.811 03:46:02 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:44.069 03:46:02 -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:13:44.069 03:46:02 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:13:44.329 true 00:13:44.330 03:46:03 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2336561 00:13:44.330 03:46:03 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:44.587 03:46:03 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:44.587 03:46:03 -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:13:44.587 03:46:03 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:13:44.845 true 00:13:44.845 03:46:03 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2336561 00:13:44.845 03:46:03 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:46.219 03:46:04 -- 
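By this point the per-iteration pattern is visible: while spdk_nvme_perf runs its 30-second randread workload against cnode1 (the -t 30 -q 128 -w randread command launched above), the script hot-adds Delay0 as a namespace, grows NULL1 by one unit, checks that the initiator is still alive, and hot-removes namespace 1 again. Roughly, with the iteration count and shorthand being illustrative rather than the script's exact text:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    null_size=1000                                       # PERF_PID is the spdk_nvme_perf PID from above
    for i in $(seq 1 20); do                             # iteration count is illustrative
        $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
        null_size=$((null_size + 1))
        $rpc bdev_null_resize NULL1 $null_size
        kill -0 "$PERF_PID"                              # fail the run if spdk_nvme_perf died mid-cycle
        $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    done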
target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:46.219 03:46:05 -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:13:46.219 03:46:05 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:13:46.476 true 00:13:46.476 03:46:05 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2336561 00:13:46.476 03:46:05 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:46.762 03:46:05 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:47.019 03:46:05 -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:13:47.019 03:46:05 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:13:47.277 true 00:13:47.277 03:46:06 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2336561 00:13:47.277 03:46:06 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:47.535 03:46:06 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:47.793 03:46:06 -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:13:47.793 03:46:06 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:13:48.052 true 00:13:48.052 03:46:06 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2336561 00:13:48.052 03:46:06 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:48.988 03:46:07 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:49.246 03:46:08 -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:13:49.246 03:46:08 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:13:49.504 true 00:13:49.504 03:46:08 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2336561 00:13:49.504 03:46:08 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:49.762 03:46:08 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:50.020 03:46:08 -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:13:50.020 03:46:08 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:13:50.278 true 00:13:50.278 03:46:09 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2336561 00:13:50.278 03:46:09 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:50.536 03:46:09 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:50.794 03:46:09 -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:13:50.795 03:46:09 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:13:51.053 true 00:13:51.053 03:46:09 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2336561 00:13:51.053 03:46:09 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:51.991 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:51.991 03:46:10 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:51.991 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:51.991 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:52.249 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:52.249 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:52.249 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:52.249 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:52.249 03:46:11 -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:13:52.249 03:46:11 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:13:52.507 true 00:13:52.507 03:46:11 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2336561 00:13:52.507 03:46:11 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:53.445 03:46:12 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:53.445 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:53.703 03:46:12 -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:13:53.703 03:46:12 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:13:53.703 true 00:13:53.703 03:46:12 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2336561 00:13:53.703 03:46:12 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:53.960 03:46:12 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:54.218 03:46:13 -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:13:54.218 03:46:13 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:13:54.475 true 00:13:54.475 03:46:13 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2336561 00:13:54.475 03:46:13 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:55.406 03:46:14 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:55.406 Message suppressed 999 times: Read completed with error (sct=0, 
sc=11) 00:13:55.663 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:55.663 03:46:14 -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:13:55.663 03:46:14 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:13:55.921 true 00:13:55.921 03:46:14 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2336561 00:13:55.921 03:46:14 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:56.179 03:46:15 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:56.437 03:46:15 -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:13:56.438 03:46:15 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:13:56.695 true 00:13:56.695 03:46:15 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2336561 00:13:56.695 03:46:15 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:57.633 03:46:16 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:57.633 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:57.892 03:46:16 -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:13:57.892 03:46:16 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:13:57.892 true 00:13:58.150 03:46:16 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2336561 00:13:58.150 03:46:16 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:58.150 03:46:17 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:58.408 03:46:17 -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:13:58.408 03:46:17 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:13:58.666 true 00:13:58.666 03:46:17 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2336561 00:13:58.666 03:46:17 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:59.601 03:46:18 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:59.858 03:46:18 -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:13:59.858 03:46:18 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:14:00.116 true 00:14:00.116 03:46:18 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2336561 00:14:00.116 03:46:18 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:00.374 03:46:19 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:00.633 03:46:19 -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:14:00.633 03:46:19 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:14:00.891 true 00:14:00.891 03:46:19 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2336561 00:14:00.891 03:46:19 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:01.905 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:01.905 03:46:20 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:01.905 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:01.905 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:01.905 03:46:20 -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:14:01.905 03:46:20 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:14:02.165 true 00:14:02.165 03:46:21 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2336561 00:14:02.165 03:46:21 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:02.422 03:46:21 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:02.679 03:46:21 -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:14:02.679 03:46:21 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:14:02.936 true 00:14:02.936 03:46:21 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2336561 00:14:02.936 03:46:21 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:03.873 03:46:22 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:04.439 03:46:23 -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:14:04.439 03:46:23 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:14:04.439 true 00:14:04.439 03:46:23 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2336561 00:14:04.439 03:46:23 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:04.696 03:46:23 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:04.953 03:46:23 -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:14:04.953 03:46:23 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:14:05.210 true 00:14:05.210 03:46:24 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2336561 00:14:05.210 03:46:24 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:14:05.467 03:46:24 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:05.726 03:46:24 -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:14:05.726 03:46:24 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:14:05.984 true 00:14:05.984 03:46:24 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2336561 00:14:05.984 03:46:24 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:06.920 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:06.920 03:46:25 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:07.178 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:07.178 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:07.178 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:07.178 Initializing NVMe Controllers 00:14:07.178 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:07.178 Controller IO queue size 128, less than required. 00:14:07.178 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:07.178 Controller IO queue size 128, less than required. 00:14:07.178 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:07.178 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:07.178 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:14:07.178 Initialization complete. Launching workers. 
00:14:07.178 ======================================================== 00:14:07.178 Latency(us) 00:14:07.178 Device Information : IOPS MiB/s Average min max 00:14:07.178 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1188.13 0.58 57008.58 1830.63 1028299.67 00:14:07.178 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 11716.10 5.72 10925.41 2450.43 361724.56 00:14:07.178 ======================================================== 00:14:07.178 Total : 12904.23 6.30 15168.43 1830.63 1028299.67 00:14:07.178 00:14:07.178 03:46:26 -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:14:07.178 03:46:26 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:14:07.437 true 00:14:07.437 03:46:26 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2336561 00:14:07.437 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (2336561) - No such process 00:14:07.437 03:46:26 -- target/ns_hotplug_stress.sh@53 -- # wait 2336561 00:14:07.437 03:46:26 -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:07.695 03:46:26 -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:07.953 03:46:26 -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:14:07.953 03:46:26 -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:14:07.953 03:46:26 -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:14:07.953 03:46:26 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:07.953 03:46:26 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:14:08.210 null0 00:14:08.210 03:46:27 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:08.210 03:46:27 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:08.211 03:46:27 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:14:08.468 null1 00:14:08.468 03:46:27 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:08.468 03:46:27 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:08.469 03:46:27 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:14:08.727 null2 00:14:08.727 03:46:27 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:08.727 03:46:27 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:08.727 03:46:27 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:14:08.985 null3 00:14:08.985 03:46:27 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:08.985 03:46:27 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:08.985 03:46:27 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:14:09.243 null4 00:14:09.243 03:46:27 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:09.243 03:46:27 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:09.243 03:46:27 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 
00:14:09.502 null5 00:14:09.502 03:46:28 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:09.502 03:46:28 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:09.502 03:46:28 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:14:09.502 null6 00:14:09.502 03:46:28 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:09.502 03:46:28 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:09.502 03:46:28 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:14:09.761 null7 00:14:09.761 03:46:28 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:09.761 03:46:28 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:09.761 03:46:28 -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:14:09.761 03:46:28 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:09.761 03:46:28 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:14:09.761 03:46:28 -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:14:09.761 03:46:28 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:09.761 03:46:28 -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:14:09.761 03:46:28 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:09.761 03:46:28 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:09.761 03:46:28 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:09.761 03:46:28 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:09.761 03:46:28 -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:14:09.761 03:46:28 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:14:09.761 03:46:28 -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:14:09.761 03:46:28 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:09.761 03:46:28 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:09.761 03:46:28 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:09.761 03:46:28 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:09.761 03:46:28 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:09.761 03:46:28 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:14:09.761 03:46:28 -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:14:09.761 03:46:28 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:09.761 03:46:28 -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:14:09.761 03:46:28 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:09.761 03:46:28 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:09.761 03:46:28 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:09.761 03:46:28 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:09.761 03:46:28 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:14:09.761 03:46:28 -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:14:09.761 03:46:28 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:09.761 03:46:28 -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:14:09.761 03:46:28 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:09.761 03:46:28 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:09.761 03:46:28 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:09.761 03:46:28 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:09.761 03:46:28 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:14:09.761 03:46:28 -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:14:09.761 03:46:28 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:09.761 03:46:28 -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:14:09.761 03:46:28 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:09.761 03:46:28 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:09.761 03:46:28 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:09.761 03:46:28 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:09.761 03:46:28 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:14:09.761 03:46:28 -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:14:09.761 03:46:28 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:09.761 03:46:28 -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:14:09.761 03:46:28 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:09.761 03:46:28 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:09.761 03:46:28 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:09.761 03:46:28 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:09.761 03:46:28 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:14:09.761 03:46:28 -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:14:09.761 03:46:28 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:09.761 03:46:28 -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:14:09.761 03:46:28 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:09.761 03:46:28 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:09.761 03:46:28 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:09.761 03:46:28 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:09.761 03:46:28 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:14:09.761 03:46:28 -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:14:09.761 03:46:28 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:09.761 03:46:28 -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:14:09.761 03:46:28 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:09.761 03:46:28 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:09.761 03:46:28 -- target/ns_hotplug_stress.sh@66 -- # wait 2340595 2340596 2340598 2340600 2340602 2340604 2340606 2340608 00:14:09.761 03:46:28 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:09.761 03:46:28 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:10.019 03:46:28 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:10.019 03:46:28 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:10.019 03:46:28 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:10.019 03:46:28 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:10.278 03:46:28 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:10.278 03:46:28 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:10.278 03:46:28 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:10.278 03:46:28 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:10.278 03:46:29 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:10.278 03:46:29 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:10.278 03:46:29 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:10.278 03:46:29 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:10.278 03:46:29 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:10.278 03:46:29 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:10.278 03:46:29 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:10.278 03:46:29 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:10.278 03:46:29 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:10.278 03:46:29 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:10.278 03:46:29 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:10.278 03:46:29 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 
00:14:10.278 03:46:29 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:10.536 03:46:29 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:10.536 03:46:29 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:10.536 03:46:29 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:10.536 03:46:29 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:10.536 03:46:29 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:10.536 03:46:29 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:10.536 03:46:29 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:10.536 03:46:29 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:10.536 03:46:29 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:10.536 03:46:29 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:10.536 03:46:29 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:10.536 03:46:29 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:10.536 03:46:29 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:10.536 03:46:29 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:10.536 03:46:29 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:10.536 03:46:29 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:10.795 03:46:29 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:10.795 03:46:29 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:10.795 03:46:29 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:10.795 03:46:29 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:10.795 03:46:29 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:10.795 03:46:29 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:10.795 03:46:29 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:10.795 03:46:29 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:10.795 03:46:29 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:10.795 03:46:29 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:10.795 03:46:29 -- target/ns_hotplug_stress.sh@16 
-- # (( i < 10 )) 00:14:10.795 03:46:29 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:11.053 03:46:29 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:11.053 03:46:29 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:11.053 03:46:29 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:11.053 03:46:29 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:11.053 03:46:29 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:11.053 03:46:29 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:11.053 03:46:29 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:11.053 03:46:29 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:11.053 03:46:29 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:11.053 03:46:29 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:11.053 03:46:29 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:11.053 03:46:29 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:11.053 03:46:29 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:11.053 03:46:29 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:11.053 03:46:29 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:11.053 03:46:29 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:11.053 03:46:29 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:11.312 03:46:29 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:11.312 03:46:29 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:11.312 03:46:30 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:11.312 03:46:30 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:11.312 03:46:30 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:11.312 03:46:30 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:11.571 03:46:30 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:11.571 03:46:30 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:11.571 03:46:30 -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:11.571 03:46:30 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:11.571 03:46:30 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:11.571 03:46:30 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:11.571 03:46:30 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:11.571 03:46:30 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:11.571 03:46:30 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:11.571 03:46:30 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:11.571 03:46:30 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:11.571 03:46:30 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:11.571 03:46:30 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:11.571 03:46:30 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:11.571 03:46:30 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:11.571 03:46:30 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:11.571 03:46:30 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:11.571 03:46:30 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:11.571 03:46:30 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:11.571 03:46:30 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:11.571 03:46:30 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:11.571 03:46:30 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:11.571 03:46:30 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:11.571 03:46:30 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:11.829 03:46:30 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:11.829 03:46:30 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:11.829 03:46:30 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:11.829 03:46:30 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:11.829 03:46:30 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:11.829 03:46:30 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:11.829 03:46:30 -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:11.829 03:46:30 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:12.088 03:46:30 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:12.088 03:46:30 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:12.088 03:46:30 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:12.088 03:46:30 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:12.088 03:46:30 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:12.088 03:46:30 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:12.088 03:46:30 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:12.088 03:46:30 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:12.088 03:46:30 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:12.088 03:46:30 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:12.088 03:46:30 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:12.088 03:46:30 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:12.088 03:46:30 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:12.088 03:46:30 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:12.088 03:46:30 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:12.088 03:46:30 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:12.088 03:46:30 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:12.088 03:46:30 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:12.088 03:46:30 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:12.088 03:46:30 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:12.088 03:46:30 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:12.088 03:46:30 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:12.088 03:46:30 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:12.088 03:46:30 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:12.346 03:46:31 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:12.346 03:46:31 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:12.346 03:46:31 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:12.346 03:46:31 -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:12.346 03:46:31 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:12.346 03:46:31 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:12.346 03:46:31 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:12.347 03:46:31 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:12.605 03:46:31 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:12.605 03:46:31 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:12.605 03:46:31 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:12.605 03:46:31 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:12.605 03:46:31 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:12.605 03:46:31 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:12.605 03:46:31 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:12.605 03:46:31 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:12.605 03:46:31 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:12.605 03:46:31 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:12.605 03:46:31 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:12.605 03:46:31 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:12.605 03:46:31 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:12.605 03:46:31 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:12.605 03:46:31 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:12.605 03:46:31 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:12.605 03:46:31 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:12.605 03:46:31 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:12.605 03:46:31 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:12.605 03:46:31 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:12.605 03:46:31 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:12.605 03:46:31 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:12.605 03:46:31 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:12.605 03:46:31 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:12.863 03:46:31 -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:12.863 03:46:31 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:12.863 03:46:31 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:12.863 03:46:31 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:12.863 03:46:31 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:12.863 03:46:31 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:12.863 03:46:31 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:12.863 03:46:31 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:13.122 03:46:31 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:13.122 03:46:31 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:13.122 03:46:31 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:13.122 03:46:31 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:13.122 03:46:31 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:13.122 03:46:31 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:13.122 03:46:31 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:13.122 03:46:31 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:13.122 03:46:31 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:13.122 03:46:31 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:13.122 03:46:31 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:13.122 03:46:31 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:13.122 03:46:31 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:13.122 03:46:31 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:13.122 03:46:31 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:13.122 03:46:31 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:13.122 03:46:31 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:13.122 03:46:31 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:13.122 03:46:31 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:13.122 03:46:31 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 
00:14:13.122 03:46:31 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:13.122 03:46:31 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:13.122 03:46:31 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:13.122 03:46:31 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:13.380 03:46:32 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:13.380 03:46:32 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:13.380 03:46:32 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:13.380 03:46:32 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:13.380 03:46:32 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:13.380 03:46:32 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:13.380 03:46:32 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:13.380 03:46:32 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:13.638 03:46:32 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:13.638 03:46:32 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:13.639 03:46:32 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:13.639 03:46:32 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:13.639 03:46:32 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:13.639 03:46:32 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:13.639 03:46:32 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:13.639 03:46:32 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:13.639 03:46:32 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:13.639 03:46:32 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:13.639 03:46:32 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:13.639 03:46:32 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:13.639 03:46:32 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:13.639 03:46:32 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:13.639 03:46:32 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:13.639 03:46:32 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:13.639 03:46:32 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:13.639 03:46:32 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:13.639 03:46:32 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:13.639 03:46:32 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:13.639 03:46:32 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:13.639 03:46:32 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:13.639 03:46:32 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:13.639 03:46:32 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:13.897 03:46:32 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:13.897 03:46:32 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:13.897 03:46:32 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:13.897 03:46:32 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:13.897 03:46:32 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:13.897 03:46:32 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:13.897 03:46:32 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:13.897 03:46:32 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:14.155 03:46:32 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:14.155 03:46:32 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:14.155 03:46:32 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:14.155 03:46:32 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:14.155 03:46:32 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:14.155 03:46:32 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:14.155 03:46:32 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:14.155 03:46:32 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:14.155 03:46:32 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:14.156 03:46:32 -- target/ns_hotplug_stress.sh@16 -- # (( ++i 
)) 00:14:14.156 03:46:32 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:14.156 03:46:32 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:14.156 03:46:32 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:14.156 03:46:32 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:14.156 03:46:32 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:14.156 03:46:32 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:14.156 03:46:32 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:14.156 03:46:32 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:14.156 03:46:32 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:14.156 03:46:32 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:14.156 03:46:32 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:14.156 03:46:32 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:14.156 03:46:32 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:14.156 03:46:32 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:14.414 03:46:33 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:14.414 03:46:33 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:14.414 03:46:33 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:14.414 03:46:33 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:14.414 03:46:33 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:14.414 03:46:33 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:14.414 03:46:33 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:14.414 03:46:33 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:14.672 03:46:33 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:14.672 03:46:33 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:14.672 03:46:33 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:14.672 03:46:33 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:14.672 03:46:33 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:14.672 03:46:33 -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:14.672 03:46:33 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:14.672 03:46:33 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:14.672 03:46:33 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:14.672 03:46:33 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:14.672 03:46:33 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:14.672 03:46:33 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:14.672 03:46:33 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:14.672 03:46:33 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:14.672 03:46:33 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:14.672 03:46:33 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:14.672 03:46:33 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:14.672 03:46:33 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:14.672 03:46:33 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:14.672 03:46:33 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:14.672 03:46:33 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:14.672 03:46:33 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:14.672 03:46:33 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:14.672 03:46:33 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:14.929 03:46:33 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:14.929 03:46:33 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:14.929 03:46:33 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:14.929 03:46:33 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:14.929 03:46:33 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:14.929 03:46:33 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:14.929 03:46:33 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:14.929 03:46:33 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:15.186 03:46:33 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:15.186 03:46:33 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:15.186 03:46:33 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:15.186 03:46:33 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:15.186 03:46:33 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:15.186 03:46:33 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:15.186 03:46:33 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:15.186 03:46:33 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:15.186 03:46:33 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:15.186 03:46:33 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:15.186 03:46:33 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:15.186 03:46:33 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:15.186 03:46:33 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:15.186 03:46:33 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:15.186 03:46:33 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:15.186 03:46:33 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:15.186 03:46:33 -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:14:15.186 03:46:33 -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:14:15.186 03:46:33 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:15.186 03:46:33 -- nvmf/common.sh@116 -- # sync 00:14:15.186 03:46:33 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:14:15.186 03:46:33 -- nvmf/common.sh@119 -- # set +e 00:14:15.186 03:46:33 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:15.186 03:46:33 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:14:15.186 rmmod nvme_tcp 00:14:15.186 rmmod nvme_fabrics 00:14:15.186 rmmod nvme_keyring 00:14:15.186 03:46:34 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:15.186 03:46:34 -- nvmf/common.sh@123 -- # set -e 00:14:15.186 03:46:34 -- nvmf/common.sh@124 -- # return 0 00:14:15.186 03:46:34 -- nvmf/common.sh@477 -- # '[' -n 2336121 ']' 00:14:15.186 03:46:34 -- nvmf/common.sh@478 -- # killprocess 2336121 00:14:15.186 03:46:34 -- common/autotest_common.sh@926 -- # '[' -z 2336121 ']' 00:14:15.186 03:46:34 -- common/autotest_common.sh@930 -- # kill -0 2336121 00:14:15.186 03:46:34 -- common/autotest_common.sh@931 -- # uname 00:14:15.186 03:46:34 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:15.186 03:46:34 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2336121 00:14:15.186 03:46:34 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:14:15.186 03:46:34 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:14:15.186 03:46:34 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2336121' 00:14:15.186 killing process with pid 2336121 00:14:15.186 03:46:34 -- common/autotest_common.sh@945 -- # kill 2336121 00:14:15.186 03:46:34 -- common/autotest_common.sh@950 -- # wait 2336121 00:14:15.445 03:46:34 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:15.445 03:46:34 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:14:15.445 03:46:34 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:14:15.445 03:46:34 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:15.445 03:46:34 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:14:15.445 03:46:34 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:15.445 03:46:34 -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:14:15.445 03:46:34 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:17.384 03:46:36 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:14:17.384 00:14:17.384 real 0m46.125s 00:14:17.384 user 3m26.558s 00:14:17.384 sys 0m16.469s 00:14:17.384 03:46:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:17.384 03:46:36 -- common/autotest_common.sh@10 -- # set +x 00:14:17.384 ************************************ 00:14:17.384 END TEST nvmf_ns_hotplug_stress 00:14:17.384 ************************************ 00:14:17.642 03:46:36 -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:14:17.642 03:46:36 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:14:17.642 03:46:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:17.642 03:46:36 -- common/autotest_common.sh@10 -- # set +x 00:14:17.642 ************************************ 00:14:17.642 START TEST nvmf_connect_stress 00:14:17.642 ************************************ 00:14:17.642 03:46:36 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:14:17.642 * Looking for test storage... 00:14:17.642 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:17.642 03:46:36 -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:17.642 03:46:36 -- nvmf/common.sh@7 -- # uname -s 00:14:17.642 03:46:36 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:17.642 03:46:36 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:17.642 03:46:36 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:17.642 03:46:36 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:17.642 03:46:36 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:17.642 03:46:36 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:17.642 03:46:36 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:17.642 03:46:36 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:17.642 03:46:36 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:17.642 03:46:36 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:17.642 03:46:36 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:17.642 03:46:36 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:14:17.642 03:46:36 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:17.642 03:46:36 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:17.642 03:46:36 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:17.642 03:46:36 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:17.642 03:46:36 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:17.642 03:46:36 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:17.642 03:46:36 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:17.642 03:46:36 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:17.643 03:46:36 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:17.643 03:46:36 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:17.643 03:46:36 -- paths/export.sh@5 -- # export PATH 00:14:17.643 03:46:36 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:17.643 03:46:36 -- nvmf/common.sh@46 -- # : 0 00:14:17.643 03:46:36 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:17.643 03:46:36 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:17.643 03:46:36 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:17.643 03:46:36 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:17.643 03:46:36 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:17.643 03:46:36 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:17.643 03:46:36 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:17.643 03:46:36 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:17.643 03:46:36 -- target/connect_stress.sh@12 -- # nvmftestinit 00:14:17.643 03:46:36 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:17.643 03:46:36 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:17.643 03:46:36 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:17.643 03:46:36 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:17.643 03:46:36 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:17.643 03:46:36 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:17.643 03:46:36 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:17.643 03:46:36 -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:17.643 03:46:36 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:14:17.643 03:46:36 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:14:17.643 03:46:36 -- nvmf/common.sh@284 -- # xtrace_disable 00:14:17.643 03:46:36 -- common/autotest_common.sh@10 -- # set +x 00:14:19.545 03:46:38 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:14:19.545 03:46:38 -- nvmf/common.sh@290 -- # pci_devs=() 00:14:19.545 03:46:38 -- nvmf/common.sh@290 -- # local -a pci_devs 00:14:19.545 03:46:38 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:14:19.545 03:46:38 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:14:19.545 03:46:38 -- nvmf/common.sh@292 -- # pci_drivers=() 00:14:19.545 03:46:38 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:14:19.545 03:46:38 -- nvmf/common.sh@294 -- # net_devs=() 00:14:19.545 03:46:38 -- nvmf/common.sh@294 -- # local -ga net_devs 00:14:19.545 03:46:38 -- nvmf/common.sh@295 -- # e810=() 00:14:19.545 03:46:38 -- nvmf/common.sh@295 -- # local -ga e810 00:14:19.545 03:46:38 -- nvmf/common.sh@296 -- # x722=() 00:14:19.545 03:46:38 -- nvmf/common.sh@296 -- # local -ga x722 00:14:19.545 03:46:38 -- nvmf/common.sh@297 -- # mlx=() 00:14:19.545 03:46:38 -- nvmf/common.sh@297 -- # local -ga mlx 00:14:19.545 03:46:38 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:19.545 03:46:38 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:19.545 03:46:38 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:19.545 03:46:38 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:19.545 03:46:38 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:19.545 03:46:38 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:19.545 03:46:38 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:19.545 03:46:38 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:19.545 03:46:38 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:19.545 03:46:38 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:19.545 03:46:38 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:19.545 03:46:38 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:14:19.545 03:46:38 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:14:19.545 03:46:38 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:14:19.545 03:46:38 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:14:19.545 03:46:38 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:14:19.545 03:46:38 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:14:19.545 03:46:38 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:19.545 03:46:38 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:14:19.545 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:14:19.545 03:46:38 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:14:19.545 03:46:38 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:14:19.545 03:46:38 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:19.545 03:46:38 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:19.545 03:46:38 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:14:19.545 03:46:38 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:19.545 03:46:38 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:14:19.545 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:14:19.545 
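Stepping back to the test that just finished: the namespace churn traced at the start of this section (ns_hotplug_stress.sh@16-18) is an attach/detach loop of up to ten passes. Each pass re-adds namespaces 1-8 of nqn.2016-06.io.spdk:cnode1, each backed by one of the null bdevs null0-null7, then removes them all again; the varying order of the add_ns/remove_ns lines above suggests the script shuffles the order on every pass. A minimal bash sketch of that loop, with shuf standing in for whatever randomization the real script uses and rpc.py abbreviated from the full workspace path:

  RPC=scripts/rpc.py                                   # abbreviated; the log invokes the full workspace path
  for ((i = 0; i < 10; i++)); do
      for n in $(shuf -i 1-8); do                      # hot-add namespaces 1..8 in random order
          $RPC nvmf_subsystem_add_ns -n "$n" nqn.2016-06.io.spdk:cnode1 "null$((n - 1))"
      done
      for n in $(shuf -i 1-8); do                      # then hot-remove them all again
          $RPC nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$n"
      done
  done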
03:46:38 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:14:19.545 03:46:38 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:14:19.545 03:46:38 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:19.545 03:46:38 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:19.545 03:46:38 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:14:19.545 03:46:38 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:14:19.545 03:46:38 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:14:19.545 03:46:38 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:14:19.545 03:46:38 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:19.545 03:46:38 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:19.545 03:46:38 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:19.545 03:46:38 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:19.545 03:46:38 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:14:19.545 Found net devices under 0000:0a:00.0: cvl_0_0 00:14:19.545 03:46:38 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:19.545 03:46:38 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:19.545 03:46:38 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:19.545 03:46:38 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:19.545 03:46:38 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:19.545 03:46:38 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:14:19.545 Found net devices under 0000:0a:00.1: cvl_0_1 00:14:19.545 03:46:38 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:19.545 03:46:38 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:14:19.545 03:46:38 -- nvmf/common.sh@402 -- # is_hw=yes 00:14:19.545 03:46:38 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:14:19.545 03:46:38 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:14:19.545 03:46:38 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:14:19.545 03:46:38 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:19.545 03:46:38 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:19.545 03:46:38 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:19.545 03:46:38 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:14:19.545 03:46:38 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:19.545 03:46:38 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:19.545 03:46:38 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:14:19.545 03:46:38 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:19.545 03:46:38 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:19.545 03:46:38 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:14:19.545 03:46:38 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:14:19.545 03:46:38 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:14:19.545 03:46:38 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:19.545 03:46:38 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:19.545 03:46:38 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:19.545 03:46:38 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:14:19.545 03:46:38 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:19.545 03:46:38 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:19.545 03:46:38 -- 
nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:19.545 03:46:38 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:14:19.545 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:19.545 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.201 ms 00:14:19.545 00:14:19.545 --- 10.0.0.2 ping statistics --- 00:14:19.545 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:19.545 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:14:19.545 03:46:38 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:19.545 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:19.545 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.128 ms 00:14:19.545 00:14:19.545 --- 10.0.0.1 ping statistics --- 00:14:19.545 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:19.545 rtt min/avg/max/mdev = 0.128/0.128/0.128/0.000 ms 00:14:19.545 03:46:38 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:19.545 03:46:38 -- nvmf/common.sh@410 -- # return 0 00:14:19.545 03:46:38 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:19.545 03:46:38 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:19.545 03:46:38 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:19.545 03:46:38 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:19.545 03:46:38 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:19.545 03:46:38 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:19.545 03:46:38 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:19.805 03:46:38 -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:14:19.805 03:46:38 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:19.805 03:46:38 -- common/autotest_common.sh@712 -- # xtrace_disable 00:14:19.805 03:46:38 -- common/autotest_common.sh@10 -- # set +x 00:14:19.805 03:46:38 -- nvmf/common.sh@469 -- # nvmfpid=2343383 00:14:19.805 03:46:38 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:14:19.805 03:46:38 -- nvmf/common.sh@470 -- # waitforlisten 2343383 00:14:19.805 03:46:38 -- common/autotest_common.sh@819 -- # '[' -z 2343383 ']' 00:14:19.805 03:46:38 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:19.805 03:46:38 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:19.805 03:46:38 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:19.805 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:19.805 03:46:38 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:19.805 03:46:38 -- common/autotest_common.sh@10 -- # set +x 00:14:19.805 [2024-07-14 03:46:38.544585] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
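The nvmf_tcp_init sequence above splits the two port functions of the E810 found earlier between the two sides of the test: cvl_0_0 is moved into a new network namespace (cvl_0_0_ns_spdk) and becomes the target interface at 10.0.0.2, while cvl_0_1 stays in the root namespace as the initiator interface at 10.0.0.1; port 4420 is opened in iptables and both directions are verified with a single ping. Condensed into one block, using the same interface names and addresses as this run:

  ip netns add cvl_0_0_ns_spdk                                        # target side gets its own namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator interface, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target interface, inside the namespace
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # let NVMe/TCP traffic in
  ping -c 1 10.0.0.2                                                  # root namespace to target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target namespace back to initiator

With the target process later started under ip netns exec cvl_0_0_ns_spdk, the kernel initiator and the SPDK target share one machine without the TCP connection short-circuiting over loopback.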
00:14:19.805 [2024-07-14 03:46:38.544668] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:19.805 EAL: No free 2048 kB hugepages reported on node 1 00:14:19.805 [2024-07-14 03:46:38.615533] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:19.805 [2024-07-14 03:46:38.709663] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:19.805 [2024-07-14 03:46:38.709837] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:19.805 [2024-07-14 03:46:38.709857] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:19.805 [2024-07-14 03:46:38.709882] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:19.805 [2024-07-14 03:46:38.709979] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:19.805 [2024-07-14 03:46:38.711885] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:19.805 [2024-07-14 03:46:38.711889] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:20.743 03:46:39 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:20.743 03:46:39 -- common/autotest_common.sh@852 -- # return 0 00:14:20.743 03:46:39 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:20.743 03:46:39 -- common/autotest_common.sh@718 -- # xtrace_disable 00:14:20.743 03:46:39 -- common/autotest_common.sh@10 -- # set +x 00:14:20.743 03:46:39 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:20.743 03:46:39 -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:20.743 03:46:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:20.743 03:46:39 -- common/autotest_common.sh@10 -- # set +x 00:14:20.743 [2024-07-14 03:46:39.547629] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:20.743 03:46:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:20.743 03:46:39 -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:20.743 03:46:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:20.743 03:46:39 -- common/autotest_common.sh@10 -- # set +x 00:14:20.743 03:46:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:20.743 03:46:39 -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:20.743 03:46:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:20.743 03:46:39 -- common/autotest_common.sh@10 -- # set +x 00:14:20.743 [2024-07-14 03:46:39.575994] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:20.743 03:46:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:20.743 03:46:39 -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:20.743 03:46:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:20.743 03:46:39 -- common/autotest_common.sh@10 -- # set +x 00:14:20.743 NULL1 00:14:20.743 03:46:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:20.743 03:46:39 -- target/connect_stress.sh@21 -- # PERF_PID=2343537 00:14:20.743 03:46:39 -- target/connect_stress.sh@23 -- # 
rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:20.743 03:46:39 -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:20.743 03:46:39 -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:14:20.743 03:46:39 -- target/connect_stress.sh@27 -- # seq 1 20 00:14:20.743 03:46:39 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:20.743 03:46:39 -- target/connect_stress.sh@28 -- # cat 00:14:20.743 03:46:39 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:20.743 03:46:39 -- target/connect_stress.sh@28 -- # cat 00:14:20.743 03:46:39 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:20.743 03:46:39 -- target/connect_stress.sh@28 -- # cat 00:14:20.743 03:46:39 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:20.743 03:46:39 -- target/connect_stress.sh@28 -- # cat 00:14:20.743 03:46:39 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:20.743 03:46:39 -- target/connect_stress.sh@28 -- # cat 00:14:20.743 03:46:39 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:20.743 03:46:39 -- target/connect_stress.sh@28 -- # cat 00:14:20.743 03:46:39 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:20.743 03:46:39 -- target/connect_stress.sh@28 -- # cat 00:14:20.743 03:46:39 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:20.743 03:46:39 -- target/connect_stress.sh@28 -- # cat 00:14:20.743 03:46:39 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:20.743 03:46:39 -- target/connect_stress.sh@28 -- # cat 00:14:20.743 03:46:39 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:20.743 03:46:39 -- target/connect_stress.sh@28 -- # cat 00:14:20.743 03:46:39 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:20.743 03:46:39 -- target/connect_stress.sh@28 -- # cat 00:14:20.743 03:46:39 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:20.743 03:46:39 -- target/connect_stress.sh@28 -- # cat 00:14:20.743 03:46:39 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:20.743 03:46:39 -- target/connect_stress.sh@28 -- # cat 00:14:20.743 EAL: No free 2048 kB hugepages reported on node 1 00:14:20.743 03:46:39 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:20.743 03:46:39 -- target/connect_stress.sh@28 -- # cat 00:14:20.743 03:46:39 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:20.743 03:46:39 -- target/connect_stress.sh@28 -- # cat 00:14:20.743 03:46:39 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:20.743 03:46:39 -- target/connect_stress.sh@28 -- # cat 00:14:20.743 03:46:39 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:20.743 03:46:39 -- target/connect_stress.sh@28 -- # cat 00:14:20.744 03:46:39 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:20.744 03:46:39 -- target/connect_stress.sh@28 -- # cat 00:14:20.744 03:46:39 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:20.744 03:46:39 -- target/connect_stress.sh@28 -- # cat 00:14:20.744 03:46:39 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:20.744 03:46:39 -- target/connect_stress.sh@28 -- # cat 00:14:20.744 03:46:39 -- target/connect_stress.sh@34 -- # kill -0 2343537 00:14:20.744 03:46:39 -- target/connect_stress.sh@35 -- # 
rpc_cmd 00:14:20.744 03:46:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:20.744 03:46:39 -- common/autotest_common.sh@10 -- # set +x 00:14:21.309 03:46:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:21.309 03:46:39 -- target/connect_stress.sh@34 -- # kill -0 2343537 00:14:21.309 03:46:39 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:21.309 03:46:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:21.310 03:46:39 -- common/autotest_common.sh@10 -- # set +x 00:14:21.567 03:46:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:21.567 03:46:40 -- target/connect_stress.sh@34 -- # kill -0 2343537 00:14:21.567 03:46:40 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:21.567 03:46:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:21.567 03:46:40 -- common/autotest_common.sh@10 -- # set +x 00:14:21.826 03:46:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:21.826 03:46:40 -- target/connect_stress.sh@34 -- # kill -0 2343537 00:14:21.826 03:46:40 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:21.826 03:46:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:21.826 03:46:40 -- common/autotest_common.sh@10 -- # set +x 00:14:22.084 03:46:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:22.084 03:46:40 -- target/connect_stress.sh@34 -- # kill -0 2343537 00:14:22.084 03:46:40 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:22.084 03:46:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:22.084 03:46:40 -- common/autotest_common.sh@10 -- # set +x 00:14:22.342 03:46:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:22.342 03:46:41 -- target/connect_stress.sh@34 -- # kill -0 2343537 00:14:22.342 03:46:41 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:22.342 03:46:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:22.342 03:46:41 -- common/autotest_common.sh@10 -- # set +x 00:14:22.909 03:46:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:22.909 03:46:41 -- target/connect_stress.sh@34 -- # kill -0 2343537 00:14:22.909 03:46:41 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:22.909 03:46:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:22.909 03:46:41 -- common/autotest_common.sh@10 -- # set +x 00:14:23.168 03:46:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:23.168 03:46:41 -- target/connect_stress.sh@34 -- # kill -0 2343537 00:14:23.168 03:46:41 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:23.168 03:46:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:23.168 03:46:41 -- common/autotest_common.sh@10 -- # set +x 00:14:23.426 03:46:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:23.426 03:46:42 -- target/connect_stress.sh@34 -- # kill -0 2343537 00:14:23.426 03:46:42 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:23.426 03:46:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:23.426 03:46:42 -- common/autotest_common.sh@10 -- # set +x 00:14:23.684 03:46:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:23.684 03:46:42 -- target/connect_stress.sh@34 -- # kill -0 2343537 00:14:23.684 03:46:42 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:23.684 03:46:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:23.684 03:46:42 -- common/autotest_common.sh@10 -- # set +x 00:14:23.943 03:46:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:23.944 03:46:42 -- target/connect_stress.sh@34 -- # kill -0 2343537 00:14:23.944 03:46:42 -- target/connect_stress.sh@35 -- # rpc_cmd 
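Strung together, the rpc_cmd calls above provision the whole connect_stress target: a TCP transport, one subsystem capped at ten namespaces, a listener on the namespaced interface, and a null bdev to serve I/O. The connect_stress tool is then pointed at that listener, and the script repeatedly checks it is still alive with kill -0 (issuing RPCs against the target on each pass), which is what the stream of "kill -0 2343537" entries around here records. A sketch of the equivalent shell sequence, with the rpc.py and connect_stress paths abbreviated from the full workspace paths in the log and the per-pass RPC traffic reduced to a plain liveness poll:

  RPC=scripts/rpc.py
  $RPC nvmf_create_transport -t tcp -o -u 8192                        # transport options exactly as passed in this run
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
       -a -s SPDK00000000000001 -m 10                                 # allow any host, at most 10 namespaces
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
       -t tcp -a 10.0.0.2 -s 4420
  $RPC bdev_null_create NULL1 1000 512                                # 1000 MiB null bdev, 512-byte blocks
  test/nvme/connect_stress/connect_stress -c 0x1 \
       -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 &
  PERF_PID=$!
  while kill -0 "$PERF_PID" 2>/dev/null; do                           # simplified stand-in for the kill -0 / rpc_cmd loop
      sleep 1
  done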
00:14:23.944 03:46:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:23.944 03:46:42 -- common/autotest_common.sh@10 -- # set +x 00:14:24.510 03:46:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:24.510 03:46:43 -- target/connect_stress.sh@34 -- # kill -0 2343537 00:14:24.510 03:46:43 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:24.510 03:46:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:24.510 03:46:43 -- common/autotest_common.sh@10 -- # set +x 00:14:24.767 03:46:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:24.767 03:46:43 -- target/connect_stress.sh@34 -- # kill -0 2343537 00:14:24.767 03:46:43 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:24.767 03:46:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:24.767 03:46:43 -- common/autotest_common.sh@10 -- # set +x 00:14:25.024 03:46:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:25.024 03:46:43 -- target/connect_stress.sh@34 -- # kill -0 2343537 00:14:25.024 03:46:43 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:25.024 03:46:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:25.024 03:46:43 -- common/autotest_common.sh@10 -- # set +x 00:14:25.282 03:46:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:25.282 03:46:44 -- target/connect_stress.sh@34 -- # kill -0 2343537 00:14:25.282 03:46:44 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:25.282 03:46:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:25.282 03:46:44 -- common/autotest_common.sh@10 -- # set +x 00:14:25.541 03:46:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:25.541 03:46:44 -- target/connect_stress.sh@34 -- # kill -0 2343537 00:14:25.541 03:46:44 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:25.541 03:46:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:25.541 03:46:44 -- common/autotest_common.sh@10 -- # set +x 00:14:26.110 03:46:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:26.110 03:46:44 -- target/connect_stress.sh@34 -- # kill -0 2343537 00:14:26.110 03:46:44 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:26.110 03:46:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:26.110 03:46:44 -- common/autotest_common.sh@10 -- # set +x 00:14:26.368 03:46:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:26.368 03:46:45 -- target/connect_stress.sh@34 -- # kill -0 2343537 00:14:26.368 03:46:45 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:26.368 03:46:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:26.368 03:46:45 -- common/autotest_common.sh@10 -- # set +x 00:14:26.627 03:46:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:26.627 03:46:45 -- target/connect_stress.sh@34 -- # kill -0 2343537 00:14:26.627 03:46:45 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:26.627 03:46:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:26.627 03:46:45 -- common/autotest_common.sh@10 -- # set +x 00:14:26.887 03:46:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:26.887 03:46:45 -- target/connect_stress.sh@34 -- # kill -0 2343537 00:14:26.887 03:46:45 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:26.887 03:46:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:26.887 03:46:45 -- common/autotest_common.sh@10 -- # set +x 00:14:27.145 03:46:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:27.145 03:46:46 -- target/connect_stress.sh@34 -- # kill -0 2343537 00:14:27.145 03:46:46 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:27.145 
03:46:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:27.145 03:46:46 -- common/autotest_common.sh@10 -- # set +x 00:14:27.711 03:46:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:27.711 03:46:46 -- target/connect_stress.sh@34 -- # kill -0 2343537 00:14:27.711 03:46:46 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:27.711 03:46:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:27.711 03:46:46 -- common/autotest_common.sh@10 -- # set +x 00:14:27.970 03:46:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:27.970 03:46:46 -- target/connect_stress.sh@34 -- # kill -0 2343537 00:14:27.970 03:46:46 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:27.970 03:46:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:27.970 03:46:46 -- common/autotest_common.sh@10 -- # set +x 00:14:28.228 03:46:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:28.228 03:46:47 -- target/connect_stress.sh@34 -- # kill -0 2343537 00:14:28.228 03:46:47 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:28.228 03:46:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:28.228 03:46:47 -- common/autotest_common.sh@10 -- # set +x 00:14:28.488 03:46:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:28.488 03:46:47 -- target/connect_stress.sh@34 -- # kill -0 2343537 00:14:28.488 03:46:47 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:28.488 03:46:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:28.488 03:46:47 -- common/autotest_common.sh@10 -- # set +x 00:14:28.748 03:46:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:28.748 03:46:47 -- target/connect_stress.sh@34 -- # kill -0 2343537 00:14:28.748 03:46:47 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:28.748 03:46:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:28.748 03:46:47 -- common/autotest_common.sh@10 -- # set +x 00:14:29.317 03:46:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:29.317 03:46:47 -- target/connect_stress.sh@34 -- # kill -0 2343537 00:14:29.317 03:46:47 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:29.317 03:46:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:29.317 03:46:47 -- common/autotest_common.sh@10 -- # set +x 00:14:29.577 03:46:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:29.577 03:46:48 -- target/connect_stress.sh@34 -- # kill -0 2343537 00:14:29.577 03:46:48 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:29.577 03:46:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:29.577 03:46:48 -- common/autotest_common.sh@10 -- # set +x 00:14:29.838 03:46:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:29.838 03:46:48 -- target/connect_stress.sh@34 -- # kill -0 2343537 00:14:29.838 03:46:48 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:29.838 03:46:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:29.838 03:46:48 -- common/autotest_common.sh@10 -- # set +x 00:14:30.097 03:46:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:30.097 03:46:48 -- target/connect_stress.sh@34 -- # kill -0 2343537 00:14:30.097 03:46:48 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:30.097 03:46:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:30.097 03:46:48 -- common/autotest_common.sh@10 -- # set +x 00:14:30.356 03:46:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:30.356 03:46:49 -- target/connect_stress.sh@34 -- # kill -0 2343537 00:14:30.356 03:46:49 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:30.356 03:46:49 -- 
common/autotest_common.sh@551 -- # xtrace_disable 00:14:30.356 03:46:49 -- common/autotest_common.sh@10 -- # set +x 00:14:30.924 03:46:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:30.924 03:46:49 -- target/connect_stress.sh@34 -- # kill -0 2343537 00:14:30.924 03:46:49 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:30.924 03:46:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:30.924 03:46:49 -- common/autotest_common.sh@10 -- # set +x 00:14:30.924 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:31.210 03:46:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:31.210 03:46:49 -- target/connect_stress.sh@34 -- # kill -0 2343537 00:14:31.210 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (2343537) - No such process 00:14:31.210 03:46:49 -- target/connect_stress.sh@38 -- # wait 2343537 00:14:31.210 03:46:49 -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:31.210 03:46:49 -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:14:31.210 03:46:49 -- target/connect_stress.sh@43 -- # nvmftestfini 00:14:31.210 03:46:49 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:31.210 03:46:49 -- nvmf/common.sh@116 -- # sync 00:14:31.210 03:46:49 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:14:31.210 03:46:49 -- nvmf/common.sh@119 -- # set +e 00:14:31.210 03:46:49 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:31.210 03:46:49 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:14:31.210 rmmod nvme_tcp 00:14:31.210 rmmod nvme_fabrics 00:14:31.210 rmmod nvme_keyring 00:14:31.210 03:46:49 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:31.210 03:46:49 -- nvmf/common.sh@123 -- # set -e 00:14:31.210 03:46:49 -- nvmf/common.sh@124 -- # return 0 00:14:31.210 03:46:49 -- nvmf/common.sh@477 -- # '[' -n 2343383 ']' 00:14:31.210 03:46:49 -- nvmf/common.sh@478 -- # killprocess 2343383 00:14:31.210 03:46:49 -- common/autotest_common.sh@926 -- # '[' -z 2343383 ']' 00:14:31.210 03:46:49 -- common/autotest_common.sh@930 -- # kill -0 2343383 00:14:31.210 03:46:49 -- common/autotest_common.sh@931 -- # uname 00:14:31.210 03:46:49 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:31.210 03:46:49 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2343383 00:14:31.210 03:46:49 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:14:31.210 03:46:49 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:14:31.210 03:46:49 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2343383' 00:14:31.210 killing process with pid 2343383 00:14:31.210 03:46:49 -- common/autotest_common.sh@945 -- # kill 2343383 00:14:31.210 03:46:49 -- common/autotest_common.sh@950 -- # wait 2343383 00:14:31.479 03:46:50 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:31.479 03:46:50 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:14:31.479 03:46:50 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:14:31.479 03:46:50 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:31.479 03:46:50 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:14:31.479 03:46:50 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:31.479 03:46:50 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:31.479 03:46:50 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:33.385 03:46:52 -- nvmf/common.sh@278 -- # ip -4 addr 
flush cvl_0_1 00:14:33.385 00:14:33.385 real 0m15.912s 00:14:33.385 user 0m40.251s 00:14:33.385 sys 0m6.059s 00:14:33.385 03:46:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:33.385 03:46:52 -- common/autotest_common.sh@10 -- # set +x 00:14:33.385 ************************************ 00:14:33.385 END TEST nvmf_connect_stress 00:14:33.385 ************************************ 00:14:33.385 03:46:52 -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:14:33.385 03:46:52 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:14:33.385 03:46:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:33.385 03:46:52 -- common/autotest_common.sh@10 -- # set +x 00:14:33.385 ************************************ 00:14:33.385 START TEST nvmf_fused_ordering 00:14:33.385 ************************************ 00:14:33.385 03:46:52 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:14:33.641 * Looking for test storage... 00:14:33.641 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:33.641 03:46:52 -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:33.641 03:46:52 -- nvmf/common.sh@7 -- # uname -s 00:14:33.641 03:46:52 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:33.641 03:46:52 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:33.641 03:46:52 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:33.641 03:46:52 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:33.641 03:46:52 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:33.641 03:46:52 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:33.641 03:46:52 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:33.641 03:46:52 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:33.641 03:46:52 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:33.641 03:46:52 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:33.642 03:46:52 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:33.642 03:46:52 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:14:33.642 03:46:52 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:33.642 03:46:52 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:33.642 03:46:52 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:33.642 03:46:52 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:33.642 03:46:52 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:33.642 03:46:52 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:33.642 03:46:52 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:33.642 03:46:52 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:33.642 03:46:52 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:33.642 03:46:52 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:33.642 03:46:52 -- paths/export.sh@5 -- # export PATH 00:14:33.642 03:46:52 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:33.642 03:46:52 -- nvmf/common.sh@46 -- # : 0 00:14:33.642 03:46:52 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:33.642 03:46:52 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:33.642 03:46:52 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:33.642 03:46:52 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:33.642 03:46:52 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:33.642 03:46:52 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:33.642 03:46:52 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:33.642 03:46:52 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:33.642 03:46:52 -- target/fused_ordering.sh@12 -- # nvmftestinit 00:14:33.642 03:46:52 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:33.642 03:46:52 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:33.642 03:46:52 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:33.642 03:46:52 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:33.642 03:46:52 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:33.642 03:46:52 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:33.642 03:46:52 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:33.642 03:46:52 -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:33.642 03:46:52 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:14:33.642 03:46:52 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:14:33.642 03:46:52 -- nvmf/common.sh@284 -- # xtrace_disable 00:14:33.642 03:46:52 -- common/autotest_common.sh@10 -- # set +x 00:14:35.546 03:46:54 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:14:35.546 03:46:54 -- nvmf/common.sh@290 -- # pci_devs=() 00:14:35.546 03:46:54 -- nvmf/common.sh@290 -- # local -a pci_devs 00:14:35.546 03:46:54 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:14:35.546 03:46:54 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:14:35.546 03:46:54 -- nvmf/common.sh@292 -- # pci_drivers=() 00:14:35.546 03:46:54 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:14:35.546 03:46:54 -- nvmf/common.sh@294 -- # net_devs=() 00:14:35.546 03:46:54 -- nvmf/common.sh@294 -- # local -ga net_devs 00:14:35.546 03:46:54 -- nvmf/common.sh@295 -- # e810=() 00:14:35.546 03:46:54 -- nvmf/common.sh@295 -- # local -ga e810 00:14:35.546 03:46:54 -- nvmf/common.sh@296 -- # x722=() 00:14:35.546 03:46:54 -- nvmf/common.sh@296 -- # local -ga x722 00:14:35.546 03:46:54 -- nvmf/common.sh@297 -- # mlx=() 00:14:35.546 03:46:54 -- nvmf/common.sh@297 -- # local -ga mlx 00:14:35.546 03:46:54 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:35.546 03:46:54 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:35.546 03:46:54 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:35.546 03:46:54 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:35.546 03:46:54 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:35.546 03:46:54 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:35.546 03:46:54 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:35.546 03:46:54 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:35.546 03:46:54 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:35.546 03:46:54 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:35.546 03:46:54 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:35.546 03:46:54 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:14:35.546 03:46:54 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:14:35.546 03:46:54 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:14:35.546 03:46:54 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:14:35.546 03:46:54 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:14:35.546 03:46:54 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:14:35.546 03:46:54 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:35.546 03:46:54 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:14:35.546 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:14:35.546 03:46:54 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:14:35.546 03:46:54 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:14:35.546 03:46:54 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:35.546 03:46:54 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:35.547 03:46:54 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:14:35.547 03:46:54 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:35.547 03:46:54 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:14:35.547 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:14:35.547 
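Both tests above exit through the same nvmftestfini path, visible twice in this section: clear the exit trap, sync, unload the kernel NVMe modules (the rmmod nvme_tcp / nvme_fabrics / nvme_keyring lines), kill the nvmf_tgt process by PID after checking it is not running as sudo, then flush the initiator-side address. A compact sketch of that cleanup, assuming nvmfpid holds the target PID as in the killprocess calls above; the real helper retries the module unload up to 20 times:

  trap - SIGINT SIGTERM EXIT
  sync
  modprobe -v -r nvme-tcp                       # the rmmod lines above show nvme_fabrics/nvme_keyring going with it
  modprobe -v -r nvme-fabrics
  if kill -0 "$nvmfpid" 2>/dev/null; then       # target still alive?
      pname=$(ps --no-headers -o comm= "$nvmfpid")
      if [ "$pname" != sudo ]; then             # never kill a bare sudo wrapper
          echo "killing process with pid $nvmfpid"
          kill "$nvmfpid"
          wait "$nvmfpid" 2>/dev/null
      fi
  fi
  ip -4 addr flush cvl_0_1                      # drop the initiator-side address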
03:46:54 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:14:35.547 03:46:54 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:14:35.547 03:46:54 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:35.547 03:46:54 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:35.547 03:46:54 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:14:35.547 03:46:54 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:14:35.547 03:46:54 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:14:35.547 03:46:54 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:14:35.547 03:46:54 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:35.547 03:46:54 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:35.547 03:46:54 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:35.547 03:46:54 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:35.547 03:46:54 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:14:35.547 Found net devices under 0000:0a:00.0: cvl_0_0 00:14:35.547 03:46:54 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:35.547 03:46:54 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:35.547 03:46:54 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:35.547 03:46:54 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:35.547 03:46:54 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:35.547 03:46:54 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:14:35.547 Found net devices under 0000:0a:00.1: cvl_0_1 00:14:35.547 03:46:54 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:35.547 03:46:54 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:14:35.547 03:46:54 -- nvmf/common.sh@402 -- # is_hw=yes 00:14:35.547 03:46:54 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:14:35.547 03:46:54 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:14:35.547 03:46:54 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:14:35.547 03:46:54 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:35.547 03:46:54 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:35.547 03:46:54 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:35.547 03:46:54 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:14:35.547 03:46:54 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:35.547 03:46:54 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:35.547 03:46:54 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:14:35.547 03:46:54 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:35.547 03:46:54 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:35.547 03:46:54 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:14:35.547 03:46:54 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:14:35.547 03:46:54 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:14:35.547 03:46:54 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:35.547 03:46:54 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:35.547 03:46:54 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:35.547 03:46:54 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:14:35.547 03:46:54 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:35.547 03:46:54 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:35.547 03:46:54 -- 
nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:35.547 03:46:54 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:14:35.547 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:35.547 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.198 ms 00:14:35.547 00:14:35.547 --- 10.0.0.2 ping statistics --- 00:14:35.547 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:35.547 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:14:35.547 03:46:54 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:35.547 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:35.547 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.103 ms 00:14:35.547 00:14:35.547 --- 10.0.0.1 ping statistics --- 00:14:35.547 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:35.547 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 00:14:35.547 03:46:54 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:35.547 03:46:54 -- nvmf/common.sh@410 -- # return 0 00:14:35.547 03:46:54 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:35.547 03:46:54 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:35.547 03:46:54 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:35.547 03:46:54 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:35.547 03:46:54 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:35.547 03:46:54 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:35.547 03:46:54 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:35.547 03:46:54 -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:14:35.547 03:46:54 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:35.547 03:46:54 -- common/autotest_common.sh@712 -- # xtrace_disable 00:14:35.547 03:46:54 -- common/autotest_common.sh@10 -- # set +x 00:14:35.547 03:46:54 -- nvmf/common.sh@469 -- # nvmfpid=2346727 00:14:35.547 03:46:54 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:35.547 03:46:54 -- nvmf/common.sh@470 -- # waitforlisten 2346727 00:14:35.547 03:46:54 -- common/autotest_common.sh@819 -- # '[' -z 2346727 ']' 00:14:35.547 03:46:54 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:35.547 03:46:54 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:35.547 03:46:54 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:35.547 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:35.547 03:46:54 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:35.547 03:46:54 -- common/autotest_common.sh@10 -- # set +x 00:14:35.547 [2024-07-14 03:46:54.391120] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
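For reference, the network bring-up that nvmf_tcp_init records above reduces to a short iproute2/iptables sequence; a minimal sketch, assuming the two ice ports come up as cvl_0_0 and cvl_0_1 exactly as they do in this run:

    # target side gets its own network namespace; the initiator side stays in the root namespace
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # let NVMe/TCP traffic in
    ping -c 1 10.0.0.2                                                  # root ns -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target ns -> initiator
    modprobe nvme-tcp                                                   # host-side NVMe/TCP driver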
00:14:35.547 [2024-07-14 03:46:54.391215] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:35.547 EAL: No free 2048 kB hugepages reported on node 1 00:14:35.547 [2024-07-14 03:46:54.461101] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:35.806 [2024-07-14 03:46:54.549269] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:35.806 [2024-07-14 03:46:54.549422] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:35.806 [2024-07-14 03:46:54.549439] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:35.806 [2024-07-14 03:46:54.549451] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:35.806 [2024-07-14 03:46:54.549479] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:36.372 03:46:55 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:36.372 03:46:55 -- common/autotest_common.sh@852 -- # return 0 00:14:36.372 03:46:55 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:36.372 03:46:55 -- common/autotest_common.sh@718 -- # xtrace_disable 00:14:36.372 03:46:55 -- common/autotest_common.sh@10 -- # set +x 00:14:36.632 03:46:55 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:36.632 03:46:55 -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:36.632 03:46:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:36.632 03:46:55 -- common/autotest_common.sh@10 -- # set +x 00:14:36.632 [2024-07-14 03:46:55.336005] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:36.632 03:46:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:36.632 03:46:55 -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:36.632 03:46:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:36.632 03:46:55 -- common/autotest_common.sh@10 -- # set +x 00:14:36.632 03:46:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:36.632 03:46:55 -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:36.632 03:46:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:36.632 03:46:55 -- common/autotest_common.sh@10 -- # set +x 00:14:36.632 [2024-07-14 03:46:55.352128] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:36.632 03:46:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:36.632 03:46:55 -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:36.632 03:46:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:36.632 03:46:55 -- common/autotest_common.sh@10 -- # set +x 00:14:36.632 NULL1 00:14:36.632 03:46:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:36.632 03:46:55 -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:14:36.632 03:46:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:36.632 03:46:55 -- common/autotest_common.sh@10 -- # set +x 00:14:36.632 03:46:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:36.632 03:46:55 -- target/fused_ordering.sh@20 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:14:36.632 03:46:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:36.632 03:46:55 -- common/autotest_common.sh@10 -- # set +x 00:14:36.632 03:46:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:36.632 03:46:55 -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:14:36.632 [2024-07-14 03:46:55.396359] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:14:36.633 [2024-07-14 03:46:55.396406] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2346891 ] 00:14:36.633 EAL: No free 2048 kB hugepages reported on node 1 00:14:37.201 Attached to nqn.2016-06.io.spdk:cnode1 00:14:37.201 Namespace ID: 1 size: 1GB 00:14:37.201 fused_ordering(0) 00:14:37.201 fused_ordering(1) 00:14:37.201 fused_ordering(2) 00:14:37.201 fused_ordering(3) 00:14:37.201 fused_ordering(4) 00:14:37.201 fused_ordering(5) 00:14:37.201 fused_ordering(6) 00:14:37.201 fused_ordering(7) 00:14:37.201 fused_ordering(8) 00:14:37.201 fused_ordering(9) 00:14:37.201 fused_ordering(10) 00:14:37.201 fused_ordering(11) 00:14:37.201 fused_ordering(12) 00:14:37.201 fused_ordering(13) 00:14:37.201 fused_ordering(14) 00:14:37.201 fused_ordering(15) 00:14:37.201 fused_ordering(16) 00:14:37.201 fused_ordering(17) 00:14:37.201 fused_ordering(18) 00:14:37.201 fused_ordering(19) 00:14:37.201 fused_ordering(20) 00:14:37.201 fused_ordering(21) 00:14:37.201 fused_ordering(22) 00:14:37.201 fused_ordering(23) 00:14:37.201 fused_ordering(24) 00:14:37.201 fused_ordering(25) 00:14:37.201 fused_ordering(26) 00:14:37.201 fused_ordering(27) 00:14:37.201 fused_ordering(28) 00:14:37.201 fused_ordering(29) 00:14:37.201 fused_ordering(30) 00:14:37.201 fused_ordering(31) 00:14:37.201 fused_ordering(32) 00:14:37.201 fused_ordering(33) 00:14:37.201 fused_ordering(34) 00:14:37.201 fused_ordering(35) 00:14:37.201 fused_ordering(36) 00:14:37.201 fused_ordering(37) 00:14:37.201 fused_ordering(38) 00:14:37.201 fused_ordering(39) 00:14:37.201 fused_ordering(40) 00:14:37.201 fused_ordering(41) 00:14:37.201 fused_ordering(42) 00:14:37.201 fused_ordering(43) 00:14:37.201 fused_ordering(44) 00:14:37.201 fused_ordering(45) 00:14:37.201 fused_ordering(46) 00:14:37.201 fused_ordering(47) 00:14:37.201 fused_ordering(48) 00:14:37.201 fused_ordering(49) 00:14:37.201 fused_ordering(50) 00:14:37.201 fused_ordering(51) 00:14:37.201 fused_ordering(52) 00:14:37.201 fused_ordering(53) 00:14:37.201 fused_ordering(54) 00:14:37.201 fused_ordering(55) 00:14:37.201 fused_ordering(56) 00:14:37.201 fused_ordering(57) 00:14:37.201 fused_ordering(58) 00:14:37.201 fused_ordering(59) 00:14:37.201 fused_ordering(60) 00:14:37.201 fused_ordering(61) 00:14:37.201 fused_ordering(62) 00:14:37.201 fused_ordering(63) 00:14:37.201 fused_ordering(64) 00:14:37.201 fused_ordering(65) 00:14:37.201 fused_ordering(66) 00:14:37.201 fused_ordering(67) 00:14:37.201 fused_ordering(68) 00:14:37.201 fused_ordering(69) 00:14:37.201 fused_ordering(70) 00:14:37.201 fused_ordering(71) 00:14:37.201 fused_ordering(72) 00:14:37.201 fused_ordering(73) 00:14:37.201 fused_ordering(74) 00:14:37.201 fused_ordering(75) 00:14:37.201 fused_ordering(76) 00:14:37.201 fused_ordering(77) 
00:14:37.201 fused_ordering(78) 00:14:37.201 fused_ordering(79) 00:14:37.201 fused_ordering(80) 00:14:37.201 fused_ordering(81) 00:14:37.201 fused_ordering(82) 00:14:37.201 fused_ordering(83) 00:14:37.201 fused_ordering(84) 00:14:37.201 fused_ordering(85) 00:14:37.201 fused_ordering(86) 00:14:37.201 fused_ordering(87) 00:14:37.201 fused_ordering(88) 00:14:37.201 fused_ordering(89) 00:14:37.201 fused_ordering(90) 00:14:37.201 fused_ordering(91) 00:14:37.201 fused_ordering(92) 00:14:37.201 fused_ordering(93) 00:14:37.201 fused_ordering(94) 00:14:37.201 fused_ordering(95) 00:14:37.201 fused_ordering(96) 00:14:37.201 fused_ordering(97) 00:14:37.201 fused_ordering(98) 00:14:37.201 fused_ordering(99) 00:14:37.201 fused_ordering(100) 00:14:37.201 fused_ordering(101) 00:14:37.201 fused_ordering(102) 00:14:37.201 fused_ordering(103) 00:14:37.201 fused_ordering(104) 00:14:37.201 fused_ordering(105) 00:14:37.201 fused_ordering(106) 00:14:37.201 fused_ordering(107) 00:14:37.201 fused_ordering(108) 00:14:37.201 fused_ordering(109) 00:14:37.201 fused_ordering(110) 00:14:37.201 fused_ordering(111) 00:14:37.201 fused_ordering(112) 00:14:37.201 fused_ordering(113) 00:14:37.201 fused_ordering(114) 00:14:37.201 fused_ordering(115) 00:14:37.201 fused_ordering(116) 00:14:37.201 fused_ordering(117) 00:14:37.201 fused_ordering(118) 00:14:37.201 fused_ordering(119) 00:14:37.201 fused_ordering(120) 00:14:37.201 fused_ordering(121) 00:14:37.201 fused_ordering(122) 00:14:37.201 fused_ordering(123) 00:14:37.201 fused_ordering(124) 00:14:37.201 fused_ordering(125) 00:14:37.201 fused_ordering(126) 00:14:37.201 fused_ordering(127) 00:14:37.201 fused_ordering(128) 00:14:37.201 fused_ordering(129) 00:14:37.201 fused_ordering(130) 00:14:37.201 fused_ordering(131) 00:14:37.201 fused_ordering(132) 00:14:37.201 fused_ordering(133) 00:14:37.201 fused_ordering(134) 00:14:37.201 fused_ordering(135) 00:14:37.201 fused_ordering(136) 00:14:37.201 fused_ordering(137) 00:14:37.201 fused_ordering(138) 00:14:37.201 fused_ordering(139) 00:14:37.201 fused_ordering(140) 00:14:37.201 fused_ordering(141) 00:14:37.201 fused_ordering(142) 00:14:37.201 fused_ordering(143) 00:14:37.201 fused_ordering(144) 00:14:37.201 fused_ordering(145) 00:14:37.201 fused_ordering(146) 00:14:37.201 fused_ordering(147) 00:14:37.201 fused_ordering(148) 00:14:37.201 fused_ordering(149) 00:14:37.201 fused_ordering(150) 00:14:37.201 fused_ordering(151) 00:14:37.201 fused_ordering(152) 00:14:37.201 fused_ordering(153) 00:14:37.201 fused_ordering(154) 00:14:37.201 fused_ordering(155) 00:14:37.201 fused_ordering(156) 00:14:37.201 fused_ordering(157) 00:14:37.201 fused_ordering(158) 00:14:37.201 fused_ordering(159) 00:14:37.201 fused_ordering(160) 00:14:37.201 fused_ordering(161) 00:14:37.201 fused_ordering(162) 00:14:37.201 fused_ordering(163) 00:14:37.201 fused_ordering(164) 00:14:37.201 fused_ordering(165) 00:14:37.201 fused_ordering(166) 00:14:37.201 fused_ordering(167) 00:14:37.201 fused_ordering(168) 00:14:37.201 fused_ordering(169) 00:14:37.201 fused_ordering(170) 00:14:37.201 fused_ordering(171) 00:14:37.201 fused_ordering(172) 00:14:37.201 fused_ordering(173) 00:14:37.201 fused_ordering(174) 00:14:37.201 fused_ordering(175) 00:14:37.201 fused_ordering(176) 00:14:37.201 fused_ordering(177) 00:14:37.201 fused_ordering(178) 00:14:37.201 fused_ordering(179) 00:14:37.201 fused_ordering(180) 00:14:37.201 fused_ordering(181) 00:14:37.201 fused_ordering(182) 00:14:37.201 fused_ordering(183) 00:14:37.201 fused_ordering(184) 00:14:37.201 fused_ordering(185) 00:14:37.201 
fused_ordering(186) 00:14:37.201 fused_ordering(187) 00:14:37.201 fused_ordering(188) 00:14:37.201 fused_ordering(189) 00:14:37.201 fused_ordering(190) 00:14:37.201 fused_ordering(191) 00:14:37.201 fused_ordering(192) 00:14:37.201 fused_ordering(193) 00:14:37.201 fused_ordering(194) 00:14:37.201 fused_ordering(195) 00:14:37.201 fused_ordering(196) 00:14:37.201 fused_ordering(197) 00:14:37.201 fused_ordering(198) 00:14:37.201 fused_ordering(199) 00:14:37.201 fused_ordering(200) 00:14:37.201 fused_ordering(201) 00:14:37.201 fused_ordering(202) 00:14:37.201 fused_ordering(203) 00:14:37.201 fused_ordering(204) 00:14:37.201 fused_ordering(205) 00:14:37.770 fused_ordering(206) 00:14:37.770 fused_ordering(207) 00:14:37.770 fused_ordering(208) 00:14:37.770 fused_ordering(209) 00:14:37.770 fused_ordering(210) 00:14:37.770 fused_ordering(211) 00:14:37.770 fused_ordering(212) 00:14:37.770 fused_ordering(213) 00:14:37.770 fused_ordering(214) 00:14:37.770 fused_ordering(215) 00:14:37.770 fused_ordering(216) 00:14:37.770 fused_ordering(217) 00:14:37.770 fused_ordering(218) 00:14:37.770 fused_ordering(219) 00:14:37.770 fused_ordering(220) 00:14:37.770 fused_ordering(221) 00:14:37.770 fused_ordering(222) 00:14:37.770 fused_ordering(223) 00:14:37.770 fused_ordering(224) 00:14:37.770 fused_ordering(225) 00:14:37.770 fused_ordering(226) 00:14:37.770 fused_ordering(227) 00:14:37.770 fused_ordering(228) 00:14:37.770 fused_ordering(229) 00:14:37.770 fused_ordering(230) 00:14:37.770 fused_ordering(231) 00:14:37.770 fused_ordering(232) 00:14:37.770 fused_ordering(233) 00:14:37.770 fused_ordering(234) 00:14:37.770 fused_ordering(235) 00:14:37.770 fused_ordering(236) 00:14:37.770 fused_ordering(237) 00:14:37.770 fused_ordering(238) 00:14:37.770 fused_ordering(239) 00:14:37.770 fused_ordering(240) 00:14:37.770 fused_ordering(241) 00:14:37.770 fused_ordering(242) 00:14:37.770 fused_ordering(243) 00:14:37.770 fused_ordering(244) 00:14:37.771 fused_ordering(245) 00:14:37.771 fused_ordering(246) 00:14:37.771 fused_ordering(247) 00:14:37.771 fused_ordering(248) 00:14:37.771 fused_ordering(249) 00:14:37.771 fused_ordering(250) 00:14:37.771 fused_ordering(251) 00:14:37.771 fused_ordering(252) 00:14:37.771 fused_ordering(253) 00:14:37.771 fused_ordering(254) 00:14:37.771 fused_ordering(255) 00:14:37.771 fused_ordering(256) 00:14:37.771 fused_ordering(257) 00:14:37.771 fused_ordering(258) 00:14:37.771 fused_ordering(259) 00:14:37.771 fused_ordering(260) 00:14:37.771 fused_ordering(261) 00:14:37.771 fused_ordering(262) 00:14:37.771 fused_ordering(263) 00:14:37.771 fused_ordering(264) 00:14:37.771 fused_ordering(265) 00:14:37.771 fused_ordering(266) 00:14:37.771 fused_ordering(267) 00:14:37.771 fused_ordering(268) 00:14:37.771 fused_ordering(269) 00:14:37.771 fused_ordering(270) 00:14:37.771 fused_ordering(271) 00:14:37.771 fused_ordering(272) 00:14:37.771 fused_ordering(273) 00:14:37.771 fused_ordering(274) 00:14:37.771 fused_ordering(275) 00:14:37.771 fused_ordering(276) 00:14:37.771 fused_ordering(277) 00:14:37.771 fused_ordering(278) 00:14:37.771 fused_ordering(279) 00:14:37.771 fused_ordering(280) 00:14:37.771 fused_ordering(281) 00:14:37.771 fused_ordering(282) 00:14:37.771 fused_ordering(283) 00:14:37.771 fused_ordering(284) 00:14:37.771 fused_ordering(285) 00:14:37.771 fused_ordering(286) 00:14:37.771 fused_ordering(287) 00:14:37.771 fused_ordering(288) 00:14:37.771 fused_ordering(289) 00:14:37.771 fused_ordering(290) 00:14:37.771 fused_ordering(291) 00:14:37.771 fused_ordering(292) 00:14:37.771 fused_ordering(293) 
00:14:37.771 fused_ordering(294) 00:14:37.771 fused_ordering(295) 00:14:37.771 fused_ordering(296) 00:14:37.771 fused_ordering(297) 00:14:37.771 fused_ordering(298) 00:14:37.771 fused_ordering(299) 00:14:37.771 fused_ordering(300) 00:14:37.771 fused_ordering(301) 00:14:37.771 fused_ordering(302) 00:14:37.771 fused_ordering(303) 00:14:37.771 fused_ordering(304) 00:14:37.771 fused_ordering(305) 00:14:37.771 fused_ordering(306) 00:14:37.771 fused_ordering(307) 00:14:37.771 fused_ordering(308) 00:14:37.771 fused_ordering(309) 00:14:37.771 fused_ordering(310) 00:14:37.771 fused_ordering(311) 00:14:37.771 fused_ordering(312) 00:14:37.771 fused_ordering(313) 00:14:37.771 fused_ordering(314) 00:14:37.771 fused_ordering(315) 00:14:37.771 fused_ordering(316) 00:14:37.771 fused_ordering(317) 00:14:37.771 fused_ordering(318) 00:14:37.771 fused_ordering(319) 00:14:37.771 fused_ordering(320) 00:14:37.771 fused_ordering(321) 00:14:37.771 fused_ordering(322) 00:14:37.771 fused_ordering(323) 00:14:37.771 fused_ordering(324) 00:14:37.771 fused_ordering(325) 00:14:37.771 fused_ordering(326) 00:14:37.771 fused_ordering(327) 00:14:37.771 fused_ordering(328) 00:14:37.771 fused_ordering(329) 00:14:37.771 fused_ordering(330) 00:14:37.771 fused_ordering(331) 00:14:37.771 fused_ordering(332) 00:14:37.771 fused_ordering(333) 00:14:37.771 fused_ordering(334) 00:14:37.771 fused_ordering(335) 00:14:37.771 fused_ordering(336) 00:14:37.771 fused_ordering(337) 00:14:37.771 fused_ordering(338) 00:14:37.771 fused_ordering(339) 00:14:37.771 fused_ordering(340) 00:14:37.771 fused_ordering(341) 00:14:37.771 fused_ordering(342) 00:14:37.771 fused_ordering(343) 00:14:37.771 fused_ordering(344) 00:14:37.771 fused_ordering(345) 00:14:37.771 fused_ordering(346) 00:14:37.771 fused_ordering(347) 00:14:37.771 fused_ordering(348) 00:14:37.771 fused_ordering(349) 00:14:37.771 fused_ordering(350) 00:14:37.771 fused_ordering(351) 00:14:37.771 fused_ordering(352) 00:14:37.771 fused_ordering(353) 00:14:37.771 fused_ordering(354) 00:14:37.771 fused_ordering(355) 00:14:37.771 fused_ordering(356) 00:14:37.771 fused_ordering(357) 00:14:37.771 fused_ordering(358) 00:14:37.771 fused_ordering(359) 00:14:37.771 fused_ordering(360) 00:14:37.771 fused_ordering(361) 00:14:37.771 fused_ordering(362) 00:14:37.771 fused_ordering(363) 00:14:37.771 fused_ordering(364) 00:14:37.771 fused_ordering(365) 00:14:37.771 fused_ordering(366) 00:14:37.771 fused_ordering(367) 00:14:37.771 fused_ordering(368) 00:14:37.771 fused_ordering(369) 00:14:37.771 fused_ordering(370) 00:14:37.771 fused_ordering(371) 00:14:37.771 fused_ordering(372) 00:14:37.771 fused_ordering(373) 00:14:37.771 fused_ordering(374) 00:14:37.771 fused_ordering(375) 00:14:37.771 fused_ordering(376) 00:14:37.771 fused_ordering(377) 00:14:37.771 fused_ordering(378) 00:14:37.771 fused_ordering(379) 00:14:37.771 fused_ordering(380) 00:14:37.771 fused_ordering(381) 00:14:37.771 fused_ordering(382) 00:14:37.771 fused_ordering(383) 00:14:37.771 fused_ordering(384) 00:14:37.771 fused_ordering(385) 00:14:37.771 fused_ordering(386) 00:14:37.771 fused_ordering(387) 00:14:37.771 fused_ordering(388) 00:14:37.771 fused_ordering(389) 00:14:37.771 fused_ordering(390) 00:14:37.771 fused_ordering(391) 00:14:37.771 fused_ordering(392) 00:14:37.771 fused_ordering(393) 00:14:37.771 fused_ordering(394) 00:14:37.771 fused_ordering(395) 00:14:37.771 fused_ordering(396) 00:14:37.771 fused_ordering(397) 00:14:37.771 fused_ordering(398) 00:14:37.771 fused_ordering(399) 00:14:37.771 fused_ordering(400) 00:14:37.771 
fused_ordering(401) 00:14:37.771 fused_ordering(402) 00:14:37.771 fused_ordering(403) 00:14:37.771 fused_ordering(404) 00:14:37.771 fused_ordering(405) 00:14:37.771 fused_ordering(406) 00:14:37.771 fused_ordering(407) 00:14:37.771 fused_ordering(408) 00:14:37.771 fused_ordering(409) 00:14:37.771 fused_ordering(410) 00:14:38.708 fused_ordering(411) 00:14:38.708 fused_ordering(412) 00:14:38.708 fused_ordering(413) 00:14:38.708 fused_ordering(414) 00:14:38.708 fused_ordering(415) 00:14:38.708 fused_ordering(416) 00:14:38.708 fused_ordering(417) 00:14:38.708 fused_ordering(418) 00:14:38.708 fused_ordering(419) 00:14:38.708 fused_ordering(420) 00:14:38.708 fused_ordering(421) 00:14:38.708 fused_ordering(422) 00:14:38.708 fused_ordering(423) 00:14:38.708 fused_ordering(424) 00:14:38.708 fused_ordering(425) 00:14:38.708 fused_ordering(426) 00:14:38.708 fused_ordering(427) 00:14:38.708 fused_ordering(428) 00:14:38.708 fused_ordering(429) 00:14:38.708 fused_ordering(430) 00:14:38.708 fused_ordering(431) 00:14:38.708 fused_ordering(432) 00:14:38.708 fused_ordering(433) 00:14:38.708 fused_ordering(434) 00:14:38.708 fused_ordering(435) 00:14:38.708 fused_ordering(436) 00:14:38.708 fused_ordering(437) 00:14:38.708 fused_ordering(438) 00:14:38.708 fused_ordering(439) 00:14:38.708 fused_ordering(440) 00:14:38.708 fused_ordering(441) 00:14:38.708 fused_ordering(442) 00:14:38.708 fused_ordering(443) 00:14:38.708 fused_ordering(444) 00:14:38.708 fused_ordering(445) 00:14:38.708 fused_ordering(446) 00:14:38.708 fused_ordering(447) 00:14:38.708 fused_ordering(448) 00:14:38.708 fused_ordering(449) 00:14:38.708 fused_ordering(450) 00:14:38.708 fused_ordering(451) 00:14:38.708 fused_ordering(452) 00:14:38.708 fused_ordering(453) 00:14:38.708 fused_ordering(454) 00:14:38.708 fused_ordering(455) 00:14:38.708 fused_ordering(456) 00:14:38.708 fused_ordering(457) 00:14:38.708 fused_ordering(458) 00:14:38.708 fused_ordering(459) 00:14:38.708 fused_ordering(460) 00:14:38.708 fused_ordering(461) 00:14:38.708 fused_ordering(462) 00:14:38.708 fused_ordering(463) 00:14:38.708 fused_ordering(464) 00:14:38.708 fused_ordering(465) 00:14:38.708 fused_ordering(466) 00:14:38.708 fused_ordering(467) 00:14:38.708 fused_ordering(468) 00:14:38.708 fused_ordering(469) 00:14:38.708 fused_ordering(470) 00:14:38.708 fused_ordering(471) 00:14:38.708 fused_ordering(472) 00:14:38.708 fused_ordering(473) 00:14:38.708 fused_ordering(474) 00:14:38.708 fused_ordering(475) 00:14:38.708 fused_ordering(476) 00:14:38.708 fused_ordering(477) 00:14:38.708 fused_ordering(478) 00:14:38.708 fused_ordering(479) 00:14:38.708 fused_ordering(480) 00:14:38.708 fused_ordering(481) 00:14:38.708 fused_ordering(482) 00:14:38.708 fused_ordering(483) 00:14:38.708 fused_ordering(484) 00:14:38.708 fused_ordering(485) 00:14:38.708 fused_ordering(486) 00:14:38.708 fused_ordering(487) 00:14:38.708 fused_ordering(488) 00:14:38.708 fused_ordering(489) 00:14:38.708 fused_ordering(490) 00:14:38.708 fused_ordering(491) 00:14:38.708 fused_ordering(492) 00:14:38.708 fused_ordering(493) 00:14:38.708 fused_ordering(494) 00:14:38.708 fused_ordering(495) 00:14:38.708 fused_ordering(496) 00:14:38.708 fused_ordering(497) 00:14:38.708 fused_ordering(498) 00:14:38.708 fused_ordering(499) 00:14:38.708 fused_ordering(500) 00:14:38.708 fused_ordering(501) 00:14:38.708 fused_ordering(502) 00:14:38.708 fused_ordering(503) 00:14:38.708 fused_ordering(504) 00:14:38.708 fused_ordering(505) 00:14:38.708 fused_ordering(506) 00:14:38.708 fused_ordering(507) 00:14:38.708 fused_ordering(508) 
00:14:38.708 fused_ordering(509) 00:14:38.708 fused_ordering(510) 00:14:38.708 fused_ordering(511) 00:14:38.708 fused_ordering(512) 00:14:38.708 fused_ordering(513) 00:14:38.708 fused_ordering(514) 00:14:38.708 fused_ordering(515) 00:14:38.708 fused_ordering(516) 00:14:38.708 fused_ordering(517) 00:14:38.708 fused_ordering(518) 00:14:38.708 fused_ordering(519) 00:14:38.708 fused_ordering(520) 00:14:38.708 fused_ordering(521) 00:14:38.708 fused_ordering(522) 00:14:38.708 fused_ordering(523) 00:14:38.708 fused_ordering(524) 00:14:38.708 fused_ordering(525) 00:14:38.708 fused_ordering(526) 00:14:38.708 fused_ordering(527) 00:14:38.708 fused_ordering(528) 00:14:38.708 fused_ordering(529) 00:14:38.708 fused_ordering(530) 00:14:38.708 fused_ordering(531) 00:14:38.708 fused_ordering(532) 00:14:38.709 fused_ordering(533) 00:14:38.709 fused_ordering(534) 00:14:38.709 fused_ordering(535) 00:14:38.709 fused_ordering(536) 00:14:38.709 fused_ordering(537) 00:14:38.709 fused_ordering(538) 00:14:38.709 fused_ordering(539) 00:14:38.709 fused_ordering(540) 00:14:38.709 fused_ordering(541) 00:14:38.709 fused_ordering(542) 00:14:38.709 fused_ordering(543) 00:14:38.709 fused_ordering(544) 00:14:38.709 fused_ordering(545) 00:14:38.709 fused_ordering(546) 00:14:38.709 fused_ordering(547) 00:14:38.709 fused_ordering(548) 00:14:38.709 fused_ordering(549) 00:14:38.709 fused_ordering(550) 00:14:38.709 fused_ordering(551) 00:14:38.709 fused_ordering(552) 00:14:38.709 fused_ordering(553) 00:14:38.709 fused_ordering(554) 00:14:38.709 fused_ordering(555) 00:14:38.709 fused_ordering(556) 00:14:38.709 fused_ordering(557) 00:14:38.709 fused_ordering(558) 00:14:38.709 fused_ordering(559) 00:14:38.709 fused_ordering(560) 00:14:38.709 fused_ordering(561) 00:14:38.709 fused_ordering(562) 00:14:38.709 fused_ordering(563) 00:14:38.709 fused_ordering(564) 00:14:38.709 fused_ordering(565) 00:14:38.709 fused_ordering(566) 00:14:38.709 fused_ordering(567) 00:14:38.709 fused_ordering(568) 00:14:38.709 fused_ordering(569) 00:14:38.709 fused_ordering(570) 00:14:38.709 fused_ordering(571) 00:14:38.709 fused_ordering(572) 00:14:38.709 fused_ordering(573) 00:14:38.709 fused_ordering(574) 00:14:38.709 fused_ordering(575) 00:14:38.709 fused_ordering(576) 00:14:38.709 fused_ordering(577) 00:14:38.709 fused_ordering(578) 00:14:38.709 fused_ordering(579) 00:14:38.709 fused_ordering(580) 00:14:38.709 fused_ordering(581) 00:14:38.709 fused_ordering(582) 00:14:38.709 fused_ordering(583) 00:14:38.709 fused_ordering(584) 00:14:38.709 fused_ordering(585) 00:14:38.709 fused_ordering(586) 00:14:38.709 fused_ordering(587) 00:14:38.709 fused_ordering(588) 00:14:38.709 fused_ordering(589) 00:14:38.709 fused_ordering(590) 00:14:38.709 fused_ordering(591) 00:14:38.709 fused_ordering(592) 00:14:38.709 fused_ordering(593) 00:14:38.709 fused_ordering(594) 00:14:38.709 fused_ordering(595) 00:14:38.709 fused_ordering(596) 00:14:38.709 fused_ordering(597) 00:14:38.709 fused_ordering(598) 00:14:38.709 fused_ordering(599) 00:14:38.709 fused_ordering(600) 00:14:38.709 fused_ordering(601) 00:14:38.709 fused_ordering(602) 00:14:38.709 fused_ordering(603) 00:14:38.709 fused_ordering(604) 00:14:38.709 fused_ordering(605) 00:14:38.709 fused_ordering(606) 00:14:38.709 fused_ordering(607) 00:14:38.709 fused_ordering(608) 00:14:38.709 fused_ordering(609) 00:14:38.709 fused_ordering(610) 00:14:38.709 fused_ordering(611) 00:14:38.709 fused_ordering(612) 00:14:38.709 fused_ordering(613) 00:14:38.709 fused_ordering(614) 00:14:38.709 fused_ordering(615) 00:14:39.277 
fused_ordering(616) 00:14:39.277 fused_ordering(617) 00:14:39.277 fused_ordering(618) 00:14:39.277 fused_ordering(619) 00:14:39.277 fused_ordering(620) 00:14:39.277 fused_ordering(621) 00:14:39.277 fused_ordering(622) 00:14:39.277 fused_ordering(623) 00:14:39.277 fused_ordering(624) 00:14:39.277 fused_ordering(625) 00:14:39.277 fused_ordering(626) 00:14:39.277 fused_ordering(627) 00:14:39.277 fused_ordering(628) 00:14:39.277 fused_ordering(629) 00:14:39.277 fused_ordering(630) 00:14:39.277 fused_ordering(631) 00:14:39.277 fused_ordering(632) 00:14:39.277 fused_ordering(633) 00:14:39.277 fused_ordering(634) 00:14:39.277 fused_ordering(635) 00:14:39.277 fused_ordering(636) 00:14:39.277 fused_ordering(637) 00:14:39.277 fused_ordering(638) 00:14:39.277 fused_ordering(639) 00:14:39.277 fused_ordering(640) 00:14:39.277 fused_ordering(641) 00:14:39.277 fused_ordering(642) 00:14:39.277 fused_ordering(643) 00:14:39.277 fused_ordering(644) 00:14:39.277 fused_ordering(645) 00:14:39.277 fused_ordering(646) 00:14:39.277 fused_ordering(647) 00:14:39.277 fused_ordering(648) 00:14:39.277 fused_ordering(649) 00:14:39.277 fused_ordering(650) 00:14:39.277 fused_ordering(651) 00:14:39.277 fused_ordering(652) 00:14:39.277 fused_ordering(653) 00:14:39.277 fused_ordering(654) 00:14:39.277 fused_ordering(655) 00:14:39.277 fused_ordering(656) 00:14:39.277 fused_ordering(657) 00:14:39.277 fused_ordering(658) 00:14:39.277 fused_ordering(659) 00:14:39.277 fused_ordering(660) 00:14:39.277 fused_ordering(661) 00:14:39.277 fused_ordering(662) 00:14:39.277 fused_ordering(663) 00:14:39.277 fused_ordering(664) 00:14:39.277 fused_ordering(665) 00:14:39.277 fused_ordering(666) 00:14:39.277 fused_ordering(667) 00:14:39.277 fused_ordering(668) 00:14:39.277 fused_ordering(669) 00:14:39.277 fused_ordering(670) 00:14:39.277 fused_ordering(671) 00:14:39.277 fused_ordering(672) 00:14:39.277 fused_ordering(673) 00:14:39.277 fused_ordering(674) 00:14:39.277 fused_ordering(675) 00:14:39.277 fused_ordering(676) 00:14:39.277 fused_ordering(677) 00:14:39.277 fused_ordering(678) 00:14:39.277 fused_ordering(679) 00:14:39.277 fused_ordering(680) 00:14:39.277 fused_ordering(681) 00:14:39.277 fused_ordering(682) 00:14:39.277 fused_ordering(683) 00:14:39.277 fused_ordering(684) 00:14:39.277 fused_ordering(685) 00:14:39.277 fused_ordering(686) 00:14:39.277 fused_ordering(687) 00:14:39.277 fused_ordering(688) 00:14:39.277 fused_ordering(689) 00:14:39.277 fused_ordering(690) 00:14:39.277 fused_ordering(691) 00:14:39.277 fused_ordering(692) 00:14:39.277 fused_ordering(693) 00:14:39.277 fused_ordering(694) 00:14:39.277 fused_ordering(695) 00:14:39.277 fused_ordering(696) 00:14:39.277 fused_ordering(697) 00:14:39.277 fused_ordering(698) 00:14:39.277 fused_ordering(699) 00:14:39.277 fused_ordering(700) 00:14:39.277 fused_ordering(701) 00:14:39.277 fused_ordering(702) 00:14:39.277 fused_ordering(703) 00:14:39.277 fused_ordering(704) 00:14:39.277 fused_ordering(705) 00:14:39.277 fused_ordering(706) 00:14:39.277 fused_ordering(707) 00:14:39.277 fused_ordering(708) 00:14:39.277 fused_ordering(709) 00:14:39.277 fused_ordering(710) 00:14:39.277 fused_ordering(711) 00:14:39.277 fused_ordering(712) 00:14:39.277 fused_ordering(713) 00:14:39.277 fused_ordering(714) 00:14:39.277 fused_ordering(715) 00:14:39.277 fused_ordering(716) 00:14:39.277 fused_ordering(717) 00:14:39.277 fused_ordering(718) 00:14:39.277 fused_ordering(719) 00:14:39.277 fused_ordering(720) 00:14:39.277 fused_ordering(721) 00:14:39.277 fused_ordering(722) 00:14:39.277 fused_ordering(723) 
00:14:39.277 fused_ordering(724) 00:14:39.277 fused_ordering(725) 00:14:39.277 fused_ordering(726) 00:14:39.277 fused_ordering(727) 00:14:39.277 fused_ordering(728) 00:14:39.277 fused_ordering(729) 00:14:39.277 fused_ordering(730) 00:14:39.277 fused_ordering(731) 00:14:39.277 fused_ordering(732) 00:14:39.277 fused_ordering(733) 00:14:39.277 fused_ordering(734) 00:14:39.277 fused_ordering(735) 00:14:39.277 fused_ordering(736) 00:14:39.277 fused_ordering(737) 00:14:39.277 fused_ordering(738) 00:14:39.277 fused_ordering(739) 00:14:39.277 fused_ordering(740) 00:14:39.277 fused_ordering(741) 00:14:39.277 fused_ordering(742) 00:14:39.277 fused_ordering(743) 00:14:39.278 fused_ordering(744) 00:14:39.278 fused_ordering(745) 00:14:39.278 fused_ordering(746) 00:14:39.278 fused_ordering(747) 00:14:39.278 fused_ordering(748) 00:14:39.278 fused_ordering(749) 00:14:39.278 fused_ordering(750) 00:14:39.278 fused_ordering(751) 00:14:39.278 fused_ordering(752) 00:14:39.278 fused_ordering(753) 00:14:39.278 fused_ordering(754) 00:14:39.278 fused_ordering(755) 00:14:39.278 fused_ordering(756) 00:14:39.278 fused_ordering(757) 00:14:39.278 fused_ordering(758) 00:14:39.278 fused_ordering(759) 00:14:39.278 fused_ordering(760) 00:14:39.278 fused_ordering(761) 00:14:39.278 fused_ordering(762) 00:14:39.278 fused_ordering(763) 00:14:39.278 fused_ordering(764) 00:14:39.278 fused_ordering(765) 00:14:39.278 fused_ordering(766) 00:14:39.278 fused_ordering(767) 00:14:39.278 fused_ordering(768) 00:14:39.278 fused_ordering(769) 00:14:39.278 fused_ordering(770) 00:14:39.278 fused_ordering(771) 00:14:39.278 fused_ordering(772) 00:14:39.278 fused_ordering(773) 00:14:39.278 fused_ordering(774) 00:14:39.278 fused_ordering(775) 00:14:39.278 fused_ordering(776) 00:14:39.278 fused_ordering(777) 00:14:39.278 fused_ordering(778) 00:14:39.278 fused_ordering(779) 00:14:39.278 fused_ordering(780) 00:14:39.278 fused_ordering(781) 00:14:39.278 fused_ordering(782) 00:14:39.278 fused_ordering(783) 00:14:39.278 fused_ordering(784) 00:14:39.278 fused_ordering(785) 00:14:39.278 fused_ordering(786) 00:14:39.278 fused_ordering(787) 00:14:39.278 fused_ordering(788) 00:14:39.278 fused_ordering(789) 00:14:39.278 fused_ordering(790) 00:14:39.278 fused_ordering(791) 00:14:39.278 fused_ordering(792) 00:14:39.278 fused_ordering(793) 00:14:39.278 fused_ordering(794) 00:14:39.278 fused_ordering(795) 00:14:39.278 fused_ordering(796) 00:14:39.278 fused_ordering(797) 00:14:39.278 fused_ordering(798) 00:14:39.278 fused_ordering(799) 00:14:39.278 fused_ordering(800) 00:14:39.278 fused_ordering(801) 00:14:39.278 fused_ordering(802) 00:14:39.278 fused_ordering(803) 00:14:39.278 fused_ordering(804) 00:14:39.278 fused_ordering(805) 00:14:39.278 fused_ordering(806) 00:14:39.278 fused_ordering(807) 00:14:39.278 fused_ordering(808) 00:14:39.278 fused_ordering(809) 00:14:39.278 fused_ordering(810) 00:14:39.278 fused_ordering(811) 00:14:39.278 fused_ordering(812) 00:14:39.278 fused_ordering(813) 00:14:39.278 fused_ordering(814) 00:14:39.278 fused_ordering(815) 00:14:39.278 fused_ordering(816) 00:14:39.278 fused_ordering(817) 00:14:39.278 fused_ordering(818) 00:14:39.278 fused_ordering(819) 00:14:39.278 fused_ordering(820) 00:14:40.216 fused_ordering(821) 00:14:40.216 fused_ordering(822) 00:14:40.216 fused_ordering(823) 00:14:40.216 fused_ordering(824) 00:14:40.216 fused_ordering(825) 00:14:40.216 fused_ordering(826) 00:14:40.216 fused_ordering(827) 00:14:40.216 fused_ordering(828) 00:14:40.216 fused_ordering(829) 00:14:40.216 fused_ordering(830) 00:14:40.216 
fused_ordering(831) 00:14:40.216 fused_ordering(832) 00:14:40.216 fused_ordering(833) 00:14:40.216 fused_ordering(834) 00:14:40.216 fused_ordering(835) 00:14:40.216 fused_ordering(836) 00:14:40.216 fused_ordering(837) 00:14:40.216 fused_ordering(838) 00:14:40.216 fused_ordering(839) 00:14:40.216 fused_ordering(840) 00:14:40.216 fused_ordering(841) 00:14:40.216 fused_ordering(842) 00:14:40.216 fused_ordering(843) 00:14:40.216 fused_ordering(844) 00:14:40.216 fused_ordering(845) 00:14:40.216 fused_ordering(846) 00:14:40.216 fused_ordering(847) 00:14:40.216 fused_ordering(848) 00:14:40.216 fused_ordering(849) 00:14:40.216 fused_ordering(850) 00:14:40.216 fused_ordering(851) 00:14:40.216 fused_ordering(852) 00:14:40.216 fused_ordering(853) 00:14:40.216 fused_ordering(854) 00:14:40.216 fused_ordering(855) 00:14:40.216 fused_ordering(856) 00:14:40.216 fused_ordering(857) 00:14:40.216 fused_ordering(858) 00:14:40.216 fused_ordering(859) 00:14:40.216 fused_ordering(860) 00:14:40.216 fused_ordering(861) 00:14:40.216 fused_ordering(862) 00:14:40.216 fused_ordering(863) 00:14:40.216 fused_ordering(864) 00:14:40.216 fused_ordering(865) 00:14:40.216 fused_ordering(866) 00:14:40.216 fused_ordering(867) 00:14:40.216 fused_ordering(868) 00:14:40.216 fused_ordering(869) 00:14:40.216 fused_ordering(870) 00:14:40.216 fused_ordering(871) 00:14:40.216 fused_ordering(872) 00:14:40.216 fused_ordering(873) 00:14:40.216 fused_ordering(874) 00:14:40.216 fused_ordering(875) 00:14:40.216 fused_ordering(876) 00:14:40.216 fused_ordering(877) 00:14:40.216 fused_ordering(878) 00:14:40.216 fused_ordering(879) 00:14:40.216 fused_ordering(880) 00:14:40.216 fused_ordering(881) 00:14:40.216 fused_ordering(882) 00:14:40.216 fused_ordering(883) 00:14:40.216 fused_ordering(884) 00:14:40.216 fused_ordering(885) 00:14:40.216 fused_ordering(886) 00:14:40.216 fused_ordering(887) 00:14:40.216 fused_ordering(888) 00:14:40.216 fused_ordering(889) 00:14:40.216 fused_ordering(890) 00:14:40.216 fused_ordering(891) 00:14:40.216 fused_ordering(892) 00:14:40.216 fused_ordering(893) 00:14:40.216 fused_ordering(894) 00:14:40.216 fused_ordering(895) 00:14:40.216 fused_ordering(896) 00:14:40.216 fused_ordering(897) 00:14:40.216 fused_ordering(898) 00:14:40.216 fused_ordering(899) 00:14:40.216 fused_ordering(900) 00:14:40.216 fused_ordering(901) 00:14:40.216 fused_ordering(902) 00:14:40.216 fused_ordering(903) 00:14:40.216 fused_ordering(904) 00:14:40.216 fused_ordering(905) 00:14:40.216 fused_ordering(906) 00:14:40.216 fused_ordering(907) 00:14:40.216 fused_ordering(908) 00:14:40.216 fused_ordering(909) 00:14:40.216 fused_ordering(910) 00:14:40.216 fused_ordering(911) 00:14:40.216 fused_ordering(912) 00:14:40.216 fused_ordering(913) 00:14:40.216 fused_ordering(914) 00:14:40.216 fused_ordering(915) 00:14:40.216 fused_ordering(916) 00:14:40.216 fused_ordering(917) 00:14:40.216 fused_ordering(918) 00:14:40.216 fused_ordering(919) 00:14:40.216 fused_ordering(920) 00:14:40.216 fused_ordering(921) 00:14:40.216 fused_ordering(922) 00:14:40.216 fused_ordering(923) 00:14:40.216 fused_ordering(924) 00:14:40.216 fused_ordering(925) 00:14:40.216 fused_ordering(926) 00:14:40.216 fused_ordering(927) 00:14:40.216 fused_ordering(928) 00:14:40.216 fused_ordering(929) 00:14:40.216 fused_ordering(930) 00:14:40.216 fused_ordering(931) 00:14:40.216 fused_ordering(932) 00:14:40.216 fused_ordering(933) 00:14:40.216 fused_ordering(934) 00:14:40.216 fused_ordering(935) 00:14:40.216 fused_ordering(936) 00:14:40.216 fused_ordering(937) 00:14:40.216 fused_ordering(938) 
00:14:40.216 fused_ordering(939) 00:14:40.216 fused_ordering(940) 00:14:40.216 fused_ordering(941) 00:14:40.216 fused_ordering(942) 00:14:40.216 fused_ordering(943) 00:14:40.216 fused_ordering(944) 00:14:40.216 fused_ordering(945) 00:14:40.216 fused_ordering(946) 00:14:40.216 fused_ordering(947) 00:14:40.216 fused_ordering(948) 00:14:40.216 fused_ordering(949) 00:14:40.216 fused_ordering(950) 00:14:40.216 fused_ordering(951) 00:14:40.216 fused_ordering(952) 00:14:40.216 fused_ordering(953) 00:14:40.216 fused_ordering(954) 00:14:40.216 fused_ordering(955) 00:14:40.216 fused_ordering(956) 00:14:40.216 fused_ordering(957) 00:14:40.216 fused_ordering(958) 00:14:40.216 fused_ordering(959) 00:14:40.216 fused_ordering(960) 00:14:40.216 fused_ordering(961) 00:14:40.216 fused_ordering(962) 00:14:40.216 fused_ordering(963) 00:14:40.216 fused_ordering(964) 00:14:40.216 fused_ordering(965) 00:14:40.216 fused_ordering(966) 00:14:40.216 fused_ordering(967) 00:14:40.216 fused_ordering(968) 00:14:40.216 fused_ordering(969) 00:14:40.216 fused_ordering(970) 00:14:40.216 fused_ordering(971) 00:14:40.216 fused_ordering(972) 00:14:40.216 fused_ordering(973) 00:14:40.216 fused_ordering(974) 00:14:40.216 fused_ordering(975) 00:14:40.216 fused_ordering(976) 00:14:40.216 fused_ordering(977) 00:14:40.216 fused_ordering(978) 00:14:40.216 fused_ordering(979) 00:14:40.216 fused_ordering(980) 00:14:40.216 fused_ordering(981) 00:14:40.216 fused_ordering(982) 00:14:40.216 fused_ordering(983) 00:14:40.216 fused_ordering(984) 00:14:40.216 fused_ordering(985) 00:14:40.216 fused_ordering(986) 00:14:40.216 fused_ordering(987) 00:14:40.216 fused_ordering(988) 00:14:40.216 fused_ordering(989) 00:14:40.216 fused_ordering(990) 00:14:40.216 fused_ordering(991) 00:14:40.216 fused_ordering(992) 00:14:40.216 fused_ordering(993) 00:14:40.216 fused_ordering(994) 00:14:40.216 fused_ordering(995) 00:14:40.216 fused_ordering(996) 00:14:40.216 fused_ordering(997) 00:14:40.216 fused_ordering(998) 00:14:40.216 fused_ordering(999) 00:14:40.216 fused_ordering(1000) 00:14:40.216 fused_ordering(1001) 00:14:40.216 fused_ordering(1002) 00:14:40.216 fused_ordering(1003) 00:14:40.216 fused_ordering(1004) 00:14:40.216 fused_ordering(1005) 00:14:40.216 fused_ordering(1006) 00:14:40.216 fused_ordering(1007) 00:14:40.216 fused_ordering(1008) 00:14:40.216 fused_ordering(1009) 00:14:40.216 fused_ordering(1010) 00:14:40.216 fused_ordering(1011) 00:14:40.216 fused_ordering(1012) 00:14:40.216 fused_ordering(1013) 00:14:40.216 fused_ordering(1014) 00:14:40.216 fused_ordering(1015) 00:14:40.216 fused_ordering(1016) 00:14:40.216 fused_ordering(1017) 00:14:40.216 fused_ordering(1018) 00:14:40.216 fused_ordering(1019) 00:14:40.216 fused_ordering(1020) 00:14:40.216 fused_ordering(1021) 00:14:40.216 fused_ordering(1022) 00:14:40.216 fused_ordering(1023) 00:14:40.216 03:46:58 -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:14:40.216 03:46:58 -- target/fused_ordering.sh@25 -- # nvmftestfini 00:14:40.216 03:46:58 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:40.216 03:46:58 -- nvmf/common.sh@116 -- # sync 00:14:40.216 03:46:58 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:14:40.216 03:46:58 -- nvmf/common.sh@119 -- # set +e 00:14:40.216 03:46:58 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:40.216 03:46:58 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:14:40.216 rmmod nvme_tcp 00:14:40.216 rmmod nvme_fabrics 00:14:40.216 rmmod nvme_keyring 00:14:40.216 03:46:59 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:40.216 03:46:59 
-- nvmf/common.sh@123 -- # set -e 00:14:40.216 03:46:59 -- nvmf/common.sh@124 -- # return 0 00:14:40.216 03:46:59 -- nvmf/common.sh@477 -- # '[' -n 2346727 ']' 00:14:40.216 03:46:59 -- nvmf/common.sh@478 -- # killprocess 2346727 00:14:40.216 03:46:59 -- common/autotest_common.sh@926 -- # '[' -z 2346727 ']' 00:14:40.216 03:46:59 -- common/autotest_common.sh@930 -- # kill -0 2346727 00:14:40.216 03:46:59 -- common/autotest_common.sh@931 -- # uname 00:14:40.216 03:46:59 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:40.216 03:46:59 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2346727 00:14:40.216 03:46:59 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:14:40.216 03:46:59 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:14:40.216 03:46:59 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2346727' 00:14:40.216 killing process with pid 2346727 00:14:40.216 03:46:59 -- common/autotest_common.sh@945 -- # kill 2346727 00:14:40.216 03:46:59 -- common/autotest_common.sh@950 -- # wait 2346727 00:14:40.475 03:46:59 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:40.475 03:46:59 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:14:40.475 03:46:59 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:14:40.475 03:46:59 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:40.475 03:46:59 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:14:40.475 03:46:59 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:40.475 03:46:59 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:40.475 03:46:59 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:42.384 03:47:01 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:14:42.384 00:14:42.384 real 0m9.026s 00:14:42.384 user 0m6.795s 00:14:42.384 sys 0m4.255s 00:14:42.384 03:47:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:42.384 03:47:01 -- common/autotest_common.sh@10 -- # set +x 00:14:42.384 ************************************ 00:14:42.384 END TEST nvmf_fused_ordering 00:14:42.384 ************************************ 00:14:42.644 03:47:01 -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:14:42.644 03:47:01 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:14:42.644 03:47:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:42.644 03:47:01 -- common/autotest_common.sh@10 -- # set +x 00:14:42.644 ************************************ 00:14:42.644 START TEST nvmf_delete_subsystem 00:14:42.644 ************************************ 00:14:42.644 03:47:01 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:14:42.644 * Looking for test storage... 
00:14:42.644 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:42.644 03:47:01 -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:42.644 03:47:01 -- nvmf/common.sh@7 -- # uname -s 00:14:42.644 03:47:01 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:42.644 03:47:01 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:42.644 03:47:01 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:42.644 03:47:01 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:42.644 03:47:01 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:42.644 03:47:01 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:42.644 03:47:01 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:42.644 03:47:01 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:42.644 03:47:01 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:42.644 03:47:01 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:42.644 03:47:01 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:42.644 03:47:01 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:14:42.644 03:47:01 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:42.644 03:47:01 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:42.644 03:47:01 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:42.644 03:47:01 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:42.644 03:47:01 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:42.644 03:47:01 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:42.644 03:47:01 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:42.644 03:47:01 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:42.644 03:47:01 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:42.644 03:47:01 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:42.644 03:47:01 -- paths/export.sh@5 -- # export PATH 00:14:42.644 03:47:01 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:42.644 03:47:01 -- nvmf/common.sh@46 -- # : 0 00:14:42.644 03:47:01 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:42.644 03:47:01 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:42.644 03:47:01 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:42.644 03:47:01 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:42.644 03:47:01 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:42.644 03:47:01 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:42.644 03:47:01 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:42.644 03:47:01 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:42.644 03:47:01 -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:14:42.644 03:47:01 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:42.644 03:47:01 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:42.644 03:47:01 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:42.644 03:47:01 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:42.644 03:47:01 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:42.644 03:47:01 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:42.644 03:47:01 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:42.644 03:47:01 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:42.644 03:47:01 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:14:42.644 03:47:01 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:14:42.644 03:47:01 -- nvmf/common.sh@284 -- # xtrace_disable 00:14:42.644 03:47:01 -- common/autotest_common.sh@10 -- # set +x 00:14:44.555 03:47:03 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:14:44.555 03:47:03 -- nvmf/common.sh@290 -- # pci_devs=() 00:14:44.555 03:47:03 -- nvmf/common.sh@290 -- # local -a pci_devs 00:14:44.555 03:47:03 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:14:44.555 03:47:03 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:14:44.555 03:47:03 -- nvmf/common.sh@292 -- # pci_drivers=() 00:14:44.555 03:47:03 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:14:44.555 03:47:03 -- nvmf/common.sh@294 -- # net_devs=() 00:14:44.555 03:47:03 -- nvmf/common.sh@294 -- # local -ga net_devs 00:14:44.555 03:47:03 -- nvmf/common.sh@295 -- # e810=() 00:14:44.555 03:47:03 -- nvmf/common.sh@295 -- # local -ga e810 00:14:44.555 03:47:03 -- nvmf/common.sh@296 -- # x722=() 
00:14:44.555 03:47:03 -- nvmf/common.sh@296 -- # local -ga x722 00:14:44.555 03:47:03 -- nvmf/common.sh@297 -- # mlx=() 00:14:44.555 03:47:03 -- nvmf/common.sh@297 -- # local -ga mlx 00:14:44.555 03:47:03 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:44.555 03:47:03 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:44.555 03:47:03 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:44.555 03:47:03 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:44.555 03:47:03 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:44.555 03:47:03 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:44.555 03:47:03 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:44.555 03:47:03 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:44.555 03:47:03 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:44.555 03:47:03 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:44.555 03:47:03 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:44.555 03:47:03 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:14:44.555 03:47:03 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:14:44.555 03:47:03 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:14:44.555 03:47:03 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:14:44.555 03:47:03 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:14:44.555 03:47:03 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:14:44.555 03:47:03 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:44.555 03:47:03 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:14:44.555 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:14:44.555 03:47:03 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:14:44.555 03:47:03 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:14:44.555 03:47:03 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:44.555 03:47:03 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:44.555 03:47:03 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:14:44.555 03:47:03 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:44.555 03:47:03 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:14:44.555 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:14:44.555 03:47:03 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:14:44.555 03:47:03 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:14:44.555 03:47:03 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:44.555 03:47:03 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:44.555 03:47:03 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:14:44.555 03:47:03 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:14:44.555 03:47:03 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:14:44.555 03:47:03 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:14:44.555 03:47:03 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:44.555 03:47:03 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:44.555 03:47:03 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:44.555 03:47:03 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:44.555 03:47:03 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:14:44.555 Found net devices under 0000:0a:00.0: cvl_0_0 00:14:44.555 03:47:03 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 
00:14:44.555 03:47:03 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:44.555 03:47:03 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:44.555 03:47:03 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:44.555 03:47:03 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:44.555 03:47:03 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:14:44.555 Found net devices under 0000:0a:00.1: cvl_0_1 00:14:44.555 03:47:03 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:44.555 03:47:03 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:14:44.555 03:47:03 -- nvmf/common.sh@402 -- # is_hw=yes 00:14:44.555 03:47:03 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:14:44.555 03:47:03 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:14:44.555 03:47:03 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:14:44.555 03:47:03 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:44.555 03:47:03 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:44.555 03:47:03 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:44.555 03:47:03 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:14:44.555 03:47:03 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:44.555 03:47:03 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:44.555 03:47:03 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:14:44.555 03:47:03 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:44.555 03:47:03 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:44.555 03:47:03 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:14:44.555 03:47:03 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:14:44.555 03:47:03 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:14:44.555 03:47:03 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:44.555 03:47:03 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:44.555 03:47:03 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:44.555 03:47:03 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:14:44.555 03:47:03 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:44.555 03:47:03 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:44.555 03:47:03 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:44.555 03:47:03 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:14:44.555 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:44.555 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.189 ms 00:14:44.555 00:14:44.555 --- 10.0.0.2 ping statistics --- 00:14:44.555 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:44.555 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:14:44.555 03:47:03 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:44.555 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:44.555 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.122 ms 00:14:44.556 00:14:44.556 --- 10.0.0.1 ping statistics --- 00:14:44.556 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:44.556 rtt min/avg/max/mdev = 0.122/0.122/0.122/0.000 ms 00:14:44.556 03:47:03 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:44.556 03:47:03 -- nvmf/common.sh@410 -- # return 0 00:14:44.556 03:47:03 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:44.556 03:47:03 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:44.556 03:47:03 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:44.556 03:47:03 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:44.556 03:47:03 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:44.556 03:47:03 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:44.556 03:47:03 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:44.556 03:47:03 -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:14:44.556 03:47:03 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:44.556 03:47:03 -- common/autotest_common.sh@712 -- # xtrace_disable 00:14:44.556 03:47:03 -- common/autotest_common.sh@10 -- # set +x 00:14:44.556 03:47:03 -- nvmf/common.sh@469 -- # nvmfpid=2349238 00:14:44.556 03:47:03 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:14:44.556 03:47:03 -- nvmf/common.sh@470 -- # waitforlisten 2349238 00:14:44.556 03:47:03 -- common/autotest_common.sh@819 -- # '[' -z 2349238 ']' 00:14:44.556 03:47:03 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:44.556 03:47:03 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:44.556 03:47:03 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:44.556 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:44.556 03:47:03 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:44.556 03:47:03 -- common/autotest_common.sh@10 -- # set +x 00:14:44.556 [2024-07-14 03:47:03.473174] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:14:44.556 [2024-07-14 03:47:03.473263] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:44.812 EAL: No free 2048 kB hugepages reported on node 1 00:14:44.812 [2024-07-14 03:47:03.540892] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:44.813 [2024-07-14 03:47:03.625641] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:44.813 [2024-07-14 03:47:03.625795] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:44.813 [2024-07-14 03:47:03.625811] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:44.813 [2024-07-14 03:47:03.625824] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
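The nvmf_tcp_init sequence just traced splits the two E810 ports across a network namespace so one host can act as both initiator and target over real hardware: cvl_0_0 is moved into cvl_0_0_ns_spdk with 10.0.0.2 (target side), cvl_0_1 stays in the root namespace with 10.0.0.1 (initiator side), TCP/4420 is opened in iptables, and both directions are ping-checked. A condensed, hedged re-creation of that wiring, using only commands that appear in the trace:

# Hedged sketch of the namespace wiring done by nvmf_tcp_init above.
# Interface names and addresses are the ones printed in this log.
NS=cvl_0_0_ns_spdk
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"                 # target-side port lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1             # initiator side, root namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                              # initiator -> target reachability check
ip netns exec "$NS" ping -c 1 10.0.0.1          # target -> initiator

With that in place, every nvmf_tgt invocation in the rest of the log is wrapped in ip netns exec cvl_0_0_ns_spdk, which is why the target listens on 10.0.0.2 while perf and nvme-cli connect from the root namespace.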
00:14:44.813 [2024-07-14 03:47:03.628887] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:44.813 [2024-07-14 03:47:03.628893] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:45.747 03:47:04 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:45.747 03:47:04 -- common/autotest_common.sh@852 -- # return 0 00:14:45.747 03:47:04 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:45.747 03:47:04 -- common/autotest_common.sh@718 -- # xtrace_disable 00:14:45.747 03:47:04 -- common/autotest_common.sh@10 -- # set +x 00:14:45.747 03:47:04 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:45.747 03:47:04 -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:45.747 03:47:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:45.747 03:47:04 -- common/autotest_common.sh@10 -- # set +x 00:14:45.747 [2024-07-14 03:47:04.473581] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:45.747 03:47:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:45.747 03:47:04 -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:45.747 03:47:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:45.747 03:47:04 -- common/autotest_common.sh@10 -- # set +x 00:14:45.747 03:47:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:45.747 03:47:04 -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:45.747 03:47:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:45.747 03:47:04 -- common/autotest_common.sh@10 -- # set +x 00:14:45.747 [2024-07-14 03:47:04.489736] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:45.747 03:47:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:45.747 03:47:04 -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:45.747 03:47:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:45.747 03:47:04 -- common/autotest_common.sh@10 -- # set +x 00:14:45.747 NULL1 00:14:45.747 03:47:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:45.747 03:47:04 -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:14:45.747 03:47:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:45.747 03:47:04 -- common/autotest_common.sh@10 -- # set +x 00:14:45.747 Delay0 00:14:45.747 03:47:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:45.747 03:47:04 -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:45.747 03:47:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:45.747 03:47:04 -- common/autotest_common.sh@10 -- # set +x 00:14:45.747 03:47:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:45.747 03:47:04 -- target/delete_subsystem.sh@28 -- # perf_pid=2349397 00:14:45.747 03:47:04 -- target/delete_subsystem.sh@30 -- # sleep 2 00:14:45.747 03:47:04 -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:14:45.747 EAL: No free 2048 kB hugepages reported on node 1 00:14:45.747 [2024-07-14 03:47:04.564507] 
subsystem.c:1344:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:14:47.652 03:47:06 -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:47.652 03:47:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:47.652 03:47:06 -- common/autotest_common.sh@10 -- # set +x 00:14:47.913 Read completed with error (sct=0, sc=8) 00:14:47.913 starting I/O failed: -6 00:14:47.913 Read completed with error (sct=0, sc=8) 00:14:47.913 Write completed with error (sct=0, sc=8) 00:14:47.913 Read completed with error (sct=0, sc=8) 00:14:47.913 Read completed with error (sct=0, sc=8) 00:14:47.913 starting I/O failed: -6 00:14:47.913 Read completed with error (sct=0, sc=8) 00:14:47.913 Read completed with error (sct=0, sc=8) 00:14:47.913 Read completed with error (sct=0, sc=8) 00:14:47.913 Write completed with error (sct=0, sc=8) 00:14:47.913 starting I/O failed: -6 00:14:47.913 Read completed with error (sct=0, sc=8) 00:14:47.913 Write completed with error (sct=0, sc=8) 00:14:47.913 Read completed with error (sct=0, sc=8) 00:14:47.913 Read completed with error (sct=0, sc=8) 00:14:47.913 starting I/O failed: -6 00:14:47.913 Read completed with error (sct=0, sc=8) 00:14:47.913 Read completed with error (sct=0, sc=8) 00:14:47.913 Read completed with error (sct=0, sc=8) 00:14:47.913 Read completed with error (sct=0, sc=8) 00:14:47.913 starting I/O failed: -6 00:14:47.913 Read completed with error (sct=0, sc=8) 00:14:47.913 Read completed with error (sct=0, sc=8) 00:14:47.913 Read completed with error (sct=0, sc=8) 00:14:47.913 Read completed with error (sct=0, sc=8) 00:14:47.913 starting I/O failed: -6 00:14:47.913 Write completed with error (sct=0, sc=8) 00:14:47.913 Write completed with error (sct=0, sc=8) 00:14:47.913 Write completed with error (sct=0, sc=8) 00:14:47.913 Read completed with error (sct=0, sc=8) 00:14:47.913 starting I/O failed: -6 00:14:47.913 Read completed with error (sct=0, sc=8) 00:14:47.913 Read completed with error (sct=0, sc=8) 00:14:47.913 Write completed with error (sct=0, sc=8) 00:14:47.913 Read completed with error (sct=0, sc=8) 00:14:47.913 starting I/O failed: -6 00:14:47.913 Read completed with error (sct=0, sc=8) 00:14:47.913 Write completed with error (sct=0, sc=8) 00:14:47.913 Read completed with error (sct=0, sc=8) 00:14:47.913 Write completed with error (sct=0, sc=8) 00:14:47.913 starting I/O failed: -6 00:14:47.913 Write completed with error (sct=0, sc=8) 00:14:47.913 Read completed with error (sct=0, sc=8) 00:14:47.913 Read completed with error (sct=0, sc=8) 00:14:47.913 Read completed with error (sct=0, sc=8) 00:14:47.913 starting I/O failed: -6 00:14:47.913 Read completed with error (sct=0, sc=8) 00:14:47.913 Write completed with error (sct=0, sc=8) 00:14:47.913 Write completed with error (sct=0, sc=8) 00:14:47.913 Read completed with error (sct=0, sc=8) 00:14:47.913 starting I/O failed: -6 00:14:47.913 Read completed with error (sct=0, sc=8) 00:14:47.913 Read completed with error (sct=0, sc=8) 00:14:47.913 Write completed with error (sct=0, sc=8) 00:14:47.913 Read completed with error (sct=0, sc=8) 00:14:47.913 starting I/O failed: -6 00:14:47.913 [2024-07-14 03:47:06.775937] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d983f0 is same with the state(5) to be set 00:14:47.913 Read 
completed with error (sct=0, sc=8) 00:14:47.913 Read completed with error (sct=0, sc=8) 00:14:47.913 starting I/O failed: -6 00:14:47.913 Read completed with error (sct=0, sc=8) 00:14:47.913 Read completed with error (sct=0, sc=8) 00:14:47.913 Read completed with error (sct=0, sc=8) 00:14:47.913 Read completed with error (sct=0, sc=8) 00:14:47.913 Read completed with error (sct=0, sc=8) 00:14:47.913 Read completed with error (sct=0, sc=8) 00:14:47.913 Read completed with error (sct=0, sc=8) 00:14:47.913 Read completed with error (sct=0, sc=8) 00:14:47.913 Read completed with error (sct=0, sc=8) 00:14:47.913 Write completed with error (sct=0, sc=8) 00:14:47.913 starting I/O failed: -6 00:14:47.913 Read completed with error (sct=0, sc=8) 00:14:47.913 Read completed with error (sct=0, sc=8) 00:14:47.913 Read completed with error (sct=0, sc=8) 00:14:47.913 Read completed with error (sct=0, sc=8) 00:14:47.913 Read completed with error (sct=0, sc=8) 00:14:47.913 Read completed with error (sct=0, sc=8) 00:14:47.913 Read completed with error (sct=0, sc=8) 00:14:47.913 Read completed with error (sct=0, sc=8) 00:14:47.913 Read completed with error (sct=0, sc=8) 00:14:47.913 Read completed with error (sct=0, sc=8) 00:14:47.913 Write completed with error (sct=0, sc=8) 00:14:47.913 Read completed with error (sct=0, sc=8) 00:14:47.913 Read completed with error (sct=0, sc=8) 00:14:47.913 starting I/O failed: -6 00:14:47.913 Write completed with error (sct=0, sc=8) 00:14:47.913 Read completed with error (sct=0, sc=8) 00:14:47.913 Write completed with error (sct=0, sc=8) 00:14:47.913 Read completed with error (sct=0, sc=8) 00:14:47.913 Write completed with error (sct=0, sc=8) 00:14:47.913 Write completed with error (sct=0, sc=8) 00:14:47.913 Write completed with error (sct=0, sc=8) 00:14:47.913 Read completed with error (sct=0, sc=8) 00:14:47.913 Write completed with error (sct=0, sc=8) 00:14:47.913 Write completed with error (sct=0, sc=8) 00:14:47.913 Write completed with error (sct=0, sc=8) 00:14:47.913 Read completed with error (sct=0, sc=8) 00:14:47.913 Write completed with error (sct=0, sc=8) 00:14:47.913 starting I/O failed: -6 00:14:47.913 Read completed with error (sct=0, sc=8) 00:14:47.913 Read completed with error (sct=0, sc=8) 00:14:47.913 Write completed with error (sct=0, sc=8) 00:14:47.913 Read completed with error (sct=0, sc=8) 00:14:47.913 Write completed with error (sct=0, sc=8) 00:14:47.913 Read completed with error (sct=0, sc=8) 00:14:47.913 Read completed with error (sct=0, sc=8) 00:14:47.913 Read completed with error (sct=0, sc=8) 00:14:47.913 Read completed with error (sct=0, sc=8) 00:14:47.913 Read completed with error (sct=0, sc=8) 00:14:47.913 Read completed with error (sct=0, sc=8) 00:14:47.913 Read completed with error (sct=0, sc=8) 00:14:47.913 Read completed with error (sct=0, sc=8) 00:14:47.913 Read completed with error (sct=0, sc=8) 00:14:47.913 starting I/O failed: -6 00:14:47.913 Write completed with error (sct=0, sc=8) 00:14:47.913 Write completed with error (sct=0, sc=8) 00:14:47.913 Write completed with error (sct=0, sc=8) 00:14:47.913 Read completed with error (sct=0, sc=8) 00:14:47.913 Read completed with error (sct=0, sc=8) 00:14:47.913 Read completed with error (sct=0, sc=8) 00:14:47.913 Read completed with error (sct=0, sc=8) 00:14:47.913 Read completed with error (sct=0, sc=8) 00:14:47.913 Write completed with error (sct=0, sc=8) 00:14:47.913 Read completed with error (sct=0, sc=8) 00:14:47.913 Read completed with error (sct=0, sc=8) 00:14:47.913 Write completed 
with error (sct=0, sc=8) 00:14:47.913 starting I/O failed: -6 00:14:47.913 Read completed with error (sct=0, sc=8) 00:14:47.913 Write completed with error (sct=0, sc=8) 00:14:47.913 Read completed with error (sct=0, sc=8) 00:14:47.913 Read completed with error (sct=0, sc=8) 00:14:47.913 Write completed with error (sct=0, sc=8) 00:14:47.913 Read completed with error (sct=0, sc=8) 00:14:47.913 Write completed with error (sct=0, sc=8) 00:14:47.913 Write completed with error (sct=0, sc=8) 00:14:47.913 Read completed with error (sct=0, sc=8) 00:14:47.913 Read completed with error (sct=0, sc=8) 00:14:47.913 Read completed with error (sct=0, sc=8) 00:14:47.913 Write completed with error (sct=0, sc=8) 00:14:47.914 starting I/O failed: -6 00:14:47.914 Read completed with error (sct=0, sc=8) 00:14:47.914 Read completed with error (sct=0, sc=8) 00:14:47.914 Write completed with error (sct=0, sc=8) 00:14:47.914 Read completed with error (sct=0, sc=8) 00:14:47.914 Read completed with error (sct=0, sc=8) 00:14:47.914 Read completed with error (sct=0, sc=8) 00:14:47.914 Read completed with error (sct=0, sc=8) 00:14:47.914 Read completed with error (sct=0, sc=8) 00:14:47.914 starting I/O failed: -6 00:14:47.914 Write completed with error (sct=0, sc=8) 00:14:47.914 Read completed with error (sct=0, sc=8) 00:14:47.914 Read completed with error (sct=0, sc=8) 00:14:47.914 Write completed with error (sct=0, sc=8) 00:14:47.914 Read completed with error (sct=0, sc=8) 00:14:47.914 Read completed with error (sct=0, sc=8) 00:14:47.914 starting I/O failed: -6 00:14:47.914 Read completed with error (sct=0, sc=8) 00:14:47.914 Write completed with error (sct=0, sc=8) 00:14:47.914 Read completed with error (sct=0, sc=8) 00:14:47.914 Write completed with error (sct=0, sc=8) 00:14:47.914 starting I/O failed: -6 00:14:47.914 [2024-07-14 03:47:06.776676] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fa10c00c1d0 is same with the state(5) to be set 00:14:47.914 Write completed with error (sct=0, sc=8) 00:14:47.914 Write completed with error (sct=0, sc=8) 00:14:47.914 Read completed with error (sct=0, sc=8) 00:14:47.914 Read completed with error (sct=0, sc=8) 00:14:47.914 Read completed with error (sct=0, sc=8) 00:14:47.914 Write completed with error (sct=0, sc=8) 00:14:47.914 Read completed with error (sct=0, sc=8) 00:14:47.914 Read completed with error (sct=0, sc=8) 00:14:47.914 Read completed with error (sct=0, sc=8) 00:14:47.914 Read completed with error (sct=0, sc=8) 00:14:47.914 Read completed with error (sct=0, sc=8) 00:14:47.914 Read completed with error (sct=0, sc=8) 00:14:47.914 Write completed with error (sct=0, sc=8) 00:14:47.914 Read completed with error (sct=0, sc=8) 00:14:47.914 Read completed with error (sct=0, sc=8) 00:14:47.914 Read completed with error (sct=0, sc=8) 00:14:47.914 Read completed with error (sct=0, sc=8) 00:14:47.914 Read completed with error (sct=0, sc=8) 00:14:47.914 Read completed with error (sct=0, sc=8) 00:14:47.914 Write completed with error (sct=0, sc=8) 00:14:47.914 Read completed with error (sct=0, sc=8) 00:14:47.914 Write completed with error (sct=0, sc=8) 00:14:47.914 Read completed with error (sct=0, sc=8) 00:14:47.914 Write completed with error (sct=0, sc=8) 00:14:47.914 Read completed with error (sct=0, sc=8) 00:14:47.914 Read completed with error (sct=0, sc=8) 00:14:47.914 Read completed with error (sct=0, sc=8) 00:14:47.914 Write completed with error (sct=0, sc=8) 00:14:47.914 Read completed with error (sct=0, sc=8) 00:14:47.914 Write 
completed with error (sct=0, sc=8) 00:14:47.914 Read completed with error (sct=0, sc=8) 00:14:47.914 Read completed with error (sct=0, sc=8) 00:14:47.914 Write completed with error (sct=0, sc=8) 00:14:47.914 Read completed with error (sct=0, sc=8) 00:14:47.914 Read completed with error (sct=0, sc=8) 00:14:47.914 Read completed with error (sct=0, sc=8) 00:14:47.914 Read completed with error (sct=0, sc=8) 00:14:47.914 Read completed with error (sct=0, sc=8) 00:14:47.914 Write completed with error (sct=0, sc=8) 00:14:47.914 Read completed with error (sct=0, sc=8) 00:14:47.914 Write completed with error (sct=0, sc=8) 00:14:47.914 Read completed with error (sct=0, sc=8) 00:14:47.914 Read completed with error (sct=0, sc=8) 00:14:47.914 Read completed with error (sct=0, sc=8) 00:14:47.914 Write completed with error (sct=0, sc=8) 00:14:47.914 Read completed with error (sct=0, sc=8) 00:14:47.914 Read completed with error (sct=0, sc=8) 00:14:47.914 Read completed with error (sct=0, sc=8) 00:14:48.851 [2024-07-14 03:47:07.743961] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d7ed70 is same with the state(5) to be set 00:14:48.851 Read completed with error (sct=0, sc=8) 00:14:48.851 Write completed with error (sct=0, sc=8) 00:14:48.851 Read completed with error (sct=0, sc=8) 00:14:48.851 Read completed with error (sct=0, sc=8) 00:14:48.851 Write completed with error (sct=0, sc=8) 00:14:48.851 Read completed with error (sct=0, sc=8) 00:14:48.851 Write completed with error (sct=0, sc=8) 00:14:48.851 Read completed with error (sct=0, sc=8) 00:14:48.851 Write completed with error (sct=0, sc=8) 00:14:48.851 Write completed with error (sct=0, sc=8) 00:14:48.851 Read completed with error (sct=0, sc=8) 00:14:48.851 Read completed with error (sct=0, sc=8) 00:14:48.851 Write completed with error (sct=0, sc=8) 00:14:48.851 Read completed with error (sct=0, sc=8) 00:14:48.851 Write completed with error (sct=0, sc=8) 00:14:48.851 Read completed with error (sct=0, sc=8) 00:14:48.851 [2024-07-14 03:47:07.776516] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fa10c00bf20 is same with the state(5) to be set 00:14:48.851 Read completed with error (sct=0, sc=8) 00:14:48.851 Read completed with error (sct=0, sc=8) 00:14:48.851 Read completed with error (sct=0, sc=8) 00:14:48.851 Write completed with error (sct=0, sc=8) 00:14:48.851 Read completed with error (sct=0, sc=8) 00:14:48.851 Write completed with error (sct=0, sc=8) 00:14:48.851 Write completed with error (sct=0, sc=8) 00:14:48.851 Write completed with error (sct=0, sc=8) 00:14:48.851 Read completed with error (sct=0, sc=8) 00:14:48.851 Write completed with error (sct=0, sc=8) 00:14:48.851 Read completed with error (sct=0, sc=8) 00:14:48.851 Read completed with error (sct=0, sc=8) 00:14:48.851 Read completed with error (sct=0, sc=8) 00:14:48.851 Read completed with error (sct=0, sc=8) 00:14:48.851 Read completed with error (sct=0, sc=8) 00:14:48.851 [2024-07-14 03:47:07.776722] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fa10c00c480 is same with the state(5) to be set 00:14:48.851 Read completed with error (sct=0, sc=8) 00:14:48.851 Read completed with error (sct=0, sc=8) 00:14:48.851 Read completed with error (sct=0, sc=8) 00:14:48.851 Read completed with error (sct=0, sc=8) 00:14:48.851 Read completed with error (sct=0, sc=8) 00:14:48.851 Read completed with error (sct=0, sc=8) 00:14:48.851 Write completed with error (sct=0, sc=8) 00:14:48.851 Read 
completed with error (sct=0, sc=8) 00:14:48.851 Read completed with error (sct=0, sc=8) 00:14:48.851 Read completed with error (sct=0, sc=8) 00:14:48.851 Read completed with error (sct=0, sc=8) 00:14:48.851 Read completed with error (sct=0, sc=8) 00:14:48.851 Read completed with error (sct=0, sc=8) 00:14:48.851 Read completed with error (sct=0, sc=8) 00:14:48.851 Read completed with error (sct=0, sc=8) 00:14:48.851 Read completed with error (sct=0, sc=8) 00:14:48.851 Read completed with error (sct=0, sc=8) 00:14:48.851 Write completed with error (sct=0, sc=8) 00:14:48.851 Read completed with error (sct=0, sc=8) 00:14:48.851 Write completed with error (sct=0, sc=8) 00:14:48.851 Read completed with error (sct=0, sc=8) 00:14:48.851 Read completed with error (sct=0, sc=8) 00:14:48.851 Write completed with error (sct=0, sc=8) 00:14:48.851 Read completed with error (sct=0, sc=8) 00:14:48.851 [2024-07-14 03:47:07.777088] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d80230 is same with the state(5) to be set 00:14:48.851 Read completed with error (sct=0, sc=8) 00:14:48.851 Read completed with error (sct=0, sc=8) 00:14:48.851 Read completed with error (sct=0, sc=8) 00:14:48.851 Write completed with error (sct=0, sc=8) 00:14:48.851 Write completed with error (sct=0, sc=8) 00:14:48.851 Write completed with error (sct=0, sc=8) 00:14:48.851 Read completed with error (sct=0, sc=8) 00:14:48.851 Write completed with error (sct=0, sc=8) 00:14:48.851 Read completed with error (sct=0, sc=8) 00:14:48.851 Read completed with error (sct=0, sc=8) 00:14:48.851 Write completed with error (sct=0, sc=8) 00:14:48.851 Read completed with error (sct=0, sc=8) 00:14:48.851 Write completed with error (sct=0, sc=8) 00:14:48.851 Read completed with error (sct=0, sc=8) 00:14:48.851 Read completed with error (sct=0, sc=8) 00:14:48.851 Write completed with error (sct=0, sc=8) 00:14:48.851 Write completed with error (sct=0, sc=8) 00:14:48.851 Read completed with error (sct=0, sc=8) 00:14:48.851 Read completed with error (sct=0, sc=8) 00:14:48.851 Write completed with error (sct=0, sc=8) 00:14:48.851 Read completed with error (sct=0, sc=8) 00:14:48.851 Read completed with error (sct=0, sc=8) 00:14:48.851 Write completed with error (sct=0, sc=8) 00:14:48.851 Write completed with error (sct=0, sc=8) 00:14:48.851 [2024-07-14 03:47:07.779514] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d98570 is same with the state(5) to be set 00:14:48.851 [2024-07-14 03:47:07.780127] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d7ed70 (9): Bad file descriptor 00:14:48.851 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:14:48.851 03:47:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:48.851 03:47:07 -- target/delete_subsystem.sh@34 -- # delay=0 00:14:48.851 03:47:07 -- target/delete_subsystem.sh@35 -- # kill -0 2349397 00:14:48.851 03:47:07 -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:14:48.851 Initializing NVMe Controllers 00:14:48.851 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:48.851 Controller IO queue size 128, less than required. 00:14:48.851 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:14:48.851 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:14:48.851 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:14:48.851 Initialization complete. Launching workers. 00:14:48.851 ======================================================== 00:14:48.851 Latency(us) 00:14:48.851 Device Information : IOPS MiB/s Average min max 00:14:48.851 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 169.29 0.08 896887.86 682.99 1011370.26 00:14:48.851 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 153.40 0.07 934805.41 330.35 1011436.23 00:14:48.851 ======================================================== 00:14:48.851 Total : 322.69 0.16 914913.28 330.35 1011436.23 00:14:48.851 00:14:49.453 03:47:08 -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:14:49.453 03:47:08 -- target/delete_subsystem.sh@35 -- # kill -0 2349397 00:14:49.453 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (2349397) - No such process 00:14:49.453 03:47:08 -- target/delete_subsystem.sh@45 -- # NOT wait 2349397 00:14:49.453 03:47:08 -- common/autotest_common.sh@640 -- # local es=0 00:14:49.453 03:47:08 -- common/autotest_common.sh@642 -- # valid_exec_arg wait 2349397 00:14:49.453 03:47:08 -- common/autotest_common.sh@628 -- # local arg=wait 00:14:49.453 03:47:08 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:14:49.453 03:47:08 -- common/autotest_common.sh@632 -- # type -t wait 00:14:49.453 03:47:08 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:14:49.453 03:47:08 -- common/autotest_common.sh@643 -- # wait 2349397 00:14:49.453 03:47:08 -- common/autotest_common.sh@643 -- # es=1 00:14:49.453 03:47:08 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:14:49.453 03:47:08 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:14:49.453 03:47:08 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:14:49.453 03:47:08 -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:49.453 03:47:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:49.453 03:47:08 -- common/autotest_common.sh@10 -- # set +x 00:14:49.453 03:47:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:49.454 03:47:08 -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:49.454 03:47:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:49.454 03:47:08 -- common/autotest_common.sh@10 -- # set +x 00:14:49.454 [2024-07-14 03:47:08.300099] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:49.454 03:47:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:49.454 03:47:08 -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:49.454 03:47:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:49.454 03:47:08 -- common/autotest_common.sh@10 -- # set +x 00:14:49.454 03:47:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:49.454 03:47:08 -- target/delete_subsystem.sh@54 -- # perf_pid=2349872 00:14:49.454 03:47:08 -- target/delete_subsystem.sh@56 -- # delay=0 00:14:49.454 03:47:08 -- target/delete_subsystem.sh@57 -- # kill -0 2349872 00:14:49.454 03:47:08 -- target/delete_subsystem.sh@52 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:14:49.454 03:47:08 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:49.454 EAL: No free 2048 kB hugepages reported on node 1 00:14:49.454 [2024-07-14 03:47:08.366493] subsystem.c:1344:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:14:50.022 03:47:08 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:50.022 03:47:08 -- target/delete_subsystem.sh@57 -- # kill -0 2349872 00:14:50.022 03:47:08 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:50.588 03:47:09 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:50.588 03:47:09 -- target/delete_subsystem.sh@57 -- # kill -0 2349872 00:14:50.588 03:47:09 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:51.156 03:47:09 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:51.156 03:47:09 -- target/delete_subsystem.sh@57 -- # kill -0 2349872 00:14:51.156 03:47:09 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:51.415 03:47:10 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:51.415 03:47:10 -- target/delete_subsystem.sh@57 -- # kill -0 2349872 00:14:51.415 03:47:10 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:51.983 03:47:10 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:51.984 03:47:10 -- target/delete_subsystem.sh@57 -- # kill -0 2349872 00:14:51.984 03:47:10 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:52.552 03:47:11 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:52.552 03:47:11 -- target/delete_subsystem.sh@57 -- # kill -0 2349872 00:14:52.552 03:47:11 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:52.811 Initializing NVMe Controllers 00:14:52.811 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:52.811 Controller IO queue size 128, less than required. 00:14:52.811 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:52.811 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:14:52.811 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:14:52.811 Initialization complete. Launching workers. 
00:14:52.811 ======================================================== 00:14:52.811 Latency(us) 00:14:52.811 Device Information : IOPS MiB/s Average min max 00:14:52.811 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1003263.85 1000241.79 1011247.06 00:14:52.811 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1005947.50 1000343.08 1013912.92 00:14:52.811 ======================================================== 00:14:52.811 Total : 256.00 0.12 1004605.68 1000241.79 1013912.92 00:14:52.811 00:14:53.070 03:47:11 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:53.070 03:47:11 -- target/delete_subsystem.sh@57 -- # kill -0 2349872 00:14:53.070 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (2349872) - No such process 00:14:53.070 03:47:11 -- target/delete_subsystem.sh@67 -- # wait 2349872 00:14:53.070 03:47:11 -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:14:53.070 03:47:11 -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:14:53.070 03:47:11 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:53.070 03:47:11 -- nvmf/common.sh@116 -- # sync 00:14:53.070 03:47:11 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:14:53.070 03:47:11 -- nvmf/common.sh@119 -- # set +e 00:14:53.070 03:47:11 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:53.070 03:47:11 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:14:53.070 rmmod nvme_tcp 00:14:53.070 rmmod nvme_fabrics 00:14:53.070 rmmod nvme_keyring 00:14:53.070 03:47:11 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:53.070 03:47:11 -- nvmf/common.sh@123 -- # set -e 00:14:53.070 03:47:11 -- nvmf/common.sh@124 -- # return 0 00:14:53.070 03:47:11 -- nvmf/common.sh@477 -- # '[' -n 2349238 ']' 00:14:53.070 03:47:11 -- nvmf/common.sh@478 -- # killprocess 2349238 00:14:53.070 03:47:11 -- common/autotest_common.sh@926 -- # '[' -z 2349238 ']' 00:14:53.070 03:47:11 -- common/autotest_common.sh@930 -- # kill -0 2349238 00:14:53.070 03:47:11 -- common/autotest_common.sh@931 -- # uname 00:14:53.070 03:47:11 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:53.070 03:47:11 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2349238 00:14:53.070 03:47:11 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:14:53.070 03:47:11 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:14:53.070 03:47:11 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2349238' 00:14:53.070 killing process with pid 2349238 00:14:53.070 03:47:11 -- common/autotest_common.sh@945 -- # kill 2349238 00:14:53.070 03:47:11 -- common/autotest_common.sh@950 -- # wait 2349238 00:14:53.329 03:47:12 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:53.329 03:47:12 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:14:53.329 03:47:12 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:14:53.329 03:47:12 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:53.329 03:47:12 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:14:53.329 03:47:12 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:53.329 03:47:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:53.329 03:47:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:55.234 03:47:14 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:14:55.234 00:14:55.234 real 0m12.825s 00:14:55.234 user 0m29.255s 00:14:55.234 sys 0m2.993s 
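Condensing the delete_subsystem run traced above: the target is launched inside the namespace, a TCP transport and subsystem are created over RPC, a delay bdev backed by a null bdev is attached as a namespace, spdk_nvme_perf is started against it, and the subsystem is deleted while I/O is still queued in the delay bdev, which produces the wall of aborted reads/writes (sc=8) seen earlier. The outline below is a hedged reconstruction of that RPC sequence; rpc.py stands in for the script's rpc_cmd wrapper and SPDK_BIN is a placeholder for the build directory, both assumptions rather than the script's literal text.

# Hedged outline of the delete_subsystem flow; SPDK_BIN and rpc.py usage are placeholders.
SPDK_BIN=./build/bin
ip netns exec cvl_0_0_ns_spdk "$SPDK_BIN/nvmf_tgt" -i 0 -e 0xFFFF -m 0x3 &

rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc.py bdev_null_create NULL1 1000 512
rpc.py bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

"$SPDK_BIN/spdk_nvme_perf" -c 0xC \
  -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
  -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
perf_pid=$!
sleep 2
rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # delete while perf I/O is outstanding
wait "$perf_pid" || true                                  # perf exits reporting errors, as in the log

The second half of the test repeats the same idea but deletes the subsystem before it exists again, then polls the perf pid with kill -0 until it disappears, which is the "No such process" line printed at the end of the trace.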
00:14:55.234 03:47:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:55.234 03:47:14 -- common/autotest_common.sh@10 -- # set +x 00:14:55.234 ************************************ 00:14:55.234 END TEST nvmf_delete_subsystem 00:14:55.234 ************************************ 00:14:55.493 03:47:14 -- nvmf/nvmf.sh@36 -- # [[ 1 -eq 1 ]] 00:14:55.493 03:47:14 -- nvmf/nvmf.sh@37 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:14:55.493 03:47:14 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:14:55.493 03:47:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:55.493 03:47:14 -- common/autotest_common.sh@10 -- # set +x 00:14:55.493 ************************************ 00:14:55.493 START TEST nvmf_nvme_cli 00:14:55.493 ************************************ 00:14:55.493 03:47:14 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:14:55.493 * Looking for test storage... 00:14:55.493 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:55.493 03:47:14 -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:55.493 03:47:14 -- nvmf/common.sh@7 -- # uname -s 00:14:55.493 03:47:14 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:55.493 03:47:14 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:55.493 03:47:14 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:55.493 03:47:14 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:55.493 03:47:14 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:55.493 03:47:14 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:55.493 03:47:14 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:55.493 03:47:14 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:55.493 03:47:14 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:55.493 03:47:14 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:55.493 03:47:14 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:55.493 03:47:14 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:14:55.493 03:47:14 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:55.493 03:47:14 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:55.493 03:47:14 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:55.493 03:47:14 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:55.493 03:47:14 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:55.493 03:47:14 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:55.493 03:47:14 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:55.493 03:47:14 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:55.493 03:47:14 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:55.493 03:47:14 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:55.493 03:47:14 -- paths/export.sh@5 -- # export PATH 00:14:55.493 03:47:14 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:55.493 03:47:14 -- nvmf/common.sh@46 -- # : 0 00:14:55.493 03:47:14 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:55.493 03:47:14 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:55.493 03:47:14 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:55.493 03:47:14 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:55.493 03:47:14 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:55.493 03:47:14 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:55.493 03:47:14 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:55.493 03:47:14 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:55.493 03:47:14 -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:55.493 03:47:14 -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:55.493 03:47:14 -- target/nvme_cli.sh@14 -- # devs=() 00:14:55.493 03:47:14 -- target/nvme_cli.sh@16 -- # nvmftestinit 00:14:55.493 03:47:14 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:55.493 03:47:14 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:55.493 03:47:14 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:55.493 03:47:14 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:55.493 03:47:14 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:55.493 03:47:14 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:55.493 03:47:14 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:55.493 03:47:14 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:55.493 03:47:14 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:14:55.493 03:47:14 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:14:55.493 03:47:14 -- nvmf/common.sh@284 -- # xtrace_disable 00:14:55.493 03:47:14 -- common/autotest_common.sh@10 -- # set +x 00:14:57.401 03:47:16 -- 
nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:14:57.401 03:47:16 -- nvmf/common.sh@290 -- # pci_devs=() 00:14:57.401 03:47:16 -- nvmf/common.sh@290 -- # local -a pci_devs 00:14:57.401 03:47:16 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:14:57.401 03:47:16 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:14:57.401 03:47:16 -- nvmf/common.sh@292 -- # pci_drivers=() 00:14:57.401 03:47:16 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:14:57.401 03:47:16 -- nvmf/common.sh@294 -- # net_devs=() 00:14:57.401 03:47:16 -- nvmf/common.sh@294 -- # local -ga net_devs 00:14:57.401 03:47:16 -- nvmf/common.sh@295 -- # e810=() 00:14:57.401 03:47:16 -- nvmf/common.sh@295 -- # local -ga e810 00:14:57.401 03:47:16 -- nvmf/common.sh@296 -- # x722=() 00:14:57.401 03:47:16 -- nvmf/common.sh@296 -- # local -ga x722 00:14:57.401 03:47:16 -- nvmf/common.sh@297 -- # mlx=() 00:14:57.401 03:47:16 -- nvmf/common.sh@297 -- # local -ga mlx 00:14:57.401 03:47:16 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:57.401 03:47:16 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:57.401 03:47:16 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:57.401 03:47:16 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:57.401 03:47:16 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:57.401 03:47:16 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:57.401 03:47:16 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:57.401 03:47:16 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:57.401 03:47:16 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:57.401 03:47:16 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:57.401 03:47:16 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:57.401 03:47:16 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:14:57.401 03:47:16 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:14:57.401 03:47:16 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:14:57.401 03:47:16 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:14:57.401 03:47:16 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:14:57.401 03:47:16 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:14:57.401 03:47:16 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:57.401 03:47:16 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:14:57.401 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:14:57.401 03:47:16 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:14:57.401 03:47:16 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:14:57.401 03:47:16 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:57.401 03:47:16 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:57.401 03:47:16 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:14:57.402 03:47:16 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:57.402 03:47:16 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:14:57.402 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:14:57.402 03:47:16 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:14:57.402 03:47:16 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:14:57.402 03:47:16 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:57.402 03:47:16 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:57.402 03:47:16 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 
00:14:57.402 03:47:16 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:14:57.402 03:47:16 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:14:57.402 03:47:16 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:14:57.402 03:47:16 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:57.402 03:47:16 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:57.402 03:47:16 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:57.402 03:47:16 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:57.402 03:47:16 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:14:57.402 Found net devices under 0000:0a:00.0: cvl_0_0 00:14:57.402 03:47:16 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:57.402 03:47:16 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:57.402 03:47:16 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:57.402 03:47:16 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:57.402 03:47:16 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:57.402 03:47:16 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:14:57.402 Found net devices under 0000:0a:00.1: cvl_0_1 00:14:57.402 03:47:16 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:57.402 03:47:16 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:14:57.402 03:47:16 -- nvmf/common.sh@402 -- # is_hw=yes 00:14:57.402 03:47:16 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:14:57.402 03:47:16 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:14:57.402 03:47:16 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:14:57.402 03:47:16 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:57.402 03:47:16 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:57.402 03:47:16 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:57.402 03:47:16 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:14:57.402 03:47:16 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:57.402 03:47:16 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:57.402 03:47:16 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:14:57.402 03:47:16 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:57.402 03:47:16 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:57.402 03:47:16 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:14:57.402 03:47:16 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:14:57.402 03:47:16 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:14:57.402 03:47:16 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:57.402 03:47:16 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:57.402 03:47:16 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:57.402 03:47:16 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:14:57.402 03:47:16 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:57.402 03:47:16 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:57.402 03:47:16 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:57.662 03:47:16 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:14:57.662 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:57.662 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.126 ms 00:14:57.662 00:14:57.662 --- 10.0.0.2 ping statistics --- 00:14:57.662 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:57.662 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 00:14:57.662 03:47:16 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:57.662 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:57.662 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.158 ms 00:14:57.662 00:14:57.662 --- 10.0.0.1 ping statistics --- 00:14:57.662 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:57.662 rtt min/avg/max/mdev = 0.158/0.158/0.158/0.000 ms 00:14:57.662 03:47:16 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:57.662 03:47:16 -- nvmf/common.sh@410 -- # return 0 00:14:57.662 03:47:16 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:57.662 03:47:16 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:57.662 03:47:16 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:57.662 03:47:16 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:57.662 03:47:16 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:57.662 03:47:16 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:57.662 03:47:16 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:57.662 03:47:16 -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:14:57.662 03:47:16 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:57.662 03:47:16 -- common/autotest_common.sh@712 -- # xtrace_disable 00:14:57.662 03:47:16 -- common/autotest_common.sh@10 -- # set +x 00:14:57.662 03:47:16 -- nvmf/common.sh@469 -- # nvmfpid=2352297 00:14:57.662 03:47:16 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:57.662 03:47:16 -- nvmf/common.sh@470 -- # waitforlisten 2352297 00:14:57.662 03:47:16 -- common/autotest_common.sh@819 -- # '[' -z 2352297 ']' 00:14:57.662 03:47:16 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:57.662 03:47:16 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:57.662 03:47:16 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:57.662 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:57.662 03:47:16 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:57.662 03:47:16 -- common/autotest_common.sh@10 -- # set +x 00:14:57.662 [2024-07-14 03:47:16.434281] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:14:57.662 [2024-07-14 03:47:16.434361] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:57.662 EAL: No free 2048 kB hugepages reported on node 1 00:14:57.662 [2024-07-14 03:47:16.500534] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:57.662 [2024-07-14 03:47:16.589177] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:57.662 [2024-07-14 03:47:16.589335] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:57.662 [2024-07-14 03:47:16.589352] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
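For the nvme_cli test the same startup pattern repeats: nvmf_tgt is launched inside cvl_0_0_ns_spdk with a 0xF core mask and waitforlisten (max_retries=100 in the trace) polls until the /var/tmp/spdk.sock RPC socket answers before any rpc_cmd calls are issued. The snippet below is a small, hedged approximation of that readiness wait, not the helper from autotest_common.sh itself; the 0.5 s poll interval and the rpc_get_methods probe are illustrative choices.

# Hedged approximation of waitforlisten: poll until the RPC socket answers
# or the target process dies. Retry interval and probe method are illustrative.
nvmfpid=$1
rpc_sock=/var/tmp/spdk.sock
for ((i = 0; i < 100; i++)); do
  kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
  if [ -S "$rpc_sock" ] && rpc.py -s "$rpc_sock" rpc_get_methods &>/dev/null; then
    echo "nvmf_tgt ($nvmfpid) is listening on $rpc_sock"
    exit 0
  fi
  sleep 0.5
done
echo "timed out waiting for $rpc_sock" >&2
exit 1

Once the socket is up, the trace continues with the malloc bdevs, subsystem creation, and the nvme discover/connect commands that produce the discovery log and SPDKISFASTANDAWESOME serial checks below.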
00:14:57.662 [2024-07-14 03:47:16.589365] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:57.662 [2024-07-14 03:47:16.589430] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:57.662 [2024-07-14 03:47:16.589491] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:57.662 [2024-07-14 03:47:16.589556] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:57.662 [2024-07-14 03:47:16.589558] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:58.598 03:47:17 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:58.598 03:47:17 -- common/autotest_common.sh@852 -- # return 0 00:14:58.598 03:47:17 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:58.598 03:47:17 -- common/autotest_common.sh@718 -- # xtrace_disable 00:14:58.598 03:47:17 -- common/autotest_common.sh@10 -- # set +x 00:14:58.598 03:47:17 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:58.598 03:47:17 -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:58.598 03:47:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:58.598 03:47:17 -- common/autotest_common.sh@10 -- # set +x 00:14:58.598 [2024-07-14 03:47:17.398518] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:58.598 03:47:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:58.598 03:47:17 -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:58.598 03:47:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:58.598 03:47:17 -- common/autotest_common.sh@10 -- # set +x 00:14:58.598 Malloc0 00:14:58.598 03:47:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:58.598 03:47:17 -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:14:58.598 03:47:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:58.598 03:47:17 -- common/autotest_common.sh@10 -- # set +x 00:14:58.598 Malloc1 00:14:58.598 03:47:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:58.598 03:47:17 -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:14:58.599 03:47:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:58.599 03:47:17 -- common/autotest_common.sh@10 -- # set +x 00:14:58.599 03:47:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:58.599 03:47:17 -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:58.599 03:47:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:58.599 03:47:17 -- common/autotest_common.sh@10 -- # set +x 00:14:58.599 03:47:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:58.599 03:47:17 -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:58.599 03:47:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:58.599 03:47:17 -- common/autotest_common.sh@10 -- # set +x 00:14:58.599 03:47:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:58.599 03:47:17 -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:58.599 03:47:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:58.599 03:47:17 -- common/autotest_common.sh@10 -- # set +x 00:14:58.599 [2024-07-14 03:47:17.480050] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:14:58.599 03:47:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:58.599 03:47:17 -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:58.599 03:47:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:58.599 03:47:17 -- common/autotest_common.sh@10 -- # set +x 00:14:58.599 03:47:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:58.599 03:47:17 -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:14:58.859 00:14:58.859 Discovery Log Number of Records 2, Generation counter 2 00:14:58.859 =====Discovery Log Entry 0====== 00:14:58.859 trtype: tcp 00:14:58.859 adrfam: ipv4 00:14:58.859 subtype: current discovery subsystem 00:14:58.859 treq: not required 00:14:58.859 portid: 0 00:14:58.859 trsvcid: 4420 00:14:58.859 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:14:58.859 traddr: 10.0.0.2 00:14:58.859 eflags: explicit discovery connections, duplicate discovery information 00:14:58.859 sectype: none 00:14:58.859 =====Discovery Log Entry 1====== 00:14:58.859 trtype: tcp 00:14:58.859 adrfam: ipv4 00:14:58.859 subtype: nvme subsystem 00:14:58.859 treq: not required 00:14:58.859 portid: 0 00:14:58.859 trsvcid: 4420 00:14:58.859 subnqn: nqn.2016-06.io.spdk:cnode1 00:14:58.859 traddr: 10.0.0.2 00:14:58.859 eflags: none 00:14:58.859 sectype: none 00:14:58.859 03:47:17 -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:14:58.859 03:47:17 -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:14:58.859 03:47:17 -- nvmf/common.sh@510 -- # local dev _ 00:14:58.859 03:47:17 -- nvmf/common.sh@512 -- # read -r dev _ 00:14:58.859 03:47:17 -- nvmf/common.sh@509 -- # nvme list 00:14:58.859 03:47:17 -- nvmf/common.sh@513 -- # [[ Node == /dev/nvme* ]] 00:14:58.859 03:47:17 -- nvmf/common.sh@512 -- # read -r dev _ 00:14:58.859 03:47:17 -- nvmf/common.sh@513 -- # [[ --------------------- == /dev/nvme* ]] 00:14:58.859 03:47:17 -- nvmf/common.sh@512 -- # read -r dev _ 00:14:58.859 03:47:17 -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:14:58.859 03:47:17 -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:59.429 03:47:18 -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:14:59.429 03:47:18 -- common/autotest_common.sh@1177 -- # local i=0 00:14:59.429 03:47:18 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:14:59.429 03:47:18 -- common/autotest_common.sh@1179 -- # [[ -n 2 ]] 00:14:59.429 03:47:18 -- common/autotest_common.sh@1180 -- # nvme_device_counter=2 00:14:59.429 03:47:18 -- common/autotest_common.sh@1184 -- # sleep 2 00:15:01.334 03:47:20 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:15:01.334 03:47:20 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:15:01.334 03:47:20 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:15:01.334 03:47:20 -- common/autotest_common.sh@1186 -- # nvme_devices=2 00:15:01.334 03:47:20 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:15:01.334 03:47:20 -- common/autotest_common.sh@1187 -- # return 0 00:15:01.334 03:47:20 -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:15:01.334 03:47:20 -- 
nvmf/common.sh@510 -- # local dev _ 00:15:01.334 03:47:20 -- nvmf/common.sh@512 -- # read -r dev _ 00:15:01.334 03:47:20 -- nvmf/common.sh@509 -- # nvme list 00:15:01.591 03:47:20 -- nvmf/common.sh@513 -- # [[ Node == /dev/nvme* ]] 00:15:01.591 03:47:20 -- nvmf/common.sh@512 -- # read -r dev _ 00:15:01.591 03:47:20 -- nvmf/common.sh@513 -- # [[ --------------------- == /dev/nvme* ]] 00:15:01.591 03:47:20 -- nvmf/common.sh@512 -- # read -r dev _ 00:15:01.591 03:47:20 -- nvmf/common.sh@513 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:15:01.591 03:47:20 -- nvmf/common.sh@514 -- # echo /dev/nvme0n2 00:15:01.591 03:47:20 -- nvmf/common.sh@512 -- # read -r dev _ 00:15:01.591 03:47:20 -- nvmf/common.sh@513 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:15:01.591 03:47:20 -- nvmf/common.sh@514 -- # echo /dev/nvme0n1 00:15:01.591 03:47:20 -- nvmf/common.sh@512 -- # read -r dev _ 00:15:01.591 03:47:20 -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:15:01.591 /dev/nvme0n1 ]] 00:15:01.591 03:47:20 -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:15:01.591 03:47:20 -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:15:01.591 03:47:20 -- nvmf/common.sh@510 -- # local dev _ 00:15:01.591 03:47:20 -- nvmf/common.sh@512 -- # read -r dev _ 00:15:01.591 03:47:20 -- nvmf/common.sh@509 -- # nvme list 00:15:01.591 03:47:20 -- nvmf/common.sh@513 -- # [[ Node == /dev/nvme* ]] 00:15:01.591 03:47:20 -- nvmf/common.sh@512 -- # read -r dev _ 00:15:01.591 03:47:20 -- nvmf/common.sh@513 -- # [[ --------------------- == /dev/nvme* ]] 00:15:01.591 03:47:20 -- nvmf/common.sh@512 -- # read -r dev _ 00:15:01.591 03:47:20 -- nvmf/common.sh@513 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:15:01.591 03:47:20 -- nvmf/common.sh@514 -- # echo /dev/nvme0n2 00:15:01.591 03:47:20 -- nvmf/common.sh@512 -- # read -r dev _ 00:15:01.591 03:47:20 -- nvmf/common.sh@513 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:15:01.591 03:47:20 -- nvmf/common.sh@514 -- # echo /dev/nvme0n1 00:15:01.591 03:47:20 -- nvmf/common.sh@512 -- # read -r dev _ 00:15:01.591 03:47:20 -- target/nvme_cli.sh@59 -- # nvme_num=2 00:15:01.591 03:47:20 -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:01.849 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:01.849 03:47:20 -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:01.849 03:47:20 -- common/autotest_common.sh@1198 -- # local i=0 00:15:01.849 03:47:20 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:15:01.849 03:47:20 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:01.849 03:47:20 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:15:01.849 03:47:20 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:01.849 03:47:20 -- common/autotest_common.sh@1210 -- # return 0 00:15:01.849 03:47:20 -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:15:01.849 03:47:20 -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:01.849 03:47:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:01.849 03:47:20 -- common/autotest_common.sh@10 -- # set +x 00:15:01.849 03:47:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:01.849 03:47:20 -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:15:01.849 03:47:20 -- target/nvme_cli.sh@70 -- # nvmftestfini 00:15:01.849 03:47:20 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:01.849 03:47:20 -- nvmf/common.sh@116 -- # sync 00:15:01.849 03:47:20 -- 
nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:01.849 03:47:20 -- nvmf/common.sh@119 -- # set +e 00:15:01.849 03:47:20 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:01.849 03:47:20 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:01.849 rmmod nvme_tcp 00:15:02.109 rmmod nvme_fabrics 00:15:02.109 rmmod nvme_keyring 00:15:02.109 03:47:20 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:02.109 03:47:20 -- nvmf/common.sh@123 -- # set -e 00:15:02.109 03:47:20 -- nvmf/common.sh@124 -- # return 0 00:15:02.109 03:47:20 -- nvmf/common.sh@477 -- # '[' -n 2352297 ']' 00:15:02.109 03:47:20 -- nvmf/common.sh@478 -- # killprocess 2352297 00:15:02.109 03:47:20 -- common/autotest_common.sh@926 -- # '[' -z 2352297 ']' 00:15:02.110 03:47:20 -- common/autotest_common.sh@930 -- # kill -0 2352297 00:15:02.110 03:47:20 -- common/autotest_common.sh@931 -- # uname 00:15:02.110 03:47:20 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:02.110 03:47:20 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2352297 00:15:02.110 03:47:20 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:15:02.110 03:47:20 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:15:02.110 03:47:20 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2352297' 00:15:02.110 killing process with pid 2352297 00:15:02.110 03:47:20 -- common/autotest_common.sh@945 -- # kill 2352297 00:15:02.110 03:47:20 -- common/autotest_common.sh@950 -- # wait 2352297 00:15:02.369 03:47:21 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:02.369 03:47:21 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:02.369 03:47:21 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:02.369 03:47:21 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:02.369 03:47:21 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:02.369 03:47:21 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:02.369 03:47:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:02.369 03:47:21 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:04.296 03:47:23 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:15:04.296 00:15:04.296 real 0m8.984s 00:15:04.296 user 0m18.653s 00:15:04.296 sys 0m2.232s 00:15:04.296 03:47:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:04.296 03:47:23 -- common/autotest_common.sh@10 -- # set +x 00:15:04.296 ************************************ 00:15:04.296 END TEST nvmf_nvme_cli 00:15:04.296 ************************************ 00:15:04.296 03:47:23 -- nvmf/nvmf.sh@39 -- # [[ 1 -eq 1 ]] 00:15:04.296 03:47:23 -- nvmf/nvmf.sh@40 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:15:04.296 03:47:23 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:15:04.296 03:47:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:04.296 03:47:23 -- common/autotest_common.sh@10 -- # set +x 00:15:04.296 ************************************ 00:15:04.296 START TEST nvmf_vfio_user 00:15:04.296 ************************************ 00:15:04.297 03:47:23 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:15:04.554 * Looking for test storage... 
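The nvme_cli run that just completed reduces to a short target-side RPC sequence plus a host-side connect/verify/disconnect cycle. A condensed sketch, using scripts/rpc.py as a stand-in for the rpc_cmd helper and dropping the --hostnqn/--hostid arguments and error handling seen in the log:

  # target side, against the nvmf_tgt started earlier
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  # host side
  nvme discover -t tcp -a 10.0.0.2 -s 4420
  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
  lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME   # expect 2 namespaces
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1
  # teardown
  scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1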
00:15:04.554 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:04.554 03:47:23 -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:04.554 03:47:23 -- nvmf/common.sh@7 -- # uname -s 00:15:04.554 03:47:23 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:04.554 03:47:23 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:04.554 03:47:23 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:04.554 03:47:23 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:04.554 03:47:23 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:04.554 03:47:23 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:04.554 03:47:23 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:04.554 03:47:23 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:04.554 03:47:23 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:04.554 03:47:23 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:04.554 03:47:23 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:04.554 03:47:23 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:15:04.554 03:47:23 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:04.554 03:47:23 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:04.554 03:47:23 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:04.554 03:47:23 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:04.554 03:47:23 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:04.554 03:47:23 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:04.554 03:47:23 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:04.554 03:47:23 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:04.555 03:47:23 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:04.555 03:47:23 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:04.555 03:47:23 -- paths/export.sh@5 -- # export PATH 00:15:04.555 03:47:23 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:04.555 03:47:23 -- nvmf/common.sh@46 -- # : 0 00:15:04.555 03:47:23 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:04.555 03:47:23 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:04.555 03:47:23 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:04.555 03:47:23 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:04.555 03:47:23 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:04.555 03:47:23 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:15:04.555 03:47:23 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:04.555 03:47:23 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:04.555 03:47:23 -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:15:04.555 03:47:23 -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:15:04.555 03:47:23 -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:15:04.555 03:47:23 -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:04.555 03:47:23 -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:15:04.555 03:47:23 -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:15:04.555 03:47:23 -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:15:04.555 03:47:23 -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:15:04.555 03:47:23 -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:15:04.555 03:47:23 -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:15:04.555 03:47:23 -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=2353252 00:15:04.555 03:47:23 -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:15:04.555 03:47:23 -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 2353252' 00:15:04.555 Process pid: 2353252 00:15:04.555 03:47:23 -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:04.555 03:47:23 -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 2353252 00:15:04.555 03:47:23 -- common/autotest_common.sh@819 -- # '[' -z 2353252 ']' 00:15:04.555 03:47:23 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:04.555 03:47:23 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:04.555 03:47:23 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:04.555 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:04.555 03:47:23 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:04.555 03:47:23 -- common/autotest_common.sh@10 -- # set +x 00:15:04.555 [2024-07-14 03:47:23.316729] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:15:04.555 [2024-07-14 03:47:23.316802] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:04.555 EAL: No free 2048 kB hugepages reported on node 1 00:15:04.555 [2024-07-14 03:47:23.375030] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:04.555 [2024-07-14 03:47:23.460662] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:04.555 [2024-07-14 03:47:23.460824] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:04.555 [2024-07-14 03:47:23.460843] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:04.555 [2024-07-14 03:47:23.460858] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:04.555 [2024-07-14 03:47:23.460931] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:04.555 [2024-07-14 03:47:23.461002] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:04.555 [2024-07-14 03:47:23.461096] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:04.555 [2024-07-14 03:47:23.461098] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:05.499 03:47:24 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:05.499 03:47:24 -- common/autotest_common.sh@852 -- # return 0 00:15:05.499 03:47:24 -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:15:06.470 03:47:25 -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:15:06.728 03:47:25 -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:15:06.728 03:47:25 -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:15:06.728 03:47:25 -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:06.728 03:47:25 -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:15:06.728 03:47:25 -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:06.985 Malloc1 00:15:06.985 03:47:25 -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:15:07.268 03:47:26 -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:15:07.526 03:47:26 -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:15:07.784 03:47:26 -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:07.784 03:47:26 -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:15:07.784 03:47:26 -- 
target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:15:08.042 Malloc2 00:15:08.042 03:47:26 -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:15:08.300 03:47:27 -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:15:08.558 03:47:27 -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:15:08.819 03:47:27 -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:15:08.819 03:47:27 -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:15:08.819 03:47:27 -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:08.819 03:47:27 -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:15:08.819 03:47:27 -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:15:08.819 03:47:27 -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:15:08.819 [2024-07-14 03:47:27.544894] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:15:08.819 [2024-07-14 03:47:27.544933] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2353706 ] 00:15:08.819 EAL: No free 2048 kB hugepages reported on node 1 00:15:08.819 [2024-07-14 03:47:27.579351] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:15:08.819 [2024-07-14 03:47:27.588276] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:08.819 [2024-07-14 03:47:27.588315] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f2a67651000 00:15:08.819 [2024-07-14 03:47:27.589276] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:08.819 [2024-07-14 03:47:27.590271] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:08.819 [2024-07-14 03:47:27.591275] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:08.819 [2024-07-14 03:47:27.592281] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:08.819 [2024-07-14 03:47:27.593286] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:08.819 [2024-07-14 03:47:27.594288] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:08.819 [2024-07-14 03:47:27.595290] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: 
*DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:08.819 [2024-07-14 03:47:27.596308] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:08.819 [2024-07-14 03:47:27.597314] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:08.819 [2024-07-14 03:47:27.597335] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f2a66405000 00:15:08.819 [2024-07-14 03:47:27.598464] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:08.819 [2024-07-14 03:47:27.613654] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:15:08.819 [2024-07-14 03:47:27.613703] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:15:08.819 [2024-07-14 03:47:27.618437] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:15:08.819 [2024-07-14 03:47:27.618493] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:15:08.819 [2024-07-14 03:47:27.618584] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:15:08.819 [2024-07-14 03:47:27.618616] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:15:08.819 [2024-07-14 03:47:27.618627] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:15:08.819 [2024-07-14 03:47:27.619431] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:15:08.819 [2024-07-14 03:47:27.619452] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:15:08.819 [2024-07-14 03:47:27.619464] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:15:08.819 [2024-07-14 03:47:27.620434] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:15:08.819 [2024-07-14 03:47:27.620454] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:15:08.819 [2024-07-14 03:47:27.620467] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:15:08.819 [2024-07-14 03:47:27.621444] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:15:08.819 [2024-07-14 03:47:27.621462] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:15:08.819 [2024-07-14 03:47:27.622455] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 
0x1c, value 0x0 00:15:08.819 [2024-07-14 03:47:27.622475] nvme_ctrlr.c:3737:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:15:08.819 [2024-07-14 03:47:27.622484] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:15:08.819 [2024-07-14 03:47:27.622496] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:15:08.819 [2024-07-14 03:47:27.622608] nvme_ctrlr.c:3930:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:15:08.819 [2024-07-14 03:47:27.622617] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:15:08.819 [2024-07-14 03:47:27.622625] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:15:08.819 [2024-07-14 03:47:27.623461] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:15:08.819 [2024-07-14 03:47:27.624469] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:15:08.819 [2024-07-14 03:47:27.625480] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:15:08.819 [2024-07-14 03:47:27.627889] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:15:08.819 [2024-07-14 03:47:27.628495] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:15:08.819 [2024-07-14 03:47:27.628512] nvme_ctrlr.c:3772:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:15:08.819 [2024-07-14 03:47:27.628520] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:15:08.819 [2024-07-14 03:47:27.628545] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:15:08.819 [2024-07-14 03:47:27.628564] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:15:08.819 [2024-07-14 03:47:27.628587] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:08.819 [2024-07-14 03:47:27.628597] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:08.819 [2024-07-14 03:47:27.628618] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:08.819 [2024-07-14 03:47:27.628686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:15:08.819 [2024-07-14 03:47:27.628702] nvme_ctrlr.c:1972:nvme_ctrlr_identify_done: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:15:08.819 [2024-07-14 03:47:27.628710] nvme_ctrlr.c:1976:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:15:08.819 [2024-07-14 03:47:27.628717] nvme_ctrlr.c:1979:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:15:08.819 [2024-07-14 03:47:27.628724] nvme_ctrlr.c:1990:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:15:08.819 [2024-07-14 03:47:27.628733] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:15:08.819 [2024-07-14 03:47:27.628740] nvme_ctrlr.c:2018:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:15:08.819 [2024-07-14 03:47:27.628748] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:15:08.819 [2024-07-14 03:47:27.628764] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:15:08.819 [2024-07-14 03:47:27.628779] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:15:08.819 [2024-07-14 03:47:27.628795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:15:08.819 [2024-07-14 03:47:27.628815] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:08.819 [2024-07-14 03:47:27.628828] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:08.819 [2024-07-14 03:47:27.628843] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:08.819 [2024-07-14 03:47:27.628878] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:08.819 [2024-07-14 03:47:27.628887] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:15:08.819 [2024-07-14 03:47:27.628903] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:15:08.819 [2024-07-14 03:47:27.628917] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:15:08.819 [2024-07-14 03:47:27.628929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:15:08.819 [2024-07-14 03:47:27.628940] nvme_ctrlr.c:2878:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:15:08.819 [2024-07-14 03:47:27.628948] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:15:08.820 [2024-07-14 03:47:27.628959] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:15:08.820 [2024-07-14 03:47:27.628972] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:15:08.820 [2024-07-14 03:47:27.628986] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:08.820 [2024-07-14 03:47:27.629002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:15:08.820 [2024-07-14 03:47:27.629065] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:15:08.820 [2024-07-14 03:47:27.629078] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:15:08.820 [2024-07-14 03:47:27.629092] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:15:08.820 [2024-07-14 03:47:27.629100] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:15:08.820 [2024-07-14 03:47:27.629109] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:15:08.820 [2024-07-14 03:47:27.629123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:15:08.820 [2024-07-14 03:47:27.629147] nvme_ctrlr.c:4556:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:15:08.820 [2024-07-14 03:47:27.629162] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:15:08.820 [2024-07-14 03:47:27.629201] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:15:08.820 [2024-07-14 03:47:27.629212] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:08.820 [2024-07-14 03:47:27.629220] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:08.820 [2024-07-14 03:47:27.629229] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:08.820 [2024-07-14 03:47:27.629251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:15:08.820 [2024-07-14 03:47:27.629276] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:15:08.820 [2024-07-14 03:47:27.629290] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:15:08.820 [2024-07-14 03:47:27.629301] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:08.820 [2024-07-14 03:47:27.629309] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:08.820 [2024-07-14 03:47:27.629318] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:08.820 [2024-07-14 03:47:27.629334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:15:08.820 [2024-07-14 03:47:27.629347] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:15:08.820 [2024-07-14 03:47:27.629358] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 00:15:08.820 [2024-07-14 03:47:27.629372] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:15:08.820 [2024-07-14 03:47:27.629381] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:15:08.820 [2024-07-14 03:47:27.629390] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:15:08.820 [2024-07-14 03:47:27.629398] nvme_ctrlr.c:2978:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:15:08.820 [2024-07-14 03:47:27.629406] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:15:08.820 [2024-07-14 03:47:27.629414] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:15:08.820 [2024-07-14 03:47:27.629439] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:15:08.820 [2024-07-14 03:47:27.629457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:15:08.820 [2024-07-14 03:47:27.629474] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:15:08.820 [2024-07-14 03:47:27.629486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:15:08.820 [2024-07-14 03:47:27.629501] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:15:08.820 [2024-07-14 03:47:27.629515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:15:08.820 [2024-07-14 03:47:27.629530] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:08.820 [2024-07-14 03:47:27.629541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:15:08.820 [2024-07-14 03:47:27.629558] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:15:08.820 [2024-07-14 03:47:27.629566] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:15:08.820 [2024-07-14 03:47:27.629572] nvme_pcie_common.c:1235:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:15:08.820 [2024-07-14 
03:47:27.629578] nvme_pcie_common.c:1251:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:15:08.820 [2024-07-14 03:47:27.629590] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:15:08.820 [2024-07-14 03:47:27.629602] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:15:08.820 [2024-07-14 03:47:27.629610] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:15:08.820 [2024-07-14 03:47:27.629618] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:15:08.820 [2024-07-14 03:47:27.629628] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:15:08.820 [2024-07-14 03:47:27.629636] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:08.820 [2024-07-14 03:47:27.629644] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:08.820 [2024-07-14 03:47:27.629655] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:15:08.820 [2024-07-14 03:47:27.629663] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:15:08.820 [2024-07-14 03:47:27.629672] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:15:08.820 [2024-07-14 03:47:27.629683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:15:08.820 [2024-07-14 03:47:27.629702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:15:08.820 [2024-07-14 03:47:27.629716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:15:08.820 [2024-07-14 03:47:27.629727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:15:08.820 ===================================================== 00:15:08.820 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:08.820 ===================================================== 00:15:08.820 Controller Capabilities/Features 00:15:08.820 ================================ 00:15:08.820 Vendor ID: 4e58 00:15:08.820 Subsystem Vendor ID: 4e58 00:15:08.820 Serial Number: SPDK1 00:15:08.820 Model Number: SPDK bdev Controller 00:15:08.820 Firmware Version: 24.01.1 00:15:08.820 Recommended Arb Burst: 6 00:15:08.820 IEEE OUI Identifier: 8d 6b 50 00:15:08.820 Multi-path I/O 00:15:08.820 May have multiple subsystem ports: Yes 00:15:08.820 May have multiple controllers: Yes 00:15:08.820 Associated with SR-IOV VF: No 00:15:08.820 Max Data Transfer Size: 131072 00:15:08.820 Max Number of Namespaces: 32 00:15:08.820 Max Number of I/O Queues: 127 00:15:08.820 NVMe Specification Version (VS): 1.3 00:15:08.820 NVMe Specification Version (Identify): 1.3 00:15:08.820 Maximum Queue Entries: 256 00:15:08.820 Contiguous Queues Required: Yes 00:15:08.820 Arbitration Mechanisms Supported 00:15:08.820 
Weighted Round Robin: Not Supported 00:15:08.820 Vendor Specific: Not Supported 00:15:08.820 Reset Timeout: 15000 ms 00:15:08.820 Doorbell Stride: 4 bytes 00:15:08.820 NVM Subsystem Reset: Not Supported 00:15:08.820 Command Sets Supported 00:15:08.820 NVM Command Set: Supported 00:15:08.820 Boot Partition: Not Supported 00:15:08.820 Memory Page Size Minimum: 4096 bytes 00:15:08.820 Memory Page Size Maximum: 4096 bytes 00:15:08.820 Persistent Memory Region: Not Supported 00:15:08.820 Optional Asynchronous Events Supported 00:15:08.820 Namespace Attribute Notices: Supported 00:15:08.820 Firmware Activation Notices: Not Supported 00:15:08.820 ANA Change Notices: Not Supported 00:15:08.820 PLE Aggregate Log Change Notices: Not Supported 00:15:08.820 LBA Status Info Alert Notices: Not Supported 00:15:08.820 EGE Aggregate Log Change Notices: Not Supported 00:15:08.820 Normal NVM Subsystem Shutdown event: Not Supported 00:15:08.820 Zone Descriptor Change Notices: Not Supported 00:15:08.820 Discovery Log Change Notices: Not Supported 00:15:08.820 Controller Attributes 00:15:08.820 128-bit Host Identifier: Supported 00:15:08.820 Non-Operational Permissive Mode: Not Supported 00:15:08.820 NVM Sets: Not Supported 00:15:08.820 Read Recovery Levels: Not Supported 00:15:08.820 Endurance Groups: Not Supported 00:15:08.820 Predictable Latency Mode: Not Supported 00:15:08.820 Traffic Based Keep ALive: Not Supported 00:15:08.820 Namespace Granularity: Not Supported 00:15:08.820 SQ Associations: Not Supported 00:15:08.820 UUID List: Not Supported 00:15:08.820 Multi-Domain Subsystem: Not Supported 00:15:08.820 Fixed Capacity Management: Not Supported 00:15:08.820 Variable Capacity Management: Not Supported 00:15:08.820 Delete Endurance Group: Not Supported 00:15:08.820 Delete NVM Set: Not Supported 00:15:08.820 Extended LBA Formats Supported: Not Supported 00:15:08.820 Flexible Data Placement Supported: Not Supported 00:15:08.820 00:15:08.820 Controller Memory Buffer Support 00:15:08.820 ================================ 00:15:08.820 Supported: No 00:15:08.820 00:15:08.820 Persistent Memory Region Support 00:15:08.821 ================================ 00:15:08.821 Supported: No 00:15:08.821 00:15:08.821 Admin Command Set Attributes 00:15:08.821 ============================ 00:15:08.821 Security Send/Receive: Not Supported 00:15:08.821 Format NVM: Not Supported 00:15:08.821 Firmware Activate/Download: Not Supported 00:15:08.821 Namespace Management: Not Supported 00:15:08.821 Device Self-Test: Not Supported 00:15:08.821 Directives: Not Supported 00:15:08.821 NVMe-MI: Not Supported 00:15:08.821 Virtualization Management: Not Supported 00:15:08.821 Doorbell Buffer Config: Not Supported 00:15:08.821 Get LBA Status Capability: Not Supported 00:15:08.821 Command & Feature Lockdown Capability: Not Supported 00:15:08.821 Abort Command Limit: 4 00:15:08.821 Async Event Request Limit: 4 00:15:08.821 Number of Firmware Slots: N/A 00:15:08.821 Firmware Slot 1 Read-Only: N/A 00:15:08.821 Firmware Activation Without Reset: N/A 00:15:08.821 Multiple Update Detection Support: N/A 00:15:08.821 Firmware Update Granularity: No Information Provided 00:15:08.821 Per-Namespace SMART Log: No 00:15:08.821 Asymmetric Namespace Access Log Page: Not Supported 00:15:08.821 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:15:08.821 Command Effects Log Page: Supported 00:15:08.821 Get Log Page Extended Data: Supported 00:15:08.821 Telemetry Log Pages: Not Supported 00:15:08.821 Persistent Event Log Pages: Not Supported 00:15:08.821 Supported 
Log Pages Log Page: May Support 00:15:08.821 Commands Supported & Effects Log Page: Not Supported 00:15:08.821 Feature Identifiers & Effects Log Page:May Support 00:15:08.821 NVMe-MI Commands & Effects Log Page: May Support 00:15:08.821 Data Area 4 for Telemetry Log: Not Supported 00:15:08.821 Error Log Page Entries Supported: 128 00:15:08.821 Keep Alive: Supported 00:15:08.821 Keep Alive Granularity: 10000 ms 00:15:08.821 00:15:08.821 NVM Command Set Attributes 00:15:08.821 ========================== 00:15:08.821 Submission Queue Entry Size 00:15:08.821 Max: 64 00:15:08.821 Min: 64 00:15:08.821 Completion Queue Entry Size 00:15:08.821 Max: 16 00:15:08.821 Min: 16 00:15:08.821 Number of Namespaces: 32 00:15:08.821 Compare Command: Supported 00:15:08.821 Write Uncorrectable Command: Not Supported 00:15:08.821 Dataset Management Command: Supported 00:15:08.821 Write Zeroes Command: Supported 00:15:08.821 Set Features Save Field: Not Supported 00:15:08.821 Reservations: Not Supported 00:15:08.821 Timestamp: Not Supported 00:15:08.821 Copy: Supported 00:15:08.821 Volatile Write Cache: Present 00:15:08.821 Atomic Write Unit (Normal): 1 00:15:08.821 Atomic Write Unit (PFail): 1 00:15:08.821 Atomic Compare & Write Unit: 1 00:15:08.821 Fused Compare & Write: Supported 00:15:08.821 Scatter-Gather List 00:15:08.821 SGL Command Set: Supported (Dword aligned) 00:15:08.821 SGL Keyed: Not Supported 00:15:08.821 SGL Bit Bucket Descriptor: Not Supported 00:15:08.821 SGL Metadata Pointer: Not Supported 00:15:08.821 Oversized SGL: Not Supported 00:15:08.821 SGL Metadata Address: Not Supported 00:15:08.821 SGL Offset: Not Supported 00:15:08.821 Transport SGL Data Block: Not Supported 00:15:08.821 Replay Protected Memory Block: Not Supported 00:15:08.821 00:15:08.821 Firmware Slot Information 00:15:08.821 ========================= 00:15:08.821 Active slot: 1 00:15:08.821 Slot 1 Firmware Revision: 24.01.1 00:15:08.821 00:15:08.821 00:15:08.821 Commands Supported and Effects 00:15:08.821 ============================== 00:15:08.821 Admin Commands 00:15:08.821 -------------- 00:15:08.821 Get Log Page (02h): Supported 00:15:08.821 Identify (06h): Supported 00:15:08.821 Abort (08h): Supported 00:15:08.821 Set Features (09h): Supported 00:15:08.821 Get Features (0Ah): Supported 00:15:08.821 Asynchronous Event Request (0Ch): Supported 00:15:08.821 Keep Alive (18h): Supported 00:15:08.821 I/O Commands 00:15:08.821 ------------ 00:15:08.821 Flush (00h): Supported LBA-Change 00:15:08.821 Write (01h): Supported LBA-Change 00:15:08.821 Read (02h): Supported 00:15:08.821 Compare (05h): Supported 00:15:08.821 Write Zeroes (08h): Supported LBA-Change 00:15:08.821 Dataset Management (09h): Supported LBA-Change 00:15:08.821 Copy (19h): Supported LBA-Change 00:15:08.821 Unknown (79h): Supported LBA-Change 00:15:08.821 Unknown (7Ah): Supported 00:15:08.821 00:15:08.821 Error Log 00:15:08.821 ========= 00:15:08.821 00:15:08.821 Arbitration 00:15:08.821 =========== 00:15:08.821 Arbitration Burst: 1 00:15:08.821 00:15:08.821 Power Management 00:15:08.821 ================ 00:15:08.821 Number of Power States: 1 00:15:08.821 Current Power State: Power State #0 00:15:08.821 Power State #0: 00:15:08.821 Max Power: 0.00 W 00:15:08.821 Non-Operational State: Operational 00:15:08.821 Entry Latency: Not Reported 00:15:08.821 Exit Latency: Not Reported 00:15:08.821 Relative Read Throughput: 0 00:15:08.821 Relative Read Latency: 0 00:15:08.821 Relative Write Throughput: 0 00:15:08.821 Relative Write Latency: 0 00:15:08.821 Idle Power: Not 
Reported 00:15:08.821 Active Power: Not Reported 00:15:08.821 Non-Operational Permissive Mode: Not Supported 00:15:08.821 00:15:08.821 Health Information 00:15:08.821 ================== 00:15:08.821 Critical Warnings: 00:15:08.821 Available Spare Space: OK 00:15:08.821 Temperature: OK 00:15:08.821 Device Reliability: OK 00:15:08.821 Read Only: No 00:15:08.821 Volatile Memory Backup: OK 00:15:08.821 Current Temperature: 0 Kelvin[2024-07-14 03:47:27.629979] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:15:08.821 [2024-07-14 03:47:27.629996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:15:08.821 [2024-07-14 03:47:27.630034] nvme_ctrlr.c:4220:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:15:08.821 [2024-07-14 03:47:27.630052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:08.821 [2024-07-14 03:47:27.630063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:08.821 [2024-07-14 03:47:27.630072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:08.821 [2024-07-14 03:47:27.630082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:08.821 [2024-07-14 03:47:27.632877] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:15:08.821 [2024-07-14 03:47:27.632899] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:15:08.821 [2024-07-14 03:47:27.633572] nvme_ctrlr.c:1070:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:15:08.821 [2024-07-14 03:47:27.633585] nvme_ctrlr.c:1073:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:15:08.821 [2024-07-14 03:47:27.634535] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:15:08.821 [2024-07-14 03:47:27.634557] nvme_ctrlr.c:1192:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:15:08.821 [2024-07-14 03:47:27.634616] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:15:08.821 [2024-07-14 03:47:27.636581] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:08.821 (-273 Celsius) 00:15:08.821 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:15:08.821 Available Spare: 0% 00:15:08.821 Available Spare Threshold: 0% 00:15:08.821 Life Percentage Used: 0% 00:15:08.821 Data Units Read: 0 00:15:08.821 Data Units Written: 0 00:15:08.821 Host Read Commands: 0 00:15:08.821 Host Write Commands: 0 00:15:08.821 Controller Busy Time: 0 minutes 00:15:08.821 Power Cycles: 0 00:15:08.822 Power On Hours: 0 hours 00:15:08.822 Unsafe Shutdowns: 0 00:15:08.822 Unrecoverable Media Errors: 0 00:15:08.822 Lifetime Error Log Entries: 0 00:15:08.822 Warning Temperature 
Time: 0 minutes 00:15:08.822 Critical Temperature Time: 0 minutes 00:15:08.822 00:15:08.822 Number of Queues 00:15:08.822 ================ 00:15:08.822 Number of I/O Submission Queues: 127 00:15:08.822 Number of I/O Completion Queues: 127 00:15:08.822 00:15:08.822 Active Namespaces 00:15:08.822 ================= 00:15:08.822 Namespace ID:1 00:15:08.822 Error Recovery Timeout: Unlimited 00:15:08.822 Command Set Identifier: NVM (00h) 00:15:08.822 Deallocate: Supported 00:15:08.822 Deallocated/Unwritten Error: Not Supported 00:15:08.822 Deallocated Read Value: Unknown 00:15:08.822 Deallocate in Write Zeroes: Not Supported 00:15:08.822 Deallocated Guard Field: 0xFFFF 00:15:08.822 Flush: Supported 00:15:08.822 Reservation: Supported 00:15:08.822 Namespace Sharing Capabilities: Multiple Controllers 00:15:08.822 Size (in LBAs): 131072 (0GiB) 00:15:08.822 Capacity (in LBAs): 131072 (0GiB) 00:15:08.822 Utilization (in LBAs): 131072 (0GiB) 00:15:08.822 NGUID: E028DDF3619B4B1DAB5839D160320DC5 00:15:08.822 UUID: e028ddf3-619b-4b1d-ab58-39d160320dc5 00:15:08.822 Thin Provisioning: Not Supported 00:15:08.822 Per-NS Atomic Units: Yes 00:15:08.822 Atomic Boundary Size (Normal): 0 00:15:08.822 Atomic Boundary Size (PFail): 0 00:15:08.822 Atomic Boundary Offset: 0 00:15:08.822 Maximum Single Source Range Length: 65535 00:15:08.822 Maximum Copy Length: 65535 00:15:08.822 Maximum Source Range Count: 1 00:15:08.822 NGUID/EUI64 Never Reused: No 00:15:08.822 Namespace Write Protected: No 00:15:08.822 Number of LBA Formats: 1 00:15:08.822 Current LBA Format: LBA Format #00 00:15:08.822 LBA Format #00: Data Size: 512 Metadata Size: 0 00:15:08.822 00:15:08.822 03:47:27 -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:15:08.822 EAL: No free 2048 kB hugepages reported on node 1 00:15:14.095 Initializing NVMe Controllers 00:15:14.095 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:14.095 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:15:14.095 Initialization complete. Launching workers. 00:15:14.095 ======================================================== 00:15:14.095 Latency(us) 00:15:14.095 Device Information : IOPS MiB/s Average min max 00:15:14.095 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 37093.35 144.90 3450.19 1163.17 7399.43 00:15:14.095 ======================================================== 00:15:14.095 Total : 37093.35 144.90 3450.19 1163.17 7399.43 00:15:14.095 00:15:14.095 03:47:32 -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:15:14.095 EAL: No free 2048 kB hugepages reported on node 1 00:15:19.369 Initializing NVMe Controllers 00:15:19.369 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:19.369 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:15:19.369 Initialization complete. Launching workers. 
00:15:19.369 ======================================================== 00:15:19.369 Latency(us) 00:15:19.369 Device Information : IOPS MiB/s Average min max 00:15:19.369 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16025.59 62.60 7997.17 6955.72 15964.21 00:15:19.369 ======================================================== 00:15:19.369 Total : 16025.59 62.60 7997.17 6955.72 15964.21 00:15:19.369 00:15:19.369 03:47:38 -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:15:19.369 EAL: No free 2048 kB hugepages reported on node 1 00:15:24.644 Initializing NVMe Controllers 00:15:24.644 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:24.644 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:24.644 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:15:24.644 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:15:24.644 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:15:24.644 Initialization complete. Launching workers. 00:15:24.644 Starting thread on core 2 00:15:24.644 Starting thread on core 3 00:15:24.644 Starting thread on core 1 00:15:24.644 03:47:43 -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:15:24.644 EAL: No free 2048 kB hugepages reported on node 1 00:15:28.836 Initializing NVMe Controllers 00:15:28.836 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:28.836 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:28.836 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:15:28.836 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:15:28.836 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:15:28.836 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:15:28.836 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:15:28.836 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:15:28.836 Initialization complete. Launching workers. 
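The arbitration example that has just attached starts one worker thread on each core in mask 0xf against the same controller; the per-core summary that follows reports both an IO/s rate and the equivalent time to complete 100000 I/Os, which are reciprocal views of the same number. A one-line sanity check of that relation, using the core 0 figure printed below (purely illustrative):

  # Sketch only: secs/100000 ios should equal 100000 / (IO/s); core 0 below reports 1539.00 IO/s.
  awk 'BEGIN { printf "%.2f\n", 100000 / 1539.00 }'   # prints 64.98, matching the core 0 row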
00:15:28.836 Starting thread on core 1 with urgent priority queue 00:15:28.836 Starting thread on core 2 with urgent priority queue 00:15:28.836 Starting thread on core 3 with urgent priority queue 00:15:28.836 Starting thread on core 0 with urgent priority queue 00:15:28.836 SPDK bdev Controller (SPDK1 ) core 0: 1539.00 IO/s 64.98 secs/100000 ios 00:15:28.836 SPDK bdev Controller (SPDK1 ) core 1: 1240.33 IO/s 80.62 secs/100000 ios 00:15:28.836 SPDK bdev Controller (SPDK1 ) core 2: 1520.00 IO/s 65.79 secs/100000 ios 00:15:28.836 SPDK bdev Controller (SPDK1 ) core 3: 1642.00 IO/s 60.90 secs/100000 ios 00:15:28.836 ======================================================== 00:15:28.836 00:15:28.836 03:47:47 -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:15:28.836 EAL: No free 2048 kB hugepages reported on node 1 00:15:28.836 Initializing NVMe Controllers 00:15:28.836 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:28.836 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:28.836 Namespace ID: 1 size: 0GB 00:15:28.836 Initialization complete. 00:15:28.836 INFO: using host memory buffer for IO 00:15:28.836 Hello world! 00:15:28.836 03:47:47 -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:15:28.836 EAL: No free 2048 kB hugepages reported on node 1 00:15:29.775 Initializing NVMe Controllers 00:15:29.775 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:29.775 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:29.775 Initialization complete. Launching workers. 
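The overhead tool that has just launched measures per-I/O submission and completion overhead; the lines that follow give average/min/max submit and complete latencies in nanoseconds plus cumulative histograms bucketed in microseconds. A small sketch for pulling only the summary figures out of a saved copy of this output (the log file name is hypothetical):

  # Sketch only: keep just the submit/complete summary lines from a captured run.
  grep -E '(submit|complete) \(in ns\) avg, min, max' overhead_run.log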
00:15:29.775 submit (in ns) avg, min, max = 8321.3, 3438.9, 4017016.7 00:15:29.775 complete (in ns) avg, min, max = 25065.2, 2041.1, 4019008.9 00:15:29.775 00:15:29.775 Submit histogram 00:15:29.775 ================ 00:15:29.775 Range in us Cumulative Count 00:15:29.775 3.437 - 3.461: 0.2267% ( 31) 00:15:29.775 3.461 - 3.484: 0.8777% ( 89) 00:15:29.775 3.484 - 3.508: 2.5234% ( 225) 00:15:29.775 3.508 - 3.532: 7.6946% ( 707) 00:15:29.775 3.532 - 3.556: 14.2262% ( 893) 00:15:29.775 3.556 - 3.579: 24.8683% ( 1455) 00:15:29.775 3.579 - 3.603: 33.9380% ( 1240) 00:15:29.775 3.603 - 3.627: 42.7516% ( 1205) 00:15:29.775 3.627 - 3.650: 49.2686% ( 891) 00:15:29.775 3.650 - 3.674: 54.4178% ( 704) 00:15:29.775 3.674 - 3.698: 58.2066% ( 518) 00:15:29.775 3.698 - 3.721: 61.8417% ( 497) 00:15:29.775 3.721 - 3.745: 65.2501% ( 466) 00:15:29.775 3.745 - 3.769: 68.2855% ( 415) 00:15:29.775 3.769 - 3.793: 71.5404% ( 445) 00:15:29.775 3.793 - 3.816: 75.3950% ( 527) 00:15:29.775 3.816 - 3.840: 79.5860% ( 573) 00:15:29.775 3.840 - 3.864: 82.8920% ( 452) 00:15:29.775 3.864 - 3.887: 85.2399% ( 321) 00:15:29.775 3.887 - 3.911: 86.9149% ( 229) 00:15:29.775 3.911 - 3.935: 88.3558% ( 197) 00:15:29.775 3.935 - 3.959: 89.7089% ( 185) 00:15:29.775 3.959 - 3.982: 90.6232% ( 125) 00:15:29.775 3.982 - 4.006: 91.3985% ( 106) 00:15:29.775 4.006 - 4.030: 91.9836% ( 80) 00:15:29.775 4.030 - 4.053: 92.4956% ( 70) 00:15:29.775 4.053 - 4.077: 92.9783% ( 66) 00:15:29.775 4.077 - 4.101: 93.4977% ( 71) 00:15:29.775 4.101 - 4.124: 93.8561% ( 49) 00:15:29.775 4.124 - 4.148: 94.1852% ( 45) 00:15:29.775 4.148 - 4.172: 94.3754% ( 26) 00:15:29.775 4.172 - 4.196: 94.6314% ( 35) 00:15:29.775 4.196 - 4.219: 94.9093% ( 38) 00:15:29.775 4.219 - 4.243: 95.0922% ( 25) 00:15:29.775 4.243 - 4.267: 95.3116% ( 30) 00:15:29.775 4.267 - 4.290: 95.5310% ( 30) 00:15:29.775 4.290 - 4.314: 95.6554% ( 17) 00:15:29.775 4.314 - 4.338: 95.8455% ( 26) 00:15:29.775 4.338 - 4.361: 95.9479% ( 14) 00:15:29.775 4.361 - 4.385: 96.0503% ( 14) 00:15:29.775 4.385 - 4.409: 96.1381% ( 12) 00:15:29.775 4.409 - 4.433: 96.2478% ( 15) 00:15:29.775 4.433 - 4.456: 96.3063% ( 8) 00:15:29.775 4.456 - 4.480: 96.3868% ( 11) 00:15:29.775 4.480 - 4.504: 96.4453% ( 8) 00:15:29.775 4.504 - 4.527: 96.5111% ( 9) 00:15:29.775 4.527 - 4.551: 96.5550% ( 6) 00:15:29.775 4.551 - 4.575: 96.5696% ( 2) 00:15:29.775 4.575 - 4.599: 96.6135% ( 6) 00:15:29.775 4.599 - 4.622: 96.6720% ( 8) 00:15:29.775 4.622 - 4.646: 96.7086% ( 5) 00:15:29.775 4.670 - 4.693: 96.7232% ( 2) 00:15:29.775 4.693 - 4.717: 96.7452% ( 3) 00:15:29.775 4.717 - 4.741: 96.7525% ( 1) 00:15:29.775 4.741 - 4.764: 96.7671% ( 2) 00:15:29.775 4.764 - 4.788: 96.7891% ( 3) 00:15:29.775 4.788 - 4.812: 96.8037% ( 2) 00:15:29.775 4.812 - 4.836: 96.8329% ( 4) 00:15:29.775 4.836 - 4.859: 96.8768% ( 6) 00:15:29.775 4.859 - 4.883: 96.9427% ( 9) 00:15:29.775 4.883 - 4.907: 96.9646% ( 3) 00:15:29.775 4.907 - 4.930: 97.0231% ( 8) 00:15:29.775 4.930 - 4.954: 97.0597% ( 5) 00:15:29.775 4.954 - 4.978: 97.0889% ( 4) 00:15:29.775 4.978 - 5.001: 97.1036% ( 2) 00:15:29.775 5.001 - 5.025: 97.1475% ( 6) 00:15:29.775 5.025 - 5.049: 97.1987% ( 7) 00:15:29.775 5.049 - 5.073: 97.2352% ( 5) 00:15:29.775 5.073 - 5.096: 97.2791% ( 6) 00:15:29.775 5.096 - 5.120: 97.3230% ( 6) 00:15:29.775 5.120 - 5.144: 97.3523% ( 4) 00:15:29.775 5.144 - 5.167: 97.3742% ( 3) 00:15:29.775 5.167 - 5.191: 97.3961% ( 3) 00:15:29.775 5.191 - 5.215: 97.4327% ( 5) 00:15:29.775 5.215 - 5.239: 97.4547% ( 3) 00:15:29.775 5.239 - 5.262: 97.4766% ( 3) 00:15:29.775 5.262 - 5.286: 97.4985% ( 
3) 00:15:29.775 5.286 - 5.310: 97.5205% ( 3) 00:15:29.775 5.310 - 5.333: 97.5424% ( 3) 00:15:29.775 5.333 - 5.357: 97.5497% ( 1) 00:15:29.775 5.357 - 5.381: 97.5571% ( 1) 00:15:29.775 5.404 - 5.428: 97.5717% ( 2) 00:15:29.775 5.428 - 5.452: 97.5936% ( 3) 00:15:29.775 5.452 - 5.476: 97.6009% ( 1) 00:15:29.775 5.476 - 5.499: 97.6229% ( 3) 00:15:29.775 5.499 - 5.523: 97.6375% ( 2) 00:15:29.775 5.547 - 5.570: 97.6521% ( 2) 00:15:29.775 5.570 - 5.594: 97.6668% ( 2) 00:15:29.775 5.594 - 5.618: 97.6887% ( 3) 00:15:29.775 5.641 - 5.665: 97.7033% ( 2) 00:15:29.775 5.665 - 5.689: 97.7253% ( 3) 00:15:29.775 5.689 - 5.713: 97.7472% ( 3) 00:15:29.775 5.713 - 5.736: 97.7545% ( 1) 00:15:29.775 5.736 - 5.760: 97.7618% ( 1) 00:15:29.775 5.760 - 5.784: 97.7692% ( 1) 00:15:29.775 5.879 - 5.902: 97.7765% ( 1) 00:15:29.775 5.902 - 5.926: 97.7838% ( 1) 00:15:29.775 5.926 - 5.950: 97.7911% ( 1) 00:15:29.775 6.068 - 6.116: 97.8130% ( 3) 00:15:29.775 6.210 - 6.258: 97.8204% ( 1) 00:15:29.775 6.258 - 6.305: 97.8277% ( 1) 00:15:29.775 6.353 - 6.400: 97.8423% ( 2) 00:15:29.775 6.542 - 6.590: 97.8569% ( 2) 00:15:29.775 6.637 - 6.684: 97.8642% ( 1) 00:15:29.775 6.684 - 6.732: 97.8862% ( 3) 00:15:29.775 6.732 - 6.779: 97.8935% ( 1) 00:15:29.775 6.921 - 6.969: 97.9081% ( 2) 00:15:29.775 7.064 - 7.111: 97.9154% ( 1) 00:15:29.775 7.111 - 7.159: 97.9301% ( 2) 00:15:29.775 7.253 - 7.301: 97.9520% ( 3) 00:15:29.775 7.348 - 7.396: 97.9593% ( 1) 00:15:29.775 7.538 - 7.585: 97.9666% ( 1) 00:15:29.775 7.585 - 7.633: 97.9813% ( 2) 00:15:29.775 7.633 - 7.680: 98.0105% ( 4) 00:15:29.775 7.680 - 7.727: 98.0178% ( 1) 00:15:29.775 7.727 - 7.775: 98.0252% ( 1) 00:15:29.775 7.775 - 7.822: 98.0471% ( 3) 00:15:29.775 7.822 - 7.870: 98.0544% ( 1) 00:15:29.775 7.870 - 7.917: 98.0617% ( 1) 00:15:29.775 7.917 - 7.964: 98.0764% ( 2) 00:15:29.775 7.964 - 8.012: 98.1056% ( 4) 00:15:29.775 8.012 - 8.059: 98.1202% ( 2) 00:15:29.775 8.107 - 8.154: 98.1422% ( 3) 00:15:29.775 8.154 - 8.201: 98.1568% ( 2) 00:15:29.775 8.201 - 8.249: 98.1641% ( 1) 00:15:29.775 8.249 - 8.296: 98.1934% ( 4) 00:15:29.775 8.296 - 8.344: 98.2007% ( 1) 00:15:29.775 8.344 - 8.391: 98.2080% ( 1) 00:15:29.775 8.533 - 8.581: 98.2226% ( 2) 00:15:29.775 8.581 - 8.628: 98.2592% ( 5) 00:15:29.775 8.628 - 8.676: 98.2885% ( 4) 00:15:29.775 8.723 - 8.770: 98.3031% ( 2) 00:15:29.775 8.770 - 8.818: 98.3104% ( 1) 00:15:29.775 8.818 - 8.865: 98.3177% ( 1) 00:15:29.775 8.865 - 8.913: 98.3324% ( 2) 00:15:29.775 8.913 - 8.960: 98.3397% ( 1) 00:15:29.775 8.960 - 9.007: 98.3470% ( 1) 00:15:29.775 9.007 - 9.055: 98.3543% ( 1) 00:15:29.775 9.102 - 9.150: 98.3689% ( 2) 00:15:29.775 9.150 - 9.197: 98.3836% ( 2) 00:15:29.775 9.292 - 9.339: 98.3982% ( 2) 00:15:29.775 9.339 - 9.387: 98.4055% ( 1) 00:15:29.775 9.387 - 9.434: 98.4128% ( 1) 00:15:29.775 9.434 - 9.481: 98.4201% ( 1) 00:15:29.775 9.481 - 9.529: 98.4348% ( 2) 00:15:29.775 9.529 - 9.576: 98.4421% ( 1) 00:15:29.775 9.576 - 9.624: 98.4494% ( 1) 00:15:29.775 9.624 - 9.671: 98.4713% ( 3) 00:15:29.775 9.671 - 9.719: 98.4786% ( 1) 00:15:29.775 9.719 - 9.766: 98.5006% ( 3) 00:15:29.775 9.766 - 9.813: 98.5079% ( 1) 00:15:29.776 9.813 - 9.861: 98.5225% ( 2) 00:15:29.776 10.098 - 10.145: 98.5298% ( 1) 00:15:29.776 10.193 - 10.240: 98.5518% ( 3) 00:15:29.776 10.335 - 10.382: 98.5591% ( 1) 00:15:29.776 10.430 - 10.477: 98.5810% ( 3) 00:15:29.776 10.477 - 10.524: 98.5884% ( 1) 00:15:29.776 10.524 - 10.572: 98.5957% ( 1) 00:15:29.776 10.572 - 10.619: 98.6030% ( 1) 00:15:29.776 10.761 - 10.809: 98.6103% ( 1) 00:15:29.776 10.809 - 10.856: 98.6176% ( 1) 
00:15:29.776 10.856 - 10.904: 98.6249% ( 1) 00:15:29.776 10.904 - 10.951: 98.6322% ( 1) 00:15:29.776 10.951 - 10.999: 98.6396% ( 1) 00:15:29.776 10.999 - 11.046: 98.6469% ( 1) 00:15:29.776 11.046 - 11.093: 98.6542% ( 1) 00:15:29.776 11.093 - 11.141: 98.6615% ( 1) 00:15:29.776 11.188 - 11.236: 98.6688% ( 1) 00:15:29.776 11.473 - 11.520: 98.6761% ( 1) 00:15:29.776 11.615 - 11.662: 98.6834% ( 1) 00:15:29.776 11.852 - 11.899: 98.6908% ( 1) 00:15:29.776 11.994 - 12.041: 98.6981% ( 1) 00:15:29.776 12.136 - 12.231: 98.7054% ( 1) 00:15:29.776 12.231 - 12.326: 98.7200% ( 2) 00:15:29.776 12.326 - 12.421: 98.7273% ( 1) 00:15:29.776 12.421 - 12.516: 98.7346% ( 1) 00:15:29.776 12.516 - 12.610: 98.7420% ( 1) 00:15:29.776 12.705 - 12.800: 98.7639% ( 3) 00:15:29.776 12.990 - 13.084: 98.7712% ( 1) 00:15:29.776 13.369 - 13.464: 98.7785% ( 1) 00:15:29.776 13.464 - 13.559: 98.7932% ( 2) 00:15:29.776 13.559 - 13.653: 98.8005% ( 1) 00:15:29.776 13.748 - 13.843: 98.8078% ( 1) 00:15:29.776 13.843 - 13.938: 98.8297% ( 3) 00:15:29.776 14.033 - 14.127: 98.8444% ( 2) 00:15:29.776 14.127 - 14.222: 98.8517% ( 1) 00:15:29.776 14.222 - 14.317: 98.8590% ( 1) 00:15:29.776 14.412 - 14.507: 98.8663% ( 1) 00:15:29.776 14.507 - 14.601: 98.8736% ( 1) 00:15:29.776 14.601 - 14.696: 98.8809% ( 1) 00:15:29.776 14.791 - 14.886: 98.8882% ( 1) 00:15:29.776 14.886 - 14.981: 98.8956% ( 1) 00:15:29.776 15.360 - 15.455: 98.9029% ( 1) 00:15:29.776 15.550 - 15.644: 98.9102% ( 1) 00:15:29.776 16.213 - 16.308: 98.9175% ( 1) 00:15:29.776 17.161 - 17.256: 98.9248% ( 1) 00:15:29.776 17.256 - 17.351: 98.9321% ( 1) 00:15:29.776 17.351 - 17.446: 98.9541% ( 3) 00:15:29.776 17.446 - 17.541: 98.9760% ( 3) 00:15:29.776 17.541 - 17.636: 99.0126% ( 5) 00:15:29.776 17.636 - 17.730: 99.0345% ( 3) 00:15:29.776 17.730 - 17.825: 99.0930% ( 8) 00:15:29.776 17.825 - 17.920: 99.1442% ( 7) 00:15:29.776 17.920 - 18.015: 99.1881% ( 6) 00:15:29.776 18.015 - 18.110: 99.2320% ( 6) 00:15:29.776 18.110 - 18.204: 99.3125% ( 11) 00:15:29.776 18.204 - 18.299: 99.4002% ( 12) 00:15:29.776 18.299 - 18.394: 99.5026% ( 14) 00:15:29.776 18.394 - 18.489: 99.5392% ( 5) 00:15:29.776 18.489 - 18.584: 99.5977% ( 8) 00:15:29.776 18.584 - 18.679: 99.6050% ( 1) 00:15:29.776 18.679 - 18.773: 99.6416% ( 5) 00:15:29.776 18.773 - 18.868: 99.7074% ( 9) 00:15:29.776 18.868 - 18.963: 99.7367% ( 4) 00:15:29.776 18.963 - 19.058: 99.7440% ( 1) 00:15:29.776 19.058 - 19.153: 99.7513% ( 1) 00:15:29.776 19.247 - 19.342: 99.7659% ( 2) 00:15:29.776 19.437 - 19.532: 99.7733% ( 1) 00:15:29.776 19.532 - 19.627: 99.7806% ( 1) 00:15:29.776 19.627 - 19.721: 99.7952% ( 2) 00:15:29.776 19.721 - 19.816: 99.8025% ( 1) 00:15:29.776 19.816 - 19.911: 99.8098% ( 1) 00:15:29.776 19.911 - 20.006: 99.8171% ( 1) 00:15:29.776 20.006 - 20.101: 99.8391% ( 3) 00:15:29.776 20.385 - 20.480: 99.8464% ( 1) 00:15:29.776 20.575 - 20.670: 99.8537% ( 1) 00:15:29.776 20.859 - 20.954: 99.8610% ( 1) 00:15:29.776 21.144 - 21.239: 99.8683% ( 1) 00:15:29.776 24.652 - 24.841: 99.8757% ( 1) 00:15:29.776 29.013 - 29.203: 99.8830% ( 1) 00:15:29.776 33.944 - 34.133: 99.8903% ( 1) 00:15:29.776 3980.705 - 4004.978: 99.9634% ( 10) 00:15:29.776 4004.978 - 4029.250: 100.0000% ( 5) 00:15:29.776 00:15:29.776 Complete histogram 00:15:29.776 ================== 00:15:29.776 Range in us Cumulative Count 00:15:29.776 2.039 - 2.050: 1.9675% ( 269) 00:15:29.776 2.050 - 2.062: 18.0881% ( 2204) 00:15:29.776 2.062 - 2.074: 26.2215% ( 1112) 00:15:29.776 2.074 - 2.086: 36.2785% ( 1375) 00:15:29.776 2.086 - 2.098: 55.0322% ( 2564) 00:15:29.776 2.098 - 2.110: 
60.0644% ( 688) 00:15:29.776 2.110 - 2.121: 63.4801% ( 467) 00:15:29.776 2.121 - 2.133: 68.1612% ( 640) 00:15:29.776 2.133 - 2.145: 70.4286% ( 310) 00:15:29.776 2.145 - 2.157: 75.1609% ( 647) 00:15:29.776 2.157 - 2.169: 80.6246% ( 747) 00:15:29.776 2.169 - 2.181: 82.3435% ( 235) 00:15:29.776 2.181 - 2.193: 84.2305% ( 258) 00:15:29.776 2.193 - 2.204: 86.4906% ( 309) 00:15:29.776 2.204 - 2.216: 87.6024% ( 152) 00:15:29.776 2.216 - 2.228: 90.2794% ( 366) 00:15:29.776 2.228 - 2.240: 92.7004% ( 331) 00:15:29.776 2.240 - 2.252: 93.6220% ( 126) 00:15:29.776 2.252 - 2.264: 94.0023% ( 52) 00:15:29.776 2.264 - 2.276: 94.3534% ( 48) 00:15:29.776 2.276 - 2.287: 94.5509% ( 27) 00:15:29.776 2.287 - 2.299: 94.9605% ( 56) 00:15:29.776 2.299 - 2.311: 95.2458% ( 39) 00:15:29.776 2.311 - 2.323: 95.4213% ( 24) 00:15:29.776 2.323 - 2.335: 95.4798% ( 8) 00:15:29.776 2.335 - 2.347: 95.6992% ( 30) 00:15:29.776 2.347 - 2.359: 95.9626% ( 36) 00:15:29.776 2.359 - 2.370: 96.2697% ( 42) 00:15:29.776 2.370 - 2.382: 96.6501% ( 52) 00:15:29.776 2.382 - 2.394: 96.9427% ( 40) 00:15:29.776 2.394 - 2.406: 97.2718% ( 45) 00:15:29.776 2.406 - 2.418: 97.4108% ( 19) 00:15:29.776 2.418 - 2.430: 97.5351% ( 17) 00:15:29.776 2.430 - 2.441: 97.6083% ( 10) 00:15:29.776 2.441 - 2.453: 97.7106% ( 14) 00:15:29.776 2.453 - 2.465: 97.8642% ( 21) 00:15:29.776 2.465 - 2.477: 97.9447% ( 11) 00:15:29.776 2.477 - 2.489: 97.9886% ( 6) 00:15:29.776 2.489 - 2.501: 98.0178% ( 4) 00:15:29.776 2.501 - 2.513: 98.0325% ( 2) 00:15:29.776 2.513 - 2.524: 98.0398% ( 1) 00:15:29.776 2.524 - 2.536: 98.0471% ( 1) 00:15:29.776 2.536 - 2.548: 98.0544% ( 1) 00:15:29.776 2.548 - 2.560: 98.0690% ( 2) 00:15:29.776 2.560 - 2.572: 98.0837% ( 2) 00:15:29.776 2.584 - 2.596: 98.0910% ( 1) 00:15:29.776 2.596 - 2.607: 98.1056% ( 2) 00:15:29.776 2.607 - 2.619: 98.1129% ( 1) 00:15:29.776 2.619 - 2.631: 98.1202% ( 1) 00:15:29.776 2.631 - 2.643: 98.1349% ( 2) 00:15:29.776 2.643 - 2.655: 98.1495% ( 2) 00:15:29.776 2.667 - 2.679: 98.1568% ( 1) 00:15:29.776 2.702 - 2.714: 98.1641% ( 1) 00:15:29.776 2.750 - 2.761: 98.1714% ( 1) 00:15:29.776 2.785 - 2.797: 98.1788% ( 1) 00:15:29.776 2.797 - 2.809: 98.1861% ( 1) 00:15:29.776 2.809 - 2.821: 98.2007% ( 2) 00:15:29.776 2.868 - 2.880: 98.2080% ( 1) 00:15:29.776 2.880 - 2.892: 98.2153% ( 1) 00:15:29.776 2.892 - 2.904: 98.2226% ( 1) 00:15:29.776 2.939 - 2.951: 98.2300% ( 1) 00:15:29.776 2.975 - 2.987: 98.2446% ( 2) 00:15:29.776 3.022 - 3.034: 98.2519% ( 1) 00:15:29.776 3.034 - 3.058: 98.2592% ( 1) 00:15:29.776 3.058 - 3.081: 98.2738% ( 2) 00:15:29.776 3.081 - 3.105: 98.2812% ( 1) 00:15:29.776 3.105 - 3.129: 98.2958% ( 2) 00:15:29.776 3.129 - 3.153: 98.3177% ( 3) 00:15:29.776 3.176 - 3.200: 98.3324% ( 2) 00:15:29.776 3.224 - 3.247: 98.3543% ( 3) 00:15:29.776 3.247 - 3.271: 98.3762% ( 3) 00:15:29.776 3.271 - 3.295: 98.3909% ( 2) 00:15:29.776 3.295 - 3.319: 98.4201% ( 4) 00:15:29.776 3.319 - 3.342: 98.4494% ( 4) 00:15:29.776 3.342 - 3.366: 98.4567% ( 1) 00:15:29.776 3.390 - 3.413: 98.4786% ( 3) 00:15:29.776 3.413 - 3.437: 98.4933% ( 2) 00:15:29.776 3.461 - 3.484: 98.5079% ( 2) 00:15:29.776 3.508 - 3.532: 98.5225% ( 2) 00:15:29.776 3.556 - 3.579: 98.5298% ( 1) 00:15:29.776 3.579 - 3.603: 98.5372% ( 1) 00:15:29.776 3.627 - 3.650: 98.5445% ( 1) 00:15:29.776 3.674 - 3.698: 98.5591% ( 2) 00:15:29.776 3.698 - 3.721: 98.5664% ( 1) 00:15:29.776 3.745 - 3.769: 98.5737% ( 1) 00:15:29.776 3.769 - 3.793: 98.5810% ( 1) 00:15:29.776 3.935 - 3.959: 98.5884% ( 1) 00:15:29.776 4.053 - 4.077: 98.5957% ( 1) 00:15:29.776 4.409 - 4.433: 98.6030% ( 1) 
00:15:29.776 5.476 - 5.499: 98.6103% ( 1) 00:15:29.776 5.784 - 5.807: 98.6176% ( 1) 00:15:29.776 5.855 - 5.879: 98.6249% ( 1) 00:15:29.776 5.902 - 5.926: 98.6322% ( 1) 00:15:29.776 6.116 - 6.163: 98.6396% ( 1) 00:15:29.776 6.305 - 6.353: 98.6469% ( 1) 00:15:29.776 6.353 - 6.400: 98.6542% ( 1) 00:15:29.776 6.637 - 6.684: 98.6615% ( 1) 00:15:29.776 6.969 - 7.016: 98.6688% ( 1) 00:15:29.776 7.159 - 7.206: 98.6761% ( 1) 00:15:29.776 7.206 - 7.253: 98.6834% ( 1) 00:15:29.776 7.253 - 7.301: 98.6908% ( 1) 00:15:29.776 8.581 - 8.628: 98.7054% ( 2) 00:15:29.776 12.326 - 12.421: 98.7127% ( 1) 00:15:29.776 13.369 - 13.464: 98.7200% ( 1) 00:15:29.776 15.455 - 15.550: 98.7273% ( 1) 00:15:29.776 15.550 - 15.644: 98.7566% ( 4) 00:15:29.776 15.644 - 15.739: 98.7712% ( 2) 00:15:29.776 15.739 - 15.834: 98.8005% ( 4) 00:15:29.776 15.834 - 15.929: 98.8151% ( 2) 00:15:29.776 15.929 - 16.024: 98.8590% ( 6) 00:15:29.776 16.024 - 16.119: 98.8663% ( 1) 00:15:29.776 16.119 - 16.213: 98.8956% ( 4) 00:15:29.776 16.213 - 16.308: 98.9321% ( 5) 00:15:29.777 16.308 - 16.403: 98.9760% ( 6) 00:15:29.777 16.403 - 16.498: 98.9833% ( 1) 00:15:29.777 16.498 - 16.593: 99.0126% ( 4) 00:15:29.777 16.593 - 16.687: 99.0857% ( 10) 00:15:29.777 16.687 - 16.782: 99.1369% ( 7) 00:15:29.777 16.782 - 16.877: 99.1954% ( 8) 00:15:29.777 16.877 - 16.972: 99.2247% ( 4) 00:15:29.777 16.972 - 17.067: 99.2539% ( 4) 00:15:29.777 17.067 - 17.161: 99.2905% ( 5) 00:15:29.777 17.161 - 17.256: 99.3271% ( 5) 00:15:29.777 17.446 - 17.541: 99.3637% ( 5) 00:15:29.777 17.920 - 18.015: 99.3710% ( 1) 00:15:29.777 18.015 - 18.110: 99.3783% ( 1) 00:15:29.777 18.110 - 18.204: 99.3856% ( 1) 00:15:29.777 18.299 - 18.394: 99.3929% ( 1) 00:15:29.777 18.868 - 18.963: 99.4002% ( 1) 00:15:29.777 18.963 - 19.058: 99.4075% ( 1) 00:15:29.777 19.532 - 19.627: 99.4149% ( 1) 00:15:29.777 25.790 - 25.979: 99.4222% ( 1) 00:15:29.777 26.359 - 26.548: 99.4295% ( 1) 00:15:29.777 3980.705 - 4004.978: 99.8391% ( 56) 00:15:29.777 4004.978 - 4029.250: 100.0000% ( 22) 00:15:29.777 00:15:29.777 03:47:48 -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:15:30.035 03:47:48 -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:15:30.035 03:47:48 -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:15:30.035 03:47:48 -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:15:30.035 03:47:48 -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:30.035 [2024-07-14 03:47:48.965963] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:15:30.035 [ 00:15:30.035 { 00:15:30.035 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:30.035 "subtype": "Discovery", 00:15:30.035 "listen_addresses": [], 00:15:30.035 "allow_any_host": true, 00:15:30.035 "hosts": [] 00:15:30.035 }, 00:15:30.035 { 00:15:30.035 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:30.035 "subtype": "NVMe", 00:15:30.035 "listen_addresses": [ 00:15:30.035 { 00:15:30.035 "transport": "VFIOUSER", 00:15:30.035 "trtype": "VFIOUSER", 00:15:30.035 "adrfam": "IPv4", 00:15:30.035 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:30.035 "trsvcid": "0" 00:15:30.035 } 00:15:30.035 ], 00:15:30.035 "allow_any_host": true, 00:15:30.035 "hosts": [], 00:15:30.035 "serial_number": "SPDK1", 00:15:30.035 
"model_number": "SPDK bdev Controller", 00:15:30.035 "max_namespaces": 32, 00:15:30.035 "min_cntlid": 1, 00:15:30.035 "max_cntlid": 65519, 00:15:30.035 "namespaces": [ 00:15:30.035 { 00:15:30.035 "nsid": 1, 00:15:30.035 "bdev_name": "Malloc1", 00:15:30.035 "name": "Malloc1", 00:15:30.035 "nguid": "E028DDF3619B4B1DAB5839D160320DC5", 00:15:30.035 "uuid": "e028ddf3-619b-4b1d-ab58-39d160320dc5" 00:15:30.035 } 00:15:30.035 ] 00:15:30.035 }, 00:15:30.035 { 00:15:30.035 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:30.035 "subtype": "NVMe", 00:15:30.035 "listen_addresses": [ 00:15:30.035 { 00:15:30.035 "transport": "VFIOUSER", 00:15:30.035 "trtype": "VFIOUSER", 00:15:30.035 "adrfam": "IPv4", 00:15:30.035 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:30.035 "trsvcid": "0" 00:15:30.035 } 00:15:30.035 ], 00:15:30.035 "allow_any_host": true, 00:15:30.035 "hosts": [], 00:15:30.035 "serial_number": "SPDK2", 00:15:30.035 "model_number": "SPDK bdev Controller", 00:15:30.035 "max_namespaces": 32, 00:15:30.035 "min_cntlid": 1, 00:15:30.035 "max_cntlid": 65519, 00:15:30.035 "namespaces": [ 00:15:30.035 { 00:15:30.035 "nsid": 1, 00:15:30.035 "bdev_name": "Malloc2", 00:15:30.035 "name": "Malloc2", 00:15:30.035 "nguid": "AB1E232E0660446498BACF2C10077257", 00:15:30.035 "uuid": "ab1e232e-0660-4464-98ba-cf2c10077257" 00:15:30.035 } 00:15:30.035 ] 00:15:30.035 } 00:15:30.035 ] 00:15:30.293 03:47:48 -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:15:30.293 03:47:48 -- target/nvmf_vfio_user.sh@34 -- # aerpid=2356304 00:15:30.293 03:47:48 -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:15:30.293 03:47:48 -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:15:30.293 03:47:48 -- common/autotest_common.sh@1244 -- # local i=0 00:15:30.293 03:47:48 -- common/autotest_common.sh@1245 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:30.293 03:47:48 -- common/autotest_common.sh@1251 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:30.293 03:47:48 -- common/autotest_common.sh@1255 -- # return 0 00:15:30.293 03:47:48 -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:15:30.294 03:47:48 -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:15:30.294 EAL: No free 2048 kB hugepages reported on node 1 00:15:30.553 Malloc3 00:15:30.553 03:47:49 -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:15:30.844 03:47:49 -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:30.844 Asynchronous Event Request test 00:15:30.844 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:30.844 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:30.844 Registering asynchronous event callbacks... 00:15:30.844 Starting namespace attribute notice tests for all controllers... 00:15:30.844 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:15:30.844 aer_cb - Changed Namespace 00:15:30.844 Cleaning up... 
00:15:30.844 [ 00:15:30.844 { 00:15:30.844 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:30.844 "subtype": "Discovery", 00:15:30.844 "listen_addresses": [], 00:15:30.844 "allow_any_host": true, 00:15:30.844 "hosts": [] 00:15:30.844 }, 00:15:30.844 { 00:15:30.844 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:30.844 "subtype": "NVMe", 00:15:30.844 "listen_addresses": [ 00:15:30.844 { 00:15:30.844 "transport": "VFIOUSER", 00:15:30.844 "trtype": "VFIOUSER", 00:15:30.844 "adrfam": "IPv4", 00:15:30.844 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:30.844 "trsvcid": "0" 00:15:30.844 } 00:15:30.844 ], 00:15:30.844 "allow_any_host": true, 00:15:30.844 "hosts": [], 00:15:30.844 "serial_number": "SPDK1", 00:15:30.844 "model_number": "SPDK bdev Controller", 00:15:30.844 "max_namespaces": 32, 00:15:30.844 "min_cntlid": 1, 00:15:30.844 "max_cntlid": 65519, 00:15:30.844 "namespaces": [ 00:15:30.844 { 00:15:30.844 "nsid": 1, 00:15:30.845 "bdev_name": "Malloc1", 00:15:30.845 "name": "Malloc1", 00:15:30.845 "nguid": "E028DDF3619B4B1DAB5839D160320DC5", 00:15:30.845 "uuid": "e028ddf3-619b-4b1d-ab58-39d160320dc5" 00:15:30.845 }, 00:15:30.845 { 00:15:30.845 "nsid": 2, 00:15:30.845 "bdev_name": "Malloc3", 00:15:30.845 "name": "Malloc3", 00:15:30.845 "nguid": "D3320615D1DE4FFAACF1AE3AA50407EE", 00:15:30.845 "uuid": "d3320615-d1de-4ffa-acf1-ae3aa50407ee" 00:15:30.845 } 00:15:30.845 ] 00:15:30.845 }, 00:15:30.845 { 00:15:30.845 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:30.845 "subtype": "NVMe", 00:15:30.845 "listen_addresses": [ 00:15:30.845 { 00:15:30.845 "transport": "VFIOUSER", 00:15:30.845 "trtype": "VFIOUSER", 00:15:30.845 "adrfam": "IPv4", 00:15:30.845 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:30.845 "trsvcid": "0" 00:15:30.845 } 00:15:30.845 ], 00:15:30.845 "allow_any_host": true, 00:15:30.845 "hosts": [], 00:15:30.845 "serial_number": "SPDK2", 00:15:30.845 "model_number": "SPDK bdev Controller", 00:15:30.845 "max_namespaces": 32, 00:15:30.845 "min_cntlid": 1, 00:15:30.845 "max_cntlid": 65519, 00:15:30.845 "namespaces": [ 00:15:30.845 { 00:15:30.845 "nsid": 1, 00:15:30.845 "bdev_name": "Malloc2", 00:15:30.845 "name": "Malloc2", 00:15:30.845 "nguid": "AB1E232E0660446498BACF2C10077257", 00:15:30.845 "uuid": "ab1e232e-0660-4464-98ba-cf2c10077257" 00:15:30.845 } 00:15:30.845 ] 00:15:30.845 } 00:15:30.845 ] 00:15:30.845 03:47:49 -- target/nvmf_vfio_user.sh@44 -- # wait 2356304 00:15:30.845 03:47:49 -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:30.845 03:47:49 -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:15:30.845 03:47:49 -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:15:30.845 03:47:49 -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:15:30.845 [2024-07-14 03:47:49.750486] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:15:30.845 [2024-07-14 03:47:49.750530] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2356428 ] 00:15:30.845 EAL: No free 2048 kB hugepages reported on node 1 00:15:31.106 [2024-07-14 03:47:49.787067] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:15:31.106 [2024-07-14 03:47:49.795175] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:31.106 [2024-07-14 03:47:49.795205] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f6fa1f6c000 00:15:31.106 [2024-07-14 03:47:49.796194] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:31.106 [2024-07-14 03:47:49.797197] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:31.106 [2024-07-14 03:47:49.798204] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:31.106 [2024-07-14 03:47:49.799223] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:31.106 [2024-07-14 03:47:49.800225] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:31.106 [2024-07-14 03:47:49.801238] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:31.106 [2024-07-14 03:47:49.802262] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:31.106 [2024-07-14 03:47:49.803254] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:31.106 [2024-07-14 03:47:49.804260] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:31.106 [2024-07-14 03:47:49.804281] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f6fa0d20000 00:15:31.106 [2024-07-14 03:47:49.805394] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:31.106 [2024-07-14 03:47:49.822211] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:15:31.106 [2024-07-14 03:47:49.822262] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:15:31.106 [2024-07-14 03:47:49.824346] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:15:31.106 [2024-07-14 03:47:49.824401] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:15:31.106 [2024-07-14 03:47:49.824491] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq 
(no timeout) 00:15:31.106 [2024-07-14 03:47:49.824517] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:15:31.106 [2024-07-14 03:47:49.824527] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:15:31.106 [2024-07-14 03:47:49.825350] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:15:31.106 [2024-07-14 03:47:49.825372] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:15:31.106 [2024-07-14 03:47:49.825384] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:15:31.106 [2024-07-14 03:47:49.826358] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:15:31.106 [2024-07-14 03:47:49.826378] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:15:31.106 [2024-07-14 03:47:49.826397] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:15:31.106 [2024-07-14 03:47:49.827368] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:15:31.106 [2024-07-14 03:47:49.827388] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:15:31.106 [2024-07-14 03:47:49.828367] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:15:31.106 [2024-07-14 03:47:49.828387] nvme_ctrlr.c:3737:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:15:31.106 [2024-07-14 03:47:49.828396] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:15:31.106 [2024-07-14 03:47:49.828407] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:15:31.106 [2024-07-14 03:47:49.828516] nvme_ctrlr.c:3930:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:15:31.106 [2024-07-14 03:47:49.828525] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:15:31.106 [2024-07-14 03:47:49.828533] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:15:31.106 [2024-07-14 03:47:49.829376] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:15:31.106 [2024-07-14 03:47:49.830380] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:15:31.106 [2024-07-14 03:47:49.831384] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr 
/var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:15:31.106 [2024-07-14 03:47:49.832410] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:15:31.106 [2024-07-14 03:47:49.833395] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:15:31.106 [2024-07-14 03:47:49.833415] nvme_ctrlr.c:3772:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:15:31.106 [2024-07-14 03:47:49.833424] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:15:31.106 [2024-07-14 03:47:49.833448] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:15:31.106 [2024-07-14 03:47:49.833463] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:15:31.106 [2024-07-14 03:47:49.833482] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:31.106 [2024-07-14 03:47:49.833492] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:31.106 [2024-07-14 03:47:49.833511] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:31.106 [2024-07-14 03:47:49.839881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:15:31.106 [2024-07-14 03:47:49.839906] nvme_ctrlr.c:1972:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:15:31.106 [2024-07-14 03:47:49.839920] nvme_ctrlr.c:1976:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:15:31.106 [2024-07-14 03:47:49.839928] nvme_ctrlr.c:1979:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:15:31.106 [2024-07-14 03:47:49.839936] nvme_ctrlr.c:1990:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:15:31.106 [2024-07-14 03:47:49.839945] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:15:31.106 [2024-07-14 03:47:49.839953] nvme_ctrlr.c:2018:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:15:31.106 [2024-07-14 03:47:49.839961] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:15:31.106 [2024-07-14 03:47:49.839979] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:15:31.106 [2024-07-14 03:47:49.839997] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:15:31.106 [2024-07-14 03:47:49.847877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:15:31.106 [2024-07-14 
03:47:49.847905] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:31.106 [2024-07-14 03:47:49.847919] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:31.106 [2024-07-14 03:47:49.847932] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:31.106 [2024-07-14 03:47:49.847944] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:31.106 [2024-07-14 03:47:49.847953] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:15:31.106 [2024-07-14 03:47:49.847968] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:15:31.106 [2024-07-14 03:47:49.847983] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:15:31.106 [2024-07-14 03:47:49.855878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:15:31.106 [2024-07-14 03:47:49.855897] nvme_ctrlr.c:2878:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:15:31.106 [2024-07-14 03:47:49.855907] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:15:31.106 [2024-07-14 03:47:49.855918] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:15:31.106 [2024-07-14 03:47:49.855932] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:15:31.106 [2024-07-14 03:47:49.855947] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:31.106 [2024-07-14 03:47:49.863882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:15:31.106 [2024-07-14 03:47:49.863954] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:15:31.106 [2024-07-14 03:47:49.863969] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:15:31.106 [2024-07-14 03:47:49.863986] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:15:31.106 [2024-07-14 03:47:49.863995] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:15:31.106 [2024-07-14 03:47:49.864005] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:15:31.106 [2024-07-14 03:47:49.871878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 
00:15:31.106 [2024-07-14 03:47:49.871907] nvme_ctrlr.c:4556:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:15:31.106 [2024-07-14 03:47:49.871926] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:15:31.106 [2024-07-14 03:47:49.871940] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:15:31.106 [2024-07-14 03:47:49.871953] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:31.106 [2024-07-14 03:47:49.871961] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:31.107 [2024-07-14 03:47:49.871971] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:31.107 [2024-07-14 03:47:49.879879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:15:31.107 [2024-07-14 03:47:49.879907] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:15:31.107 [2024-07-14 03:47:49.879923] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:15:31.107 [2024-07-14 03:47:49.879936] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:31.107 [2024-07-14 03:47:49.879944] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:31.107 [2024-07-14 03:47:49.879954] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:31.107 [2024-07-14 03:47:49.887893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:15:31.107 [2024-07-14 03:47:49.887915] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:15:31.107 [2024-07-14 03:47:49.887928] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:15:31.107 [2024-07-14 03:47:49.887943] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:15:31.107 [2024-07-14 03:47:49.887954] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:15:31.107 [2024-07-14 03:47:49.887962] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:15:31.107 [2024-07-14 03:47:49.887971] nvme_ctrlr.c:2978:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:15:31.107 [2024-07-14 03:47:49.887979] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:15:31.107 [2024-07-14 
03:47:49.887987] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:15:31.107 [2024-07-14 03:47:49.888012] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:15:31.107 [2024-07-14 03:47:49.895894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:15:31.107 [2024-07-14 03:47:49.895920] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:15:31.107 [2024-07-14 03:47:49.903875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:15:31.107 [2024-07-14 03:47:49.903899] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:15:31.107 [2024-07-14 03:47:49.911879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:15:31.107 [2024-07-14 03:47:49.911903] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:31.107 [2024-07-14 03:47:49.919890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:15:31.107 [2024-07-14 03:47:49.919918] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:15:31.107 [2024-07-14 03:47:49.919929] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:15:31.107 [2024-07-14 03:47:49.919935] nvme_pcie_common.c:1235:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:15:31.107 [2024-07-14 03:47:49.919941] nvme_pcie_common.c:1251:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:15:31.107 [2024-07-14 03:47:49.919951] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:15:31.107 [2024-07-14 03:47:49.919963] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:15:31.107 [2024-07-14 03:47:49.919971] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:15:31.107 [2024-07-14 03:47:49.919980] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:15:31.107 [2024-07-14 03:47:49.919990] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:15:31.107 [2024-07-14 03:47:49.919998] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:31.107 [2024-07-14 03:47:49.920007] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:31.107 [2024-07-14 03:47:49.920019] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:15:31.107 [2024-07-14 03:47:49.920027] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:15:31.107 [2024-07-14 03:47:49.920036] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG 
PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:15:31.107 [2024-07-14 03:47:49.927892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:15:31.107 [2024-07-14 03:47:49.927923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:15:31.107 [2024-07-14 03:47:49.927938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:15:31.107 [2024-07-14 03:47:49.927950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:15:31.107 ===================================================== 00:15:31.107 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:31.107 ===================================================== 00:15:31.107 Controller Capabilities/Features 00:15:31.107 ================================ 00:15:31.107 Vendor ID: 4e58 00:15:31.107 Subsystem Vendor ID: 4e58 00:15:31.107 Serial Number: SPDK2 00:15:31.107 Model Number: SPDK bdev Controller 00:15:31.107 Firmware Version: 24.01.1 00:15:31.107 Recommended Arb Burst: 6 00:15:31.107 IEEE OUI Identifier: 8d 6b 50 00:15:31.107 Multi-path I/O 00:15:31.107 May have multiple subsystem ports: Yes 00:15:31.107 May have multiple controllers: Yes 00:15:31.107 Associated with SR-IOV VF: No 00:15:31.107 Max Data Transfer Size: 131072 00:15:31.107 Max Number of Namespaces: 32 00:15:31.107 Max Number of I/O Queues: 127 00:15:31.107 NVMe Specification Version (VS): 1.3 00:15:31.107 NVMe Specification Version (Identify): 1.3 00:15:31.107 Maximum Queue Entries: 256 00:15:31.107 Contiguous Queues Required: Yes 00:15:31.107 Arbitration Mechanisms Supported 00:15:31.107 Weighted Round Robin: Not Supported 00:15:31.107 Vendor Specific: Not Supported 00:15:31.107 Reset Timeout: 15000 ms 00:15:31.107 Doorbell Stride: 4 bytes 00:15:31.107 NVM Subsystem Reset: Not Supported 00:15:31.107 Command Sets Supported 00:15:31.107 NVM Command Set: Supported 00:15:31.107 Boot Partition: Not Supported 00:15:31.107 Memory Page Size Minimum: 4096 bytes 00:15:31.107 Memory Page Size Maximum: 4096 bytes 00:15:31.107 Persistent Memory Region: Not Supported 00:15:31.107 Optional Asynchronous Events Supported 00:15:31.107 Namespace Attribute Notices: Supported 00:15:31.107 Firmware Activation Notices: Not Supported 00:15:31.107 ANA Change Notices: Not Supported 00:15:31.107 PLE Aggregate Log Change Notices: Not Supported 00:15:31.107 LBA Status Info Alert Notices: Not Supported 00:15:31.107 EGE Aggregate Log Change Notices: Not Supported 00:15:31.107 Normal NVM Subsystem Shutdown event: Not Supported 00:15:31.107 Zone Descriptor Change Notices: Not Supported 00:15:31.107 Discovery Log Change Notices: Not Supported 00:15:31.107 Controller Attributes 00:15:31.107 128-bit Host Identifier: Supported 00:15:31.107 Non-Operational Permissive Mode: Not Supported 00:15:31.107 NVM Sets: Not Supported 00:15:31.107 Read Recovery Levels: Not Supported 00:15:31.107 Endurance Groups: Not Supported 00:15:31.107 Predictable Latency Mode: Not Supported 00:15:31.107 Traffic Based Keep ALive: Not Supported 00:15:31.107 Namespace Granularity: Not Supported 00:15:31.107 SQ Associations: Not Supported 00:15:31.107 UUID List: Not Supported 00:15:31.107 Multi-Domain Subsystem: Not Supported 00:15:31.107 Fixed Capacity Management: Not Supported 
00:15:31.107 Variable Capacity Management: Not Supported 00:15:31.107 Delete Endurance Group: Not Supported 00:15:31.107 Delete NVM Set: Not Supported 00:15:31.107 Extended LBA Formats Supported: Not Supported 00:15:31.107 Flexible Data Placement Supported: Not Supported 00:15:31.107 00:15:31.107 Controller Memory Buffer Support 00:15:31.107 ================================ 00:15:31.107 Supported: No 00:15:31.107 00:15:31.107 Persistent Memory Region Support 00:15:31.107 ================================ 00:15:31.107 Supported: No 00:15:31.107 00:15:31.107 Admin Command Set Attributes 00:15:31.107 ============================ 00:15:31.107 Security Send/Receive: Not Supported 00:15:31.107 Format NVM: Not Supported 00:15:31.107 Firmware Activate/Download: Not Supported 00:15:31.107 Namespace Management: Not Supported 00:15:31.107 Device Self-Test: Not Supported 00:15:31.107 Directives: Not Supported 00:15:31.107 NVMe-MI: Not Supported 00:15:31.107 Virtualization Management: Not Supported 00:15:31.107 Doorbell Buffer Config: Not Supported 00:15:31.107 Get LBA Status Capability: Not Supported 00:15:31.107 Command & Feature Lockdown Capability: Not Supported 00:15:31.107 Abort Command Limit: 4 00:15:31.107 Async Event Request Limit: 4 00:15:31.107 Number of Firmware Slots: N/A 00:15:31.107 Firmware Slot 1 Read-Only: N/A 00:15:31.107 Firmware Activation Without Reset: N/A 00:15:31.107 Multiple Update Detection Support: N/A 00:15:31.107 Firmware Update Granularity: No Information Provided 00:15:31.107 Per-Namespace SMART Log: No 00:15:31.107 Asymmetric Namespace Access Log Page: Not Supported 00:15:31.107 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:15:31.107 Command Effects Log Page: Supported 00:15:31.107 Get Log Page Extended Data: Supported 00:15:31.107 Telemetry Log Pages: Not Supported 00:15:31.107 Persistent Event Log Pages: Not Supported 00:15:31.107 Supported Log Pages Log Page: May Support 00:15:31.107 Commands Supported & Effects Log Page: Not Supported 00:15:31.107 Feature Identifiers & Effects Log Page:May Support 00:15:31.107 NVMe-MI Commands & Effects Log Page: May Support 00:15:31.108 Data Area 4 for Telemetry Log: Not Supported 00:15:31.108 Error Log Page Entries Supported: 128 00:15:31.108 Keep Alive: Supported 00:15:31.108 Keep Alive Granularity: 10000 ms 00:15:31.108 00:15:31.108 NVM Command Set Attributes 00:15:31.108 ========================== 00:15:31.108 Submission Queue Entry Size 00:15:31.108 Max: 64 00:15:31.108 Min: 64 00:15:31.108 Completion Queue Entry Size 00:15:31.108 Max: 16 00:15:31.108 Min: 16 00:15:31.108 Number of Namespaces: 32 00:15:31.108 Compare Command: Supported 00:15:31.108 Write Uncorrectable Command: Not Supported 00:15:31.108 Dataset Management Command: Supported 00:15:31.108 Write Zeroes Command: Supported 00:15:31.108 Set Features Save Field: Not Supported 00:15:31.108 Reservations: Not Supported 00:15:31.108 Timestamp: Not Supported 00:15:31.108 Copy: Supported 00:15:31.108 Volatile Write Cache: Present 00:15:31.108 Atomic Write Unit (Normal): 1 00:15:31.108 Atomic Write Unit (PFail): 1 00:15:31.108 Atomic Compare & Write Unit: 1 00:15:31.108 Fused Compare & Write: Supported 00:15:31.108 Scatter-Gather List 00:15:31.108 SGL Command Set: Supported (Dword aligned) 00:15:31.108 SGL Keyed: Not Supported 00:15:31.108 SGL Bit Bucket Descriptor: Not Supported 00:15:31.108 SGL Metadata Pointer: Not Supported 00:15:31.108 Oversized SGL: Not Supported 00:15:31.108 SGL Metadata Address: Not Supported 00:15:31.108 SGL Offset: Not Supported 00:15:31.108 
Transport SGL Data Block: Not Supported 00:15:31.108 Replay Protected Memory Block: Not Supported 00:15:31.108 00:15:31.108 Firmware Slot Information 00:15:31.108 ========================= 00:15:31.108 Active slot: 1 00:15:31.108 Slot 1 Firmware Revision: 24.01.1 00:15:31.108 00:15:31.108 00:15:31.108 Commands Supported and Effects 00:15:31.108 ============================== 00:15:31.108 Admin Commands 00:15:31.108 -------------- 00:15:31.108 Get Log Page (02h): Supported 00:15:31.108 Identify (06h): Supported 00:15:31.108 Abort (08h): Supported 00:15:31.108 Set Features (09h): Supported 00:15:31.108 Get Features (0Ah): Supported 00:15:31.108 Asynchronous Event Request (0Ch): Supported 00:15:31.108 Keep Alive (18h): Supported 00:15:31.108 I/O Commands 00:15:31.108 ------------ 00:15:31.108 Flush (00h): Supported LBA-Change 00:15:31.108 Write (01h): Supported LBA-Change 00:15:31.108 Read (02h): Supported 00:15:31.108 Compare (05h): Supported 00:15:31.108 Write Zeroes (08h): Supported LBA-Change 00:15:31.108 Dataset Management (09h): Supported LBA-Change 00:15:31.108 Copy (19h): Supported LBA-Change 00:15:31.108 Unknown (79h): Supported LBA-Change 00:15:31.108 Unknown (7Ah): Supported 00:15:31.108 00:15:31.108 Error Log 00:15:31.108 ========= 00:15:31.108 00:15:31.108 Arbitration 00:15:31.108 =========== 00:15:31.108 Arbitration Burst: 1 00:15:31.108 00:15:31.108 Power Management 00:15:31.108 ================ 00:15:31.108 Number of Power States: 1 00:15:31.108 Current Power State: Power State #0 00:15:31.108 Power State #0: 00:15:31.108 Max Power: 0.00 W 00:15:31.108 Non-Operational State: Operational 00:15:31.108 Entry Latency: Not Reported 00:15:31.108 Exit Latency: Not Reported 00:15:31.108 Relative Read Throughput: 0 00:15:31.108 Relative Read Latency: 0 00:15:31.108 Relative Write Throughput: 0 00:15:31.108 Relative Write Latency: 0 00:15:31.108 Idle Power: Not Reported 00:15:31.108 Active Power: Not Reported 00:15:31.108 Non-Operational Permissive Mode: Not Supported 00:15:31.108 00:15:31.108 Health Information 00:15:31.108 ================== 00:15:31.108 Critical Warnings: 00:15:31.108 Available Spare Space: OK 00:15:31.108 Temperature: OK 00:15:31.108 Device Reliability: OK 00:15:31.108 Read Only: No 00:15:31.108 Volatile Memory Backup: OK 00:15:31.108 Current Temperature: 0 Kelvin[2024-07-14 03:47:49.928069] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:15:31.108 [2024-07-14 03:47:49.935894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:15:31.108 [2024-07-14 03:47:49.935938] nvme_ctrlr.c:4220:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:15:31.108 [2024-07-14 03:47:49.935960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.108 [2024-07-14 03:47:49.935972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.108 [2024-07-14 03:47:49.935982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.108 [2024-07-14 03:47:49.935992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.108 [2024-07-14 03:47:49.939891] 
nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:15:31.108 [2024-07-14 03:47:49.939913] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:15:31.108 [2024-07-14 03:47:49.940146] nvme_ctrlr.c:1070:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:15:31.108 [2024-07-14 03:47:49.940161] nvme_ctrlr.c:1073:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:15:31.108 [2024-07-14 03:47:49.941115] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:15:31.108 [2024-07-14 03:47:49.941139] nvme_ctrlr.c:1192:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:15:31.108 [2024-07-14 03:47:49.941203] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:15:31.108 [2024-07-14 03:47:49.942529] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:31.108 (-273 Celsius) 00:15:31.108 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:15:31.108 Available Spare: 0% 00:15:31.108 Available Spare Threshold: 0% 00:15:31.108 Life Percentage Used: 0% 00:15:31.108 Data Units Read: 0 00:15:31.108 Data Units Written: 0 00:15:31.108 Host Read Commands: 0 00:15:31.108 Host Write Commands: 0 00:15:31.108 Controller Busy Time: 0 minutes 00:15:31.108 Power Cycles: 0 00:15:31.108 Power On Hours: 0 hours 00:15:31.108 Unsafe Shutdowns: 0 00:15:31.108 Unrecoverable Media Errors: 0 00:15:31.108 Lifetime Error Log Entries: 0 00:15:31.108 Warning Temperature Time: 0 minutes 00:15:31.108 Critical Temperature Time: 0 minutes 00:15:31.108 00:15:31.108 Number of Queues 00:15:31.108 ================ 00:15:31.108 Number of I/O Submission Queues: 127 00:15:31.108 Number of I/O Completion Queues: 127 00:15:31.108 00:15:31.108 Active Namespaces 00:15:31.108 ================= 00:15:31.108 Namespace ID:1 00:15:31.108 Error Recovery Timeout: Unlimited 00:15:31.108 Command Set Identifier: NVM (00h) 00:15:31.108 Deallocate: Supported 00:15:31.108 Deallocated/Unwritten Error: Not Supported 00:15:31.108 Deallocated Read Value: Unknown 00:15:31.108 Deallocate in Write Zeroes: Not Supported 00:15:31.108 Deallocated Guard Field: 0xFFFF 00:15:31.108 Flush: Supported 00:15:31.108 Reservation: Supported 00:15:31.108 Namespace Sharing Capabilities: Multiple Controllers 00:15:31.108 Size (in LBAs): 131072 (0GiB) 00:15:31.108 Capacity (in LBAs): 131072 (0GiB) 00:15:31.108 Utilization (in LBAs): 131072 (0GiB) 00:15:31.108 NGUID: AB1E232E0660446498BACF2C10077257 00:15:31.108 UUID: ab1e232e-0660-4464-98ba-cf2c10077257 00:15:31.108 Thin Provisioning: Not Supported 00:15:31.108 Per-NS Atomic Units: Yes 00:15:31.108 Atomic Boundary Size (Normal): 0 00:15:31.108 Atomic Boundary Size (PFail): 0 00:15:31.108 Atomic Boundary Offset: 0 00:15:31.108 Maximum Single Source Range Length: 65535 00:15:31.108 Maximum Copy Length: 65535 00:15:31.108 Maximum Source Range Count: 1 00:15:31.108 NGUID/EUI64 Never Reused: No 00:15:31.108 Namespace Write Protected: No 00:15:31.108 Number of LBA Formats: 1 00:15:31.108 Current LBA Format: LBA Format #00 00:15:31.108 LBA Format #00: Data Size: 512 Metadata Size: 0 
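Note: the identify dump above and the perf/arbitration runs that follow all reach the controller through SPDK's transport ID string rather than a PCI address. A minimal sketch of that invocation pattern, using the exact traddr/subnqn and the @84 flags from this run (anything else is a placeholder), assuming it is launched from the SPDK build tree:

    # vfio-user endpoint exported by the target in this test
    TRID='trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2'
    # 5 s of 4 KiB reads at queue depth 128, pinned to core 1 (mask 0x2), as in nvmf_vfio_user.sh@84
    ./build/bin/spdk_nvme_perf -r "$TRID" -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2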
00:15:31.108 00:15:31.108 03:47:49 -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:15:31.108 EAL: No free 2048 kB hugepages reported on node 1 00:15:36.386 Initializing NVMe Controllers 00:15:36.386 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:36.386 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:15:36.386 Initialization complete. Launching workers. 00:15:36.386 ======================================================== 00:15:36.386 Latency(us) 00:15:36.386 Device Information : IOPS MiB/s Average min max 00:15:36.386 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 37105.66 144.94 3448.85 1151.81 6647.24 00:15:36.386 ======================================================== 00:15:36.386 Total : 37105.66 144.94 3448.85 1151.81 6647.24 00:15:36.386 00:15:36.387 03:47:55 -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:15:36.647 EAL: No free 2048 kB hugepages reported on node 1 00:15:41.917 Initializing NVMe Controllers 00:15:41.917 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:41.917 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:15:41.917 Initialization complete. Launching workers. 00:15:41.917 ======================================================== 00:15:41.917 Latency(us) 00:15:41.917 Device Information : IOPS MiB/s Average min max 00:15:41.917 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 35708.14 139.48 3583.94 1152.84 9517.62 00:15:41.917 ======================================================== 00:15:41.917 Total : 35708.14 139.48 3583.94 1152.84 9517.62 00:15:41.917 00:15:41.917 03:48:00 -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:15:41.917 EAL: No free 2048 kB hugepages reported on node 1 00:15:47.182 Initializing NVMe Controllers 00:15:47.182 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:47.182 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:47.182 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:15:47.182 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:15:47.182 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:15:47.182 Initialization complete. Launching workers. 
00:15:47.182 Starting thread on core 2 00:15:47.182 Starting thread on core 3 00:15:47.182 Starting thread on core 1 00:15:47.182 03:48:05 -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:15:47.182 EAL: No free 2048 kB hugepages reported on node 1 00:15:50.471 Initializing NVMe Controllers 00:15:50.471 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:50.471 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:50.471 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:15:50.471 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:15:50.471 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:15:50.471 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:15:50.471 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:15:50.471 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:15:50.471 Initialization complete. Launching workers. 00:15:50.471 Starting thread on core 1 with urgent priority queue 00:15:50.471 Starting thread on core 2 with urgent priority queue 00:15:50.471 Starting thread on core 3 with urgent priority queue 00:15:50.471 Starting thread on core 0 with urgent priority queue 00:15:50.471 SPDK bdev Controller (SPDK2 ) core 0: 5541.67 IO/s 18.05 secs/100000 ios 00:15:50.471 SPDK bdev Controller (SPDK2 ) core 1: 5913.67 IO/s 16.91 secs/100000 ios 00:15:50.471 SPDK bdev Controller (SPDK2 ) core 2: 5599.00 IO/s 17.86 secs/100000 ios 00:15:50.471 SPDK bdev Controller (SPDK2 ) core 3: 5603.67 IO/s 17.85 secs/100000 ios 00:15:50.471 ======================================================== 00:15:50.471 00:15:50.471 03:48:09 -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:15:50.471 EAL: No free 2048 kB hugepages reported on node 1 00:15:50.729 Initializing NVMe Controllers 00:15:50.729 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:50.729 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:50.729 Namespace ID: 1 size: 0GB 00:15:50.729 Initialization complete. 00:15:50.729 INFO: using host memory buffer for IO 00:15:50.729 Hello world! 00:15:50.729 03:48:09 -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:15:50.729 EAL: No free 2048 kB hugepages reported on node 1 00:15:52.107 Initializing NVMe Controllers 00:15:52.107 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:52.107 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:52.107 Initialization complete. Launching workers. 
00:15:52.107 submit (in ns) avg, min, max = 8888.8, 3447.8, 4022997.8 00:15:52.107 complete (in ns) avg, min, max = 25590.6, 2040.0, 4997788.9 00:15:52.107 00:15:52.107 Submit histogram 00:15:52.107 ================ 00:15:52.107 Range in us Cumulative Count 00:15:52.107 3.437 - 3.461: 0.1681% ( 23) 00:15:52.107 3.461 - 3.484: 1.2791% ( 152) 00:15:52.107 3.484 - 3.508: 3.5670% ( 313) 00:15:52.107 3.508 - 3.532: 8.6324% ( 693) 00:15:52.107 3.532 - 3.556: 16.0441% ( 1014) 00:15:52.107 3.556 - 3.579: 25.9996% ( 1362) 00:15:52.107 3.579 - 3.603: 34.5589% ( 1171) 00:15:52.107 3.603 - 3.627: 41.4297% ( 940) 00:15:52.107 3.627 - 3.650: 47.2772% ( 800) 00:15:52.107 3.650 - 3.674: 52.0284% ( 650) 00:15:52.107 3.674 - 3.698: 55.8877% ( 528) 00:15:52.107 3.698 - 3.721: 59.7544% ( 529) 00:15:52.107 3.721 - 3.745: 62.8170% ( 419) 00:15:52.107 3.745 - 3.769: 66.4352% ( 495) 00:15:52.107 3.769 - 3.793: 70.6454% ( 576) 00:15:52.107 3.793 - 3.816: 74.7972% ( 568) 00:15:52.107 3.816 - 3.840: 78.7808% ( 545) 00:15:52.107 3.840 - 3.864: 81.9385% ( 432) 00:15:52.107 3.864 - 3.887: 84.2117% ( 311) 00:15:52.107 3.887 - 3.911: 86.1341% ( 263) 00:15:52.107 3.911 - 3.935: 87.6690% ( 210) 00:15:52.107 3.935 - 3.959: 88.7947% ( 154) 00:15:52.107 3.959 - 3.982: 89.8692% ( 147) 00:15:52.107 3.982 - 4.006: 91.0094% ( 156) 00:15:52.107 4.006 - 4.030: 91.8427% ( 114) 00:15:52.107 4.030 - 4.053: 92.7637% ( 126) 00:15:52.107 4.053 - 4.077: 93.3192% ( 76) 00:15:52.107 4.077 - 4.101: 93.9113% ( 81) 00:15:52.107 4.101 - 4.124: 94.4083% ( 68) 00:15:52.107 4.124 - 4.148: 94.8396% ( 59) 00:15:52.107 4.148 - 4.172: 95.1758% ( 46) 00:15:52.107 4.172 - 4.196: 95.4024% ( 31) 00:15:52.107 4.196 - 4.219: 95.6874% ( 39) 00:15:52.107 4.219 - 4.243: 95.8702% ( 25) 00:15:52.107 4.243 - 4.267: 96.0675% ( 27) 00:15:52.107 4.267 - 4.290: 96.1479% ( 11) 00:15:52.107 4.290 - 4.314: 96.3087% ( 22) 00:15:52.107 4.314 - 4.338: 96.4184% ( 15) 00:15:52.107 4.338 - 4.361: 96.5061% ( 12) 00:15:52.107 4.361 - 4.385: 96.6450% ( 19) 00:15:52.107 4.385 - 4.409: 96.6669% ( 3) 00:15:52.107 4.409 - 4.433: 96.7473% ( 11) 00:15:52.107 4.433 - 4.456: 96.8204% ( 10) 00:15:52.107 4.456 - 4.480: 96.8789% ( 8) 00:15:52.107 4.480 - 4.504: 96.9154% ( 5) 00:15:52.107 4.504 - 4.527: 96.9520% ( 5) 00:15:52.107 4.527 - 4.551: 96.9739% ( 3) 00:15:52.107 4.551 - 4.575: 96.9885% ( 2) 00:15:52.107 4.575 - 4.599: 97.0324% ( 6) 00:15:52.107 4.599 - 4.622: 97.0470% ( 2) 00:15:52.107 4.622 - 4.646: 97.0689% ( 3) 00:15:52.107 4.646 - 4.670: 97.0762% ( 1) 00:15:52.107 4.670 - 4.693: 97.0835% ( 1) 00:15:52.107 4.693 - 4.717: 97.0982% ( 2) 00:15:52.107 4.717 - 4.741: 97.1201% ( 3) 00:15:52.107 4.741 - 4.764: 97.1274% ( 1) 00:15:52.107 4.764 - 4.788: 97.1347% ( 1) 00:15:52.107 4.788 - 4.812: 97.1566% ( 3) 00:15:52.107 4.812 - 4.836: 97.1859% ( 4) 00:15:52.107 4.836 - 4.859: 97.2297% ( 6) 00:15:52.107 4.859 - 4.883: 97.2517% ( 3) 00:15:52.107 4.883 - 4.907: 97.2882% ( 5) 00:15:52.107 4.907 - 4.930: 97.3248% ( 5) 00:15:52.107 4.930 - 4.954: 97.3832% ( 8) 00:15:52.107 4.954 - 4.978: 97.4344% ( 7) 00:15:52.108 4.978 - 5.001: 97.4709% ( 5) 00:15:52.108 5.001 - 5.025: 97.4783% ( 1) 00:15:52.108 5.025 - 5.049: 97.5367% ( 8) 00:15:52.108 5.049 - 5.073: 97.5879% ( 7) 00:15:52.108 5.073 - 5.096: 97.6391% ( 7) 00:15:52.108 5.096 - 5.120: 97.6829% ( 6) 00:15:52.108 5.120 - 5.144: 97.7268% ( 6) 00:15:52.108 5.144 - 5.167: 97.7706% ( 6) 00:15:52.108 5.167 - 5.191: 97.8218% ( 7) 00:15:52.108 5.215 - 5.239: 97.8364% ( 2) 00:15:52.108 5.239 - 5.262: 97.8583% ( 3) 00:15:52.108 5.262 - 5.286: 97.8730% ( 
2) 00:15:52.108 5.286 - 5.310: 97.8803% ( 1) 00:15:52.108 5.333 - 5.357: 97.8949% ( 2) 00:15:52.108 5.357 - 5.381: 97.9095% ( 2) 00:15:52.108 5.381 - 5.404: 97.9314% ( 3) 00:15:52.108 5.404 - 5.428: 97.9387% ( 1) 00:15:52.108 5.428 - 5.452: 97.9534% ( 2) 00:15:52.108 5.452 - 5.476: 97.9607% ( 1) 00:15:52.108 5.476 - 5.499: 97.9680% ( 1) 00:15:52.108 5.499 - 5.523: 97.9753% ( 1) 00:15:52.108 5.594 - 5.618: 97.9899% ( 2) 00:15:52.108 5.641 - 5.665: 98.0045% ( 2) 00:15:52.108 5.689 - 5.713: 98.0192% ( 2) 00:15:52.108 5.713 - 5.736: 98.0338% ( 2) 00:15:52.108 5.736 - 5.760: 98.0411% ( 1) 00:15:52.108 5.760 - 5.784: 98.0557% ( 2) 00:15:52.108 5.784 - 5.807: 98.0630% ( 1) 00:15:52.108 5.807 - 5.831: 98.0703% ( 1) 00:15:52.108 5.831 - 5.855: 98.0776% ( 1) 00:15:52.108 5.855 - 5.879: 98.0922% ( 2) 00:15:52.108 5.879 - 5.902: 98.1069% ( 2) 00:15:52.108 5.902 - 5.926: 98.1215% ( 2) 00:15:52.108 5.950 - 5.973: 98.1288% ( 1) 00:15:52.108 5.973 - 5.997: 98.1361% ( 1) 00:15:52.108 6.021 - 6.044: 98.1507% ( 2) 00:15:52.108 6.044 - 6.068: 98.1580% ( 1) 00:15:52.108 6.116 - 6.163: 98.1653% ( 1) 00:15:52.108 6.210 - 6.258: 98.1726% ( 1) 00:15:52.108 6.637 - 6.684: 98.1800% ( 1) 00:15:52.108 6.684 - 6.732: 98.1946% ( 2) 00:15:52.108 6.969 - 7.016: 98.2019% ( 1) 00:15:52.108 7.016 - 7.064: 98.2092% ( 1) 00:15:52.108 7.064 - 7.111: 98.2165% ( 1) 00:15:52.108 7.159 - 7.206: 98.2238% ( 1) 00:15:52.108 7.301 - 7.348: 98.2311% ( 1) 00:15:52.108 7.348 - 7.396: 98.2384% ( 1) 00:15:52.108 7.443 - 7.490: 98.2457% ( 1) 00:15:52.108 7.775 - 7.822: 98.2531% ( 1) 00:15:52.108 7.870 - 7.917: 98.2677% ( 2) 00:15:52.108 7.964 - 8.012: 98.2750% ( 1) 00:15:52.108 8.012 - 8.059: 98.2896% ( 2) 00:15:52.108 8.107 - 8.154: 98.3261% ( 5) 00:15:52.108 8.154 - 8.201: 98.3335% ( 1) 00:15:52.108 8.296 - 8.344: 98.3408% ( 1) 00:15:52.108 8.391 - 8.439: 98.3627% ( 3) 00:15:52.108 8.439 - 8.486: 98.3700% ( 1) 00:15:52.108 8.486 - 8.533: 98.3773% ( 1) 00:15:52.108 8.533 - 8.581: 98.3919% ( 2) 00:15:52.108 8.581 - 8.628: 98.4065% ( 2) 00:15:52.108 8.723 - 8.770: 98.4139% ( 1) 00:15:52.108 8.770 - 8.818: 98.4212% ( 1) 00:15:52.108 8.818 - 8.865: 98.4431% ( 3) 00:15:52.108 8.865 - 8.913: 98.4504% ( 1) 00:15:52.108 8.913 - 8.960: 98.4577% ( 1) 00:15:52.108 8.960 - 9.007: 98.4650% ( 1) 00:15:52.108 9.007 - 9.055: 98.4723% ( 1) 00:15:52.108 9.055 - 9.102: 98.4870% ( 2) 00:15:52.108 9.150 - 9.197: 98.4943% ( 1) 00:15:52.108 9.244 - 9.292: 98.5089% ( 2) 00:15:52.108 9.292 - 9.339: 98.5235% ( 2) 00:15:52.108 9.339 - 9.387: 98.5308% ( 1) 00:15:52.108 9.434 - 9.481: 98.5527% ( 3) 00:15:52.108 9.624 - 9.671: 98.5600% ( 1) 00:15:52.108 9.671 - 9.719: 98.5674% ( 1) 00:15:52.108 9.719 - 9.766: 98.5747% ( 1) 00:15:52.108 9.766 - 9.813: 98.5820% ( 1) 00:15:52.108 9.813 - 9.861: 98.5966% ( 2) 00:15:52.108 9.861 - 9.908: 98.6039% ( 1) 00:15:52.108 9.956 - 10.003: 98.6112% ( 1) 00:15:52.108 10.050 - 10.098: 98.6185% ( 1) 00:15:52.108 10.098 - 10.145: 98.6331% ( 2) 00:15:52.108 10.145 - 10.193: 98.6405% ( 1) 00:15:52.108 10.193 - 10.240: 98.6551% ( 2) 00:15:52.108 10.240 - 10.287: 98.6697% ( 2) 00:15:52.108 10.287 - 10.335: 98.6770% ( 1) 00:15:52.108 10.382 - 10.430: 98.6843% ( 1) 00:15:52.108 10.667 - 10.714: 98.6989% ( 2) 00:15:52.108 10.714 - 10.761: 98.7062% ( 1) 00:15:52.108 10.761 - 10.809: 98.7135% ( 1) 00:15:52.108 10.809 - 10.856: 98.7209% ( 1) 00:15:52.108 10.856 - 10.904: 98.7282% ( 1) 00:15:52.108 10.951 - 10.999: 98.7428% ( 2) 00:15:52.108 11.046 - 11.093: 98.7501% ( 1) 00:15:52.108 11.283 - 11.330: 98.7574% ( 1) 00:15:52.108 12.041 - 12.089: 
98.7647% ( 1) 00:15:52.108 12.136 - 12.231: 98.7720% ( 1) 00:15:52.108 12.326 - 12.421: 98.7793% ( 1) 00:15:52.108 12.516 - 12.610: 98.7866% ( 1) 00:15:52.108 12.705 - 12.800: 98.7939% ( 1) 00:15:52.108 12.990 - 13.084: 98.8013% ( 1) 00:15:52.108 13.084 - 13.179: 98.8086% ( 1) 00:15:52.108 13.179 - 13.274: 98.8159% ( 1) 00:15:52.108 13.559 - 13.653: 98.8232% ( 1) 00:15:52.108 13.653 - 13.748: 98.8305% ( 1) 00:15:52.108 13.748 - 13.843: 98.8378% ( 1) 00:15:52.108 13.938 - 14.033: 98.8451% ( 1) 00:15:52.108 14.127 - 14.222: 98.8524% ( 1) 00:15:52.108 14.222 - 14.317: 98.8597% ( 1) 00:15:52.108 14.601 - 14.696: 98.8670% ( 1) 00:15:52.108 15.265 - 15.360: 98.8744% ( 1) 00:15:52.108 15.360 - 15.455: 98.8817% ( 1) 00:15:52.108 16.782 - 16.877: 98.8890% ( 1) 00:15:52.108 17.067 - 17.161: 98.8963% ( 1) 00:15:52.108 17.161 - 17.256: 98.9182% ( 3) 00:15:52.108 17.256 - 17.351: 98.9328% ( 2) 00:15:52.108 17.351 - 17.446: 98.9401% ( 1) 00:15:52.108 17.446 - 17.541: 98.9474% ( 1) 00:15:52.108 17.541 - 17.636: 98.9767% ( 4) 00:15:52.108 17.636 - 17.730: 99.0278% ( 7) 00:15:52.108 17.730 - 17.825: 99.0644% ( 5) 00:15:52.108 17.825 - 17.920: 99.0936% ( 4) 00:15:52.108 17.920 - 18.015: 99.1302% ( 5) 00:15:52.108 18.015 - 18.110: 99.2033% ( 10) 00:15:52.108 18.110 - 18.204: 99.3056% ( 14) 00:15:52.108 18.204 - 18.299: 99.3714% ( 9) 00:15:52.108 18.299 - 18.394: 99.4226% ( 7) 00:15:52.108 18.394 - 18.489: 99.4664% ( 6) 00:15:52.108 18.489 - 18.584: 99.5541% ( 12) 00:15:52.108 18.584 - 18.679: 99.6272% ( 10) 00:15:52.108 18.679 - 18.773: 99.6491% ( 3) 00:15:52.108 18.773 - 18.868: 99.6857% ( 5) 00:15:52.108 18.868 - 18.963: 99.7222% ( 5) 00:15:52.108 18.963 - 19.058: 99.7369% ( 2) 00:15:52.108 19.058 - 19.153: 99.7588% ( 3) 00:15:52.108 19.153 - 19.247: 99.7734% ( 2) 00:15:52.108 19.247 - 19.342: 99.7880% ( 2) 00:15:52.108 19.342 - 19.437: 99.7953% ( 1) 00:15:52.108 19.532 - 19.627: 99.8026% ( 1) 00:15:52.108 19.627 - 19.721: 99.8100% ( 1) 00:15:52.108 19.721 - 19.816: 99.8173% ( 1) 00:15:52.108 20.006 - 20.101: 99.8319% ( 2) 00:15:52.108 20.196 - 20.290: 99.8392% ( 1) 00:15:52.108 20.480 - 20.575: 99.8465% ( 1) 00:15:52.108 21.049 - 21.144: 99.8538% ( 1) 00:15:52.108 21.807 - 21.902: 99.8611% ( 1) 00:15:52.108 22.850 - 22.945: 99.8684% ( 1) 00:15:52.108 23.514 - 23.609: 99.8757% ( 1) 00:15:52.108 3980.705 - 4004.978: 99.9415% ( 9) 00:15:52.108 4004.978 - 4029.250: 100.0000% ( 8) 00:15:52.108 00:15:52.108 Complete histogram 00:15:52.108 ================== 00:15:52.108 Range in us Cumulative Count 00:15:52.108 2.039 - 2.050: 3.1577% ( 432) 00:15:52.108 2.050 - 2.062: 20.0497% ( 2311) 00:15:52.108 2.062 - 2.074: 23.7410% ( 505) 00:15:52.108 2.074 - 2.086: 37.4095% ( 1870) 00:15:52.108 2.086 - 2.098: 54.0238% ( 2273) 00:15:52.108 2.098 - 2.110: 56.8818% ( 391) 00:15:52.108 2.110 - 2.121: 61.6548% ( 653) 00:15:52.108 2.121 - 2.133: 65.8212% ( 570) 00:15:52.108 2.133 - 2.145: 67.5389% ( 235) 00:15:52.108 2.145 - 2.157: 75.5793% ( 1100) 00:15:52.108 2.157 - 2.169: 80.1038% ( 619) 00:15:52.108 2.169 - 2.181: 81.2148% ( 152) 00:15:52.108 2.181 - 2.193: 83.9632% ( 376) 00:15:52.108 2.193 - 2.204: 85.8563% ( 259) 00:15:52.108 2.204 - 2.216: 86.9600% ( 151) 00:15:52.108 2.216 - 2.228: 91.0533% ( 560) 00:15:52.108 2.228 - 2.240: 93.1803% ( 291) 00:15:52.108 2.240 - 2.252: 93.8455% ( 91) 00:15:52.108 2.252 - 2.264: 94.3498% ( 69) 00:15:52.108 2.264 - 2.276: 94.6349% ( 39) 00:15:52.108 2.276 - 2.287: 95.0004% ( 50) 00:15:52.108 2.287 - 2.299: 95.3439% ( 47) 00:15:52.108 2.299 - 2.311: 95.4609% ( 16) 00:15:52.108 2.311 - 
2.323: 95.5413% ( 11) 00:15:52.108 2.323 - 2.335: 95.6801% ( 19) 00:15:52.108 2.335 - 2.347: 95.8848% ( 28) 00:15:52.108 2.347 - 2.359: 96.1333% ( 34) 00:15:52.108 2.359 - 2.370: 96.4622% ( 45) 00:15:52.108 2.370 - 2.382: 96.7327% ( 37) 00:15:52.108 2.382 - 2.394: 97.0105% ( 38) 00:15:52.108 2.394 - 2.406: 97.2370% ( 31) 00:15:52.108 2.406 - 2.418: 97.3467% ( 15) 00:15:52.108 2.418 - 2.430: 97.4709% ( 17) 00:15:52.108 2.430 - 2.441: 97.6025% ( 18) 00:15:52.108 2.441 - 2.453: 97.6902% ( 12) 00:15:52.108 2.453 - 2.465: 97.7779% ( 12) 00:15:52.108 2.465 - 2.477: 97.9168% ( 19) 00:15:52.108 2.477 - 2.489: 97.9972% ( 11) 00:15:52.108 2.489 - 2.501: 98.0776% ( 11) 00:15:52.108 2.501 - 2.513: 98.1361% ( 8) 00:15:52.108 2.513 - 2.524: 98.1873% ( 7) 00:15:52.108 2.524 - 2.536: 98.2092% ( 3) 00:15:52.108 2.536 - 2.548: 98.2238% ( 2) 00:15:52.108 2.548 - 2.560: 98.2457% ( 3) 00:15:52.109 2.560 - 2.572: 98.2604% ( 2) 00:15:52.109 2.584 - 2.596: 98.2677% ( 1) 00:15:52.109 2.619 - 2.631: 98.2750% ( 1) 00:15:52.109 2.631 - 2.643: 98.2823% ( 1) 00:15:52.109 2.643 - 2.655: 98.2896% ( 1) 00:15:52.109 2.667 - 2.679: 98.2969% ( 1) 00:15:52.109 2.679 - 2.690: 98.3042% ( 1) 00:15:52.109 2.690 - 2.702: 98.3115% ( 1) 00:15:52.109 2.726 - 2.738: 98.3188% ( 1) 00:15:52.109 2.738 - 2.750: 98.3261% ( 1) 00:15:52.109 2.750 - 2.761: 98.3335% ( 1) 00:15:52.109 2.761 - 2.773: 98.3408% ( 1) 00:15:52.109 2.797 - 2.809: 98.3481% ( 1) 00:15:52.109 2.844 - 2.856: 98.3554% ( 1) 00:15:52.109 2.904 - 2.916: 98.3627% ( 1) 00:15:52.109 2.916 - 2.927: 98.3700% ( 1) 00:15:52.109 2.939 - 2.951: 98.3773% ( 1) 00:15:52.109 2.963 - 2.975: 98.3846% ( 1) 00:15:52.109 3.022 - 3.034: 98.3919% ( 1) 00:15:52.109 3.034 - 3.058: 98.4065% ( 2) 00:15:52.109 3.081 - 3.105: 98.4212% ( 2) 00:15:52.109 3.105 - 3.129: 98.4358% ( 2) 00:15:52.109 3.153 - 3.176: 98.4504% ( 2) 00:15:52.109 3.176 - 3.200: 98.4577% ( 1) 00:15:52.109 3.200 - 3.224: 98.4650% ( 1) 00:15:52.109 3.224 - 3.247: 98.4723% ( 1) 00:15:52.109 3.247 - 3.271: 98.4796% ( 1) 00:15:52.109 3.271 - 3.295: 98.4870% ( 1) 00:15:52.109 3.319 - 3.342: 98.4943% ( 1) 00:15:52.109 3.342 - 3.366: 98.5016% ( 1) 00:15:52.109 3.366 - 3.390: 98.5089% ( 1) 00:15:52.109 3.413 - 3.437: 98.5381% ( 4) 00:15:52.109 3.437 - 3.461: 98.5674% ( 4) 00:15:52.109 3.461 - 3.484: 98.5966% ( 4) 00:15:52.109 3.508 - 3.532: 98.6039% ( 1) 00:15:52.109 3.532 - 3.556: 98.6185% ( 2) 00:15:52.109 3.556 - 3.579: 98.6478% ( 4) 00:15:52.109 3.579 - 3.603: 98.6551% ( 1) 00:15:52.109 3.603 - 3.627: 98.6624% ( 1) 00:15:52.109 3.627 - 3.650: 98.6697% ( 1) 00:15:52.109 3.650 - 3.674: 98.6989% ( 4) 00:15:52.109 3.674 - 3.698: 98.7135% ( 2) 00:15:52.109 3.698 - 3.721: 98.7209% ( 1) 00:15:52.109 3.721 - 3.745: 98.7355% ( 2) 00:15:52.109 3.816 - 3.840: 98.7428% ( 1) 00:15:52.109 3.840 - 3.864: 98.7501% ( 1) 00:15:52.109 3.864 - 3.887: 98.7574% ( 1) 00:15:52.109 3.887 - 3.911: 98.7720% ( 2) 00:15:52.109 3.935 - 3.959: 98.7866% ( 2) 00:15:52.109 4.053 - 4.077: 98.7939% ( 1) 00:15:52.109 4.172 - 4.196: 98.8013% ( 1) 00:15:52.109 5.404 - 5.428: 98.8086% ( 1) 00:15:52.109 5.428 - 5.452: 98.8159% ( 1) 00:15:52.109 5.594 - 5.618: 98.8232% ( 1) 00:15:52.109 6.068 - 6.116: 98.8305% ( 1) 00:15:52.109 6.116 - 6.163: 98.8378% ( 1) 00:15:52.109 6.447 - 6.495: 98.8451% ( 1) 00:15:52.109 6.542 - 6.590: 98.8524% ( 1) 00:15:52.109 6.732 - 6.779: 98.8597% ( 1) 00:15:52.109 6.827 - 6.874: 98.8744% ( 2) 00:15:52.109 6.921 - 6.969: 98.8817% ( 1) 00:15:52.109 6.969 - 7.016: 98.8890% ( 1) 00:15:52.109 7.111 - 7.159: 98.8963% ( 1) 00:15:52.109 7.396 - 7.443: 
98.9036% ( 1) 00:15:52.109 7.490 - 7.538: 98.9109% ( 1) 00:15:52.109 8.154 - 8.201: 98.9182% ( 1) 00:15:52.109 8.249 - 8.296: 98.9255% ( 1) 00:15:52.109 8.391 - 8.439: 98.9328% ( 1) 00:15:52.109 9.150 - 9.197: 98.9401% ( 1) 00:15:52.109 12.136 - 12.231: 98.9474% ( 1) 00:15:52.109 15.170 - 15.265: 98.9548% ( 1) 00:15:52.109 15.360 - 15.455: 98.9621% ( 1) 00:15:52.109 15.455 - 15.550: 98.9840% ( 3) 00:15:52.109 15.739 - 15.834: 98.9913% ( 1) 00:15:52.109 15.834 - 15.929: 98.9986% ( 1) 00:15:52.109 16.024 - 16.119: 99.0352% ( 5) 00:15:52.109 16.119 - 16.213: 99.0498% ( 2) 00:15:52.109 16.213 - 16.308: 99.0790% ( 4) 00:15:52.109 16.308 - 16.403: 99.1229% ( 6) 00:15:52.109 16.403 - 16.498: 99.1594% ( 5) 00:15:52.109 16.498 - 16.593: 99.1887% ( 4) 00:15:52.109 16.593 - 16.687: 99.1960% ( 1) 00:15:52.109 16.687 - 16.782: 99.2471% ( 7) 00:15:52.109 16.782 - 16.877: 99.2837% ( 5) 00:15:52.109 16.877 - 16.972: 99.3056% ( 3) 00:15:52.109 16.972 - 17.067: 99.3275% ( 3) 00:15:52.109 17.161 - 17.256: 99.3348% ( 1) 00:15:52.109 17.256 - 17.351: 99.3422% ( 1) 00:15:52.109 17.446 - 17.541: 99.3495% ( 1) 00:15:52.109 17.541 - 17.636: 99.3568% ( 1) 00:15:52.109 17.730 - 17.825: 99.3641% ( 1) 00:15:52.109 18.015 - 18.110: 99.3714% ( 1) 00:15:52.109 18.394 - 18.489: 99.3860% ( 2) 00:15:52.109 20.480 - 20.575: 99.3933% ( 1) 00:15:52.109 23.893 - 23.988: 99.4006% ( 1) 00:15:52.109 25.410 - 25.600: 99.4079% ( 1) 00:15:52.109 25.790 - 25.979: 99.4152% ( 1) 00:15:52.109 2597.167 - 2609.304: 99.4226% ( 1) 00:15:52.109 3980.705 - 4004.978: 99.7588% ( 46) 00:15:52.109 4004.978 - 4029.250: 99.9927% ( 32) 00:15:52.109 4975.881 - 5000.154: 100.0000% ( 1) 00:15:52.109 00:15:52.109 03:48:10 -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:15:52.109 03:48:10 -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:15:52.109 03:48:10 -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:15:52.109 03:48:10 -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:15:52.109 03:48:10 -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:52.366 [ 00:15:52.366 { 00:15:52.366 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:52.366 "subtype": "Discovery", 00:15:52.366 "listen_addresses": [], 00:15:52.366 "allow_any_host": true, 00:15:52.366 "hosts": [] 00:15:52.366 }, 00:15:52.366 { 00:15:52.366 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:52.366 "subtype": "NVMe", 00:15:52.366 "listen_addresses": [ 00:15:52.366 { 00:15:52.366 "transport": "VFIOUSER", 00:15:52.366 "trtype": "VFIOUSER", 00:15:52.366 "adrfam": "IPv4", 00:15:52.366 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:52.366 "trsvcid": "0" 00:15:52.366 } 00:15:52.366 ], 00:15:52.366 "allow_any_host": true, 00:15:52.366 "hosts": [], 00:15:52.366 "serial_number": "SPDK1", 00:15:52.366 "model_number": "SPDK bdev Controller", 00:15:52.366 "max_namespaces": 32, 00:15:52.366 "min_cntlid": 1, 00:15:52.366 "max_cntlid": 65519, 00:15:52.366 "namespaces": [ 00:15:52.366 { 00:15:52.366 "nsid": 1, 00:15:52.366 "bdev_name": "Malloc1", 00:15:52.366 "name": "Malloc1", 00:15:52.366 "nguid": "E028DDF3619B4B1DAB5839D160320DC5", 00:15:52.366 "uuid": "e028ddf3-619b-4b1d-ab58-39d160320dc5" 00:15:52.366 }, 00:15:52.366 { 00:15:52.366 "nsid": 2, 00:15:52.366 "bdev_name": "Malloc3", 00:15:52.366 "name": "Malloc3", 00:15:52.366 "nguid": "D3320615D1DE4FFAACF1AE3AA50407EE", 
00:15:52.366 "uuid": "d3320615-d1de-4ffa-acf1-ae3aa50407ee" 00:15:52.366 } 00:15:52.366 ] 00:15:52.366 }, 00:15:52.366 { 00:15:52.366 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:52.367 "subtype": "NVMe", 00:15:52.367 "listen_addresses": [ 00:15:52.367 { 00:15:52.367 "transport": "VFIOUSER", 00:15:52.367 "trtype": "VFIOUSER", 00:15:52.367 "adrfam": "IPv4", 00:15:52.367 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:52.367 "trsvcid": "0" 00:15:52.367 } 00:15:52.367 ], 00:15:52.367 "allow_any_host": true, 00:15:52.367 "hosts": [], 00:15:52.367 "serial_number": "SPDK2", 00:15:52.367 "model_number": "SPDK bdev Controller", 00:15:52.367 "max_namespaces": 32, 00:15:52.367 "min_cntlid": 1, 00:15:52.367 "max_cntlid": 65519, 00:15:52.367 "namespaces": [ 00:15:52.367 { 00:15:52.367 "nsid": 1, 00:15:52.367 "bdev_name": "Malloc2", 00:15:52.367 "name": "Malloc2", 00:15:52.367 "nguid": "AB1E232E0660446498BACF2C10077257", 00:15:52.367 "uuid": "ab1e232e-0660-4464-98ba-cf2c10077257" 00:15:52.367 } 00:15:52.367 ] 00:15:52.367 } 00:15:52.367 ] 00:15:52.367 03:48:11 -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:15:52.367 03:48:11 -- target/nvmf_vfio_user.sh@34 -- # aerpid=2359638 00:15:52.367 03:48:11 -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:15:52.367 03:48:11 -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:15:52.367 03:48:11 -- common/autotest_common.sh@1244 -- # local i=0 00:15:52.367 03:48:11 -- common/autotest_common.sh@1245 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:52.367 03:48:11 -- common/autotest_common.sh@1251 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:52.367 03:48:11 -- common/autotest_common.sh@1255 -- # return 0 00:15:52.367 03:48:11 -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:15:52.367 03:48:11 -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:15:52.367 EAL: No free 2048 kB hugepages reported on node 1 00:15:52.624 Malloc4 00:15:52.625 03:48:11 -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:15:52.882 03:48:11 -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:52.883 Asynchronous Event Request test 00:15:52.883 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:52.883 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:52.883 Registering asynchronous event callbacks... 00:15:52.883 Starting namespace attribute notice tests for all controllers... 00:15:52.883 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:15:52.883 aer_cb - Changed Namespace 00:15:52.883 Cleaning up... 
00:15:53.142 [ 00:15:53.142 { 00:15:53.142 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:53.142 "subtype": "Discovery", 00:15:53.142 "listen_addresses": [], 00:15:53.142 "allow_any_host": true, 00:15:53.142 "hosts": [] 00:15:53.142 }, 00:15:53.142 { 00:15:53.142 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:53.142 "subtype": "NVMe", 00:15:53.142 "listen_addresses": [ 00:15:53.142 { 00:15:53.142 "transport": "VFIOUSER", 00:15:53.142 "trtype": "VFIOUSER", 00:15:53.142 "adrfam": "IPv4", 00:15:53.142 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:53.142 "trsvcid": "0" 00:15:53.142 } 00:15:53.142 ], 00:15:53.142 "allow_any_host": true, 00:15:53.142 "hosts": [], 00:15:53.142 "serial_number": "SPDK1", 00:15:53.142 "model_number": "SPDK bdev Controller", 00:15:53.142 "max_namespaces": 32, 00:15:53.142 "min_cntlid": 1, 00:15:53.142 "max_cntlid": 65519, 00:15:53.142 "namespaces": [ 00:15:53.142 { 00:15:53.142 "nsid": 1, 00:15:53.142 "bdev_name": "Malloc1", 00:15:53.142 "name": "Malloc1", 00:15:53.142 "nguid": "E028DDF3619B4B1DAB5839D160320DC5", 00:15:53.142 "uuid": "e028ddf3-619b-4b1d-ab58-39d160320dc5" 00:15:53.142 }, 00:15:53.142 { 00:15:53.142 "nsid": 2, 00:15:53.142 "bdev_name": "Malloc3", 00:15:53.142 "name": "Malloc3", 00:15:53.142 "nguid": "D3320615D1DE4FFAACF1AE3AA50407EE", 00:15:53.142 "uuid": "d3320615-d1de-4ffa-acf1-ae3aa50407ee" 00:15:53.142 } 00:15:53.142 ] 00:15:53.142 }, 00:15:53.142 { 00:15:53.142 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:53.142 "subtype": "NVMe", 00:15:53.142 "listen_addresses": [ 00:15:53.142 { 00:15:53.142 "transport": "VFIOUSER", 00:15:53.142 "trtype": "VFIOUSER", 00:15:53.142 "adrfam": "IPv4", 00:15:53.142 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:53.142 "trsvcid": "0" 00:15:53.142 } 00:15:53.142 ], 00:15:53.142 "allow_any_host": true, 00:15:53.143 "hosts": [], 00:15:53.143 "serial_number": "SPDK2", 00:15:53.143 "model_number": "SPDK bdev Controller", 00:15:53.143 "max_namespaces": 32, 00:15:53.143 "min_cntlid": 1, 00:15:53.143 "max_cntlid": 65519, 00:15:53.143 "namespaces": [ 00:15:53.143 { 00:15:53.143 "nsid": 1, 00:15:53.143 "bdev_name": "Malloc2", 00:15:53.143 "name": "Malloc2", 00:15:53.143 "nguid": "AB1E232E0660446498BACF2C10077257", 00:15:53.143 "uuid": "ab1e232e-0660-4464-98ba-cf2c10077257" 00:15:53.143 }, 00:15:53.143 { 00:15:53.143 "nsid": 2, 00:15:53.143 "bdev_name": "Malloc4", 00:15:53.143 "name": "Malloc4", 00:15:53.143 "nguid": "F4BE6695932C4D03A2A1AF0339180F6E", 00:15:53.143 "uuid": "f4be6695-932c-4d03-a2a1-af0339180f6e" 00:15:53.143 } 00:15:53.143 ] 00:15:53.143 } 00:15:53.143 ] 00:15:53.143 03:48:11 -- target/nvmf_vfio_user.sh@44 -- # wait 2359638 00:15:53.143 03:48:11 -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:15:53.143 03:48:11 -- target/nvmf_vfio_user.sh@95 -- # killprocess 2353252 00:15:53.143 03:48:11 -- common/autotest_common.sh@926 -- # '[' -z 2353252 ']' 00:15:53.143 03:48:11 -- common/autotest_common.sh@930 -- # kill -0 2353252 00:15:53.143 03:48:11 -- common/autotest_common.sh@931 -- # uname 00:15:53.143 03:48:11 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:53.143 03:48:11 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2353252 00:15:53.143 03:48:12 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:15:53.143 03:48:12 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:15:53.143 03:48:12 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2353252' 00:15:53.143 killing process with pid 2353252 00:15:53.143 
03:48:12 -- common/autotest_common.sh@945 -- # kill 2353252 00:15:53.143 [2024-07-14 03:48:12.005999] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:15:53.143 03:48:12 -- common/autotest_common.sh@950 -- # wait 2353252 00:15:53.401 03:48:12 -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:15:53.401 03:48:12 -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:15:53.401 03:48:12 -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:15:53.401 03:48:12 -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:15:53.401 03:48:12 -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:15:53.401 03:48:12 -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=2359789 00:15:53.401 03:48:12 -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:15:53.401 03:48:12 -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 2359789' 00:15:53.401 Process pid: 2359789 00:15:53.401 03:48:12 -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:53.401 03:48:12 -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 2359789 00:15:53.401 03:48:12 -- common/autotest_common.sh@819 -- # '[' -z 2359789 ']' 00:15:53.402 03:48:12 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:53.402 03:48:12 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:53.402 03:48:12 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:53.402 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:53.402 03:48:12 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:53.402 03:48:12 -- common/autotest_common.sh@10 -- # set +x 00:15:53.661 [2024-07-14 03:48:12.366387] thread.c:2927:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:15:53.661 [2024-07-14 03:48:12.367380] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:15:53.661 [2024-07-14 03:48:12.367452] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:53.661 EAL: No free 2048 kB hugepages reported on node 1 00:15:53.661 [2024-07-14 03:48:12.429536] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:53.661 [2024-07-14 03:48:12.514912] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:53.661 [2024-07-14 03:48:12.515067] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:53.661 [2024-07-14 03:48:12.515084] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:53.661 [2024-07-14 03:48:12.515103] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:53.661 [2024-07-14 03:48:12.515184] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:53.661 [2024-07-14 03:48:12.515248] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:53.661 [2024-07-14 03:48:12.515312] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:53.661 [2024-07-14 03:48:12.515314] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:53.921 [2024-07-14 03:48:12.613181] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_0) to intr mode from intr mode. 00:15:53.921 [2024-07-14 03:48:12.613446] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_1) to intr mode from intr mode. 00:15:53.921 [2024-07-14 03:48:12.613710] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_2) to intr mode from intr mode. 00:15:53.921 [2024-07-14 03:48:12.614479] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:15:53.921 [2024-07-14 03:48:12.614576] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_3) to intr mode from intr mode. 00:15:54.496 03:48:13 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:54.496 03:48:13 -- common/autotest_common.sh@852 -- # return 0 00:15:54.496 03:48:13 -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:15:55.463 03:48:14 -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:15:55.729 03:48:14 -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:15:55.729 03:48:14 -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:15:55.729 03:48:14 -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:55.729 03:48:14 -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:15:55.729 03:48:14 -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:55.987 Malloc1 00:15:55.987 03:48:14 -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:15:56.246 03:48:15 -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:15:56.504 03:48:15 -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:15:56.763 03:48:15 -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:56.763 03:48:15 -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:15:56.763 03:48:15 -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:15:57.022 Malloc2 00:15:57.282 03:48:15 -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:15:57.282 03:48:16 -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:15:57.540 03:48:16 -- target/nvmf_vfio_user.sh@74 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:15:57.799 03:48:16 -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:15:57.799 03:48:16 -- target/nvmf_vfio_user.sh@95 -- # killprocess 2359789 00:15:57.799 03:48:16 -- common/autotest_common.sh@926 -- # '[' -z 2359789 ']' 00:15:57.799 03:48:16 -- common/autotest_common.sh@930 -- # kill -0 2359789 00:15:57.799 03:48:16 -- common/autotest_common.sh@931 -- # uname 00:15:57.799 03:48:16 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:57.799 03:48:16 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2359789 00:15:57.799 03:48:16 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:15:57.799 03:48:16 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:15:57.799 03:48:16 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2359789' 00:15:57.799 killing process with pid 2359789 00:15:57.799 03:48:16 -- common/autotest_common.sh@945 -- # kill 2359789 00:15:57.799 03:48:16 -- common/autotest_common.sh@950 -- # wait 2359789 00:15:58.058 03:48:16 -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:15:58.058 03:48:16 -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:15:58.058 00:15:58.058 real 0m53.779s 00:15:58.058 user 3m32.872s 00:15:58.058 sys 0m4.584s 00:15:58.058 03:48:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:58.058 03:48:16 -- common/autotest_common.sh@10 -- # set +x 00:15:58.058 ************************************ 00:15:58.058 END TEST nvmf_vfio_user 00:15:58.058 ************************************ 00:15:58.319 03:48:17 -- nvmf/nvmf.sh@41 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:15:58.319 03:48:17 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:15:58.319 03:48:17 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:58.319 03:48:17 -- common/autotest_common.sh@10 -- # set +x 00:15:58.319 ************************************ 00:15:58.319 START TEST nvmf_vfio_user_nvme_compliance 00:15:58.319 ************************************ 00:15:58.319 03:48:17 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:15:58.319 * Looking for test storage... 
00:15:58.319 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:15:58.319 03:48:17 -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:58.319 03:48:17 -- nvmf/common.sh@7 -- # uname -s 00:15:58.319 03:48:17 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:58.319 03:48:17 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:58.319 03:48:17 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:58.319 03:48:17 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:58.319 03:48:17 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:58.319 03:48:17 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:58.319 03:48:17 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:58.319 03:48:17 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:58.319 03:48:17 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:58.319 03:48:17 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:58.319 03:48:17 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:58.319 03:48:17 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:15:58.319 03:48:17 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:58.319 03:48:17 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:58.319 03:48:17 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:58.319 03:48:17 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:58.319 03:48:17 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:58.319 03:48:17 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:58.319 03:48:17 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:58.320 03:48:17 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:58.320 03:48:17 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:58.320 03:48:17 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:58.320 03:48:17 -- paths/export.sh@5 -- # export PATH 00:15:58.320 03:48:17 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:58.320 03:48:17 -- nvmf/common.sh@46 -- # : 0 00:15:58.320 03:48:17 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:58.320 03:48:17 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:58.320 03:48:17 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:58.320 03:48:17 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:58.320 03:48:17 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:58.320 03:48:17 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:15:58.320 03:48:17 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:58.320 03:48:17 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:58.320 03:48:17 -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:58.320 03:48:17 -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:58.320 03:48:17 -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:15:58.320 03:48:17 -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:15:58.320 03:48:17 -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:15:58.320 03:48:17 -- compliance/compliance.sh@20 -- # nvmfpid=2360410 00:15:58.320 03:48:17 -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:15:58.320 03:48:17 -- compliance/compliance.sh@21 -- # echo 'Process pid: 2360410' 00:15:58.320 Process pid: 2360410 00:15:58.320 03:48:17 -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:58.320 03:48:17 -- compliance/compliance.sh@24 -- # waitforlisten 2360410 00:15:58.320 03:48:17 -- common/autotest_common.sh@819 -- # '[' -z 2360410 ']' 00:15:58.320 03:48:17 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:58.320 03:48:17 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:58.320 03:48:17 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:58.320 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:58.320 03:48:17 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:58.320 03:48:17 -- common/autotest_common.sh@10 -- # set +x 00:15:58.320 [2024-07-14 03:48:17.120994] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:15:58.320 [2024-07-14 03:48:17.121081] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:58.320 EAL: No free 2048 kB hugepages reported on node 1 00:15:58.320 [2024-07-14 03:48:17.184497] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:58.579 [2024-07-14 03:48:17.275818] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:58.579 [2024-07-14 03:48:17.275988] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:58.579 [2024-07-14 03:48:17.276009] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:58.579 [2024-07-14 03:48:17.276022] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:58.579 [2024-07-14 03:48:17.276096] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:58.579 [2024-07-14 03:48:17.276153] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:58.579 [2024-07-14 03:48:17.276156] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:59.147 03:48:18 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:59.147 03:48:18 -- common/autotest_common.sh@852 -- # return 0 00:15:59.147 03:48:18 -- compliance/compliance.sh@26 -- # sleep 1 00:16:00.527 03:48:19 -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:16:00.527 03:48:19 -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:16:00.527 03:48:19 -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:16:00.527 03:48:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:00.527 03:48:19 -- common/autotest_common.sh@10 -- # set +x 00:16:00.527 03:48:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:00.527 03:48:19 -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:16:00.527 03:48:19 -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:16:00.527 03:48:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:00.527 03:48:19 -- common/autotest_common.sh@10 -- # set +x 00:16:00.527 malloc0 00:16:00.527 03:48:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:00.527 03:48:19 -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:16:00.528 03:48:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:00.528 03:48:19 -- common/autotest_common.sh@10 -- # set +x 00:16:00.528 03:48:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:00.528 03:48:19 -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:16:00.528 03:48:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:00.528 03:48:19 -- common/autotest_common.sh@10 -- # set +x 00:16:00.528 03:48:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:00.528 03:48:19 -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:16:00.528 03:48:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:00.528 03:48:19 -- common/autotest_common.sh@10 -- # set +x 00:16:00.528 03:48:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:00.528 03:48:19 -- compliance/compliance.sh@40 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:16:00.528 EAL: No free 2048 kB hugepages reported on node 1 00:16:00.528 00:16:00.528 00:16:00.528 CUnit - A unit testing framework for C - Version 2.1-3 00:16:00.528 http://cunit.sourceforge.net/ 00:16:00.528 00:16:00.528 00:16:00.528 Suite: nvme_compliance 00:16:00.528 Test: admin_identify_ctrlr_verify_dptr ...[2024-07-14 03:48:19.300525] vfio_user.c: 789:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:16:00.528 [2024-07-14 03:48:19.300567] vfio_user.c:5484:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:16:00.528 [2024-07-14 03:48:19.300580] vfio_user.c:5576:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:16:00.528 passed 00:16:00.528 Test: admin_identify_ctrlr_verify_fused ...passed 00:16:00.786 Test: admin_identify_ns ...[2024-07-14 03:48:19.540894] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:16:00.786 [2024-07-14 03:48:19.548880] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:16:00.786 passed 00:16:00.786 Test: admin_get_features_mandatory_features ...passed 00:16:01.045 Test: admin_get_features_optional_features ...passed 00:16:01.045 Test: admin_set_features_number_of_queues ...passed 00:16:01.305 Test: admin_get_log_page_mandatory_logs ...passed 00:16:01.305 Test: admin_get_log_page_with_lpo ...[2024-07-14 03:48:20.172900] ctrlr.c:2546:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:16:01.305 passed 00:16:01.563 Test: fabric_property_get ...passed 00:16:01.563 Test: admin_delete_io_sq_use_admin_qid ...[2024-07-14 03:48:20.358629] vfio_user.c:2300:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:16:01.563 passed 00:16:01.821 Test: admin_delete_io_sq_delete_sq_twice ...[2024-07-14 03:48:20.531896] vfio_user.c:2300:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:16:01.821 [2024-07-14 03:48:20.547893] vfio_user.c:2300:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:16:01.821 passed 00:16:01.821 Test: admin_delete_io_cq_use_admin_qid ...[2024-07-14 03:48:20.636924] vfio_user.c:2300:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:16:01.821 passed 00:16:02.079 Test: admin_delete_io_cq_delete_cq_first ...[2024-07-14 03:48:20.800895] vfio_user.c:2310:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:16:02.079 [2024-07-14 03:48:20.824891] vfio_user.c:2300:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:16:02.079 passed 00:16:02.079 Test: admin_create_io_cq_verify_iv_pc ...[2024-07-14 03:48:20.914840] vfio_user.c:2150:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:16:02.079 [2024-07-14 03:48:20.914904] vfio_user.c:2144:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:16:02.079 passed 00:16:02.339 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-07-14 03:48:21.091879] vfio_user.c:2231:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:16:02.339 [2024-07-14 03:48:21.099895] vfio_user.c:2231:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:16:02.339 [2024-07-14 03:48:21.107881] vfio_user.c:2031:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid 
cqid:0 00:16:02.339 [2024-07-14 03:48:21.115873] vfio_user.c:2031:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:16:02.339 passed 00:16:02.339 Test: admin_create_io_sq_verify_pc ...[2024-07-14 03:48:21.241888] vfio_user.c:2044:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:16:02.597 passed 00:16:03.535 Test: admin_create_io_qp_max_qps ...[2024-07-14 03:48:22.441883] nvme_ctrlr.c:5318:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:16:04.104 passed 00:16:04.364 Test: admin_create_io_sq_shared_cq ...[2024-07-14 03:48:23.047898] vfio_user.c:2310:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:16:04.364 passed 00:16:04.364 00:16:04.364 Run Summary: Type Total Ran Passed Failed Inactive 00:16:04.364 suites 1 1 n/a 0 0 00:16:04.364 tests 18 18 18 0 0 00:16:04.364 asserts 360 360 360 0 n/a 00:16:04.364 00:16:04.364 Elapsed time = 1.571 seconds 00:16:04.364 03:48:23 -- compliance/compliance.sh@42 -- # killprocess 2360410 00:16:04.364 03:48:23 -- common/autotest_common.sh@926 -- # '[' -z 2360410 ']' 00:16:04.364 03:48:23 -- common/autotest_common.sh@930 -- # kill -0 2360410 00:16:04.364 03:48:23 -- common/autotest_common.sh@931 -- # uname 00:16:04.364 03:48:23 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:04.364 03:48:23 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2360410 00:16:04.364 03:48:23 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:16:04.364 03:48:23 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:16:04.364 03:48:23 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2360410' 00:16:04.364 killing process with pid 2360410 00:16:04.365 03:48:23 -- common/autotest_common.sh@945 -- # kill 2360410 00:16:04.365 03:48:23 -- common/autotest_common.sh@950 -- # wait 2360410 00:16:04.623 03:48:23 -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:16:04.623 03:48:23 -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:16:04.623 00:16:04.623 real 0m6.410s 00:16:04.623 user 0m18.365s 00:16:04.623 sys 0m0.574s 00:16:04.623 03:48:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:04.623 03:48:23 -- common/autotest_common.sh@10 -- # set +x 00:16:04.623 ************************************ 00:16:04.623 END TEST nvmf_vfio_user_nvme_compliance 00:16:04.623 ************************************ 00:16:04.623 03:48:23 -- nvmf/nvmf.sh@42 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:16:04.623 03:48:23 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:16:04.623 03:48:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:04.623 03:48:23 -- common/autotest_common.sh@10 -- # set +x 00:16:04.623 ************************************ 00:16:04.623 START TEST nvmf_vfio_user_fuzz 00:16:04.623 ************************************ 00:16:04.623 03:48:23 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:16:04.623 * Looking for test storage... 
00:16:04.623 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:04.623 03:48:23 -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:04.623 03:48:23 -- nvmf/common.sh@7 -- # uname -s 00:16:04.623 03:48:23 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:04.623 03:48:23 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:04.623 03:48:23 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:04.623 03:48:23 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:04.623 03:48:23 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:04.623 03:48:23 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:04.623 03:48:23 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:04.623 03:48:23 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:04.623 03:48:23 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:04.623 03:48:23 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:04.623 03:48:23 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:04.623 03:48:23 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:04.623 03:48:23 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:04.623 03:48:23 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:04.623 03:48:23 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:04.623 03:48:23 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:04.623 03:48:23 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:04.623 03:48:23 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:04.623 03:48:23 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:04.623 03:48:23 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:04.623 03:48:23 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:04.623 03:48:23 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:04.623 03:48:23 -- paths/export.sh@5 -- # export PATH 00:16:04.623 03:48:23 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:04.623 03:48:23 -- nvmf/common.sh@46 -- # : 0 00:16:04.623 03:48:23 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:04.623 03:48:23 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:04.623 03:48:23 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:04.623 03:48:23 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:04.623 03:48:23 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:04.623 03:48:23 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:04.623 03:48:23 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:04.623 03:48:23 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:04.623 03:48:23 -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:16:04.623 03:48:23 -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:16:04.623 03:48:23 -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:16:04.623 03:48:23 -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:16:04.623 03:48:23 -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:16:04.623 03:48:23 -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:16:04.623 03:48:23 -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:16:04.623 03:48:23 -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=2361278 00:16:04.623 03:48:23 -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:16:04.623 03:48:23 -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 2361278' 00:16:04.623 Process pid: 2361278 00:16:04.623 03:48:23 -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:16:04.623 03:48:23 -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 2361278 00:16:04.623 03:48:23 -- common/autotest_common.sh@819 -- # '[' -z 2361278 ']' 00:16:04.623 03:48:23 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:04.623 03:48:23 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:04.623 03:48:23 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:04.623 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
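Both the compliance run above and the fuzz run that follows configure the same vfio-user subsystem through the rpc_cmd wrapper: create the VFIOUSER transport, back it with a 64 MiB / 512-byte-block malloc bdev, and expose nqn.2021-09.io.spdk:cnode0 on /var/run/vfio-user. A rough standalone equivalent using SPDK's rpc.py (path assumed; the compliance run additionally passes -m 32 to cap namespaces) would be:

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py   # assumed location
$RPC nvmf_create_transport -t VFIOUSER
mkdir -p /var/run/vfio-user
$RPC bdev_malloc_create 64 512 -b malloc0
$RPC nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk
$RPC nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
$RPC nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0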
00:16:04.623 03:48:23 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:04.623 03:48:23 -- common/autotest_common.sh@10 -- # set +x 00:16:06.001 03:48:24 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:06.001 03:48:24 -- common/autotest_common.sh@852 -- # return 0 00:16:06.001 03:48:24 -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:16:06.939 03:48:25 -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:16:06.939 03:48:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:06.939 03:48:25 -- common/autotest_common.sh@10 -- # set +x 00:16:06.939 03:48:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:06.939 03:48:25 -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:16:06.939 03:48:25 -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:16:06.939 03:48:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:06.939 03:48:25 -- common/autotest_common.sh@10 -- # set +x 00:16:06.939 malloc0 00:16:06.939 03:48:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:06.939 03:48:25 -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:16:06.939 03:48:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:06.939 03:48:25 -- common/autotest_common.sh@10 -- # set +x 00:16:06.939 03:48:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:06.939 03:48:25 -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:16:06.939 03:48:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:06.939 03:48:25 -- common/autotest_common.sh@10 -- # set +x 00:16:06.939 03:48:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:06.939 03:48:25 -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:16:06.939 03:48:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:06.939 03:48:25 -- common/autotest_common.sh@10 -- # set +x 00:16:06.939 03:48:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:06.939 03:48:25 -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:16:06.939 03:48:25 -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/vfio_user_fuzz -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:16:39.055 Fuzzing completed. 
Shutting down the fuzz application 00:16:39.055 00:16:39.055 Dumping successful admin opcodes: 00:16:39.055 8, 9, 10, 24, 00:16:39.055 Dumping successful io opcodes: 00:16:39.055 0, 00:16:39.055 NS: 0x200003a1ef00 I/O qp, Total commands completed: 590941, total successful commands: 2281, random_seed: 1602463616 00:16:39.055 NS: 0x200003a1ef00 admin qp, Total commands completed: 147318, total successful commands: 1192, random_seed: 2335298496 00:16:39.056 03:48:56 -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:16:39.056 03:48:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:39.056 03:48:56 -- common/autotest_common.sh@10 -- # set +x 00:16:39.056 03:48:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:39.056 03:48:56 -- target/vfio_user_fuzz.sh@46 -- # killprocess 2361278 00:16:39.056 03:48:56 -- common/autotest_common.sh@926 -- # '[' -z 2361278 ']' 00:16:39.056 03:48:56 -- common/autotest_common.sh@930 -- # kill -0 2361278 00:16:39.056 03:48:56 -- common/autotest_common.sh@931 -- # uname 00:16:39.056 03:48:56 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:39.056 03:48:56 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2361278 00:16:39.056 03:48:56 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:16:39.056 03:48:56 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:16:39.056 03:48:56 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2361278' 00:16:39.056 killing process with pid 2361278 00:16:39.056 03:48:56 -- common/autotest_common.sh@945 -- # kill 2361278 00:16:39.056 03:48:56 -- common/autotest_common.sh@950 -- # wait 2361278 00:16:39.056 03:48:56 -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:16:39.056 03:48:56 -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:16:39.056 00:16:39.056 real 0m32.931s 00:16:39.056 user 0m33.378s 00:16:39.056 sys 0m26.457s 00:16:39.056 03:48:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:39.056 03:48:56 -- common/autotest_common.sh@10 -- # set +x 00:16:39.056 ************************************ 00:16:39.056 END TEST nvmf_vfio_user_fuzz 00:16:39.056 ************************************ 00:16:39.056 03:48:56 -- nvmf/nvmf.sh@46 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:16:39.056 03:48:56 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:16:39.056 03:48:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:39.056 03:48:56 -- common/autotest_common.sh@10 -- # set +x 00:16:39.056 ************************************ 00:16:39.056 START TEST nvmf_host_management 00:16:39.056 ************************************ 00:16:39.056 03:48:56 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:16:39.056 * Looking for test storage... 
00:16:39.056 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:39.056 03:48:56 -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:39.056 03:48:56 -- nvmf/common.sh@7 -- # uname -s 00:16:39.056 03:48:56 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:39.056 03:48:56 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:39.056 03:48:56 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:39.056 03:48:56 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:39.056 03:48:56 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:39.056 03:48:56 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:39.056 03:48:56 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:39.056 03:48:56 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:39.056 03:48:56 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:39.056 03:48:56 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:39.056 03:48:56 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:39.056 03:48:56 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:39.056 03:48:56 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:39.056 03:48:56 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:39.056 03:48:56 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:39.056 03:48:56 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:39.056 03:48:56 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:39.056 03:48:56 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:39.056 03:48:56 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:39.056 03:48:56 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:39.056 03:48:56 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:39.056 03:48:56 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:39.056 03:48:56 -- paths/export.sh@5 -- # export PATH 00:16:39.056 03:48:56 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:39.056 03:48:56 -- nvmf/common.sh@46 -- # : 0 00:16:39.056 03:48:56 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:39.056 03:48:56 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:39.056 03:48:56 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:39.056 03:48:56 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:39.056 03:48:56 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:39.056 03:48:56 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:39.056 03:48:56 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:39.056 03:48:56 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:39.056 03:48:56 -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:39.056 03:48:56 -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:39.056 03:48:56 -- target/host_management.sh@104 -- # nvmftestinit 00:16:39.056 03:48:56 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:16:39.056 03:48:56 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:39.056 03:48:56 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:39.056 03:48:56 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:39.056 03:48:56 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:39.056 03:48:56 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:39.056 03:48:56 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:39.056 03:48:56 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:39.056 03:48:56 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:16:39.056 03:48:56 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:16:39.056 03:48:56 -- nvmf/common.sh@284 -- # xtrace_disable 00:16:39.056 03:48:56 -- common/autotest_common.sh@10 -- # set +x 00:16:39.623 03:48:58 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:16:39.623 03:48:58 -- nvmf/common.sh@290 -- # pci_devs=() 00:16:39.623 03:48:58 -- nvmf/common.sh@290 -- # local -a pci_devs 00:16:39.623 03:48:58 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:16:39.623 03:48:58 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:16:39.623 03:48:58 -- nvmf/common.sh@292 -- # pci_drivers=() 00:16:39.623 03:48:58 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:16:39.623 03:48:58 -- nvmf/common.sh@294 -- # net_devs=() 00:16:39.623 03:48:58 -- nvmf/common.sh@294 -- # local -ga net_devs 00:16:39.623 
03:48:58 -- nvmf/common.sh@295 -- # e810=() 00:16:39.623 03:48:58 -- nvmf/common.sh@295 -- # local -ga e810 00:16:39.623 03:48:58 -- nvmf/common.sh@296 -- # x722=() 00:16:39.623 03:48:58 -- nvmf/common.sh@296 -- # local -ga x722 00:16:39.623 03:48:58 -- nvmf/common.sh@297 -- # mlx=() 00:16:39.623 03:48:58 -- nvmf/common.sh@297 -- # local -ga mlx 00:16:39.623 03:48:58 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:39.623 03:48:58 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:39.623 03:48:58 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:39.623 03:48:58 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:39.623 03:48:58 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:39.623 03:48:58 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:39.623 03:48:58 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:39.623 03:48:58 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:39.623 03:48:58 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:39.623 03:48:58 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:39.623 03:48:58 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:39.623 03:48:58 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:16:39.623 03:48:58 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:16:39.623 03:48:58 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:16:39.623 03:48:58 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:16:39.623 03:48:58 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:16:39.623 03:48:58 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:16:39.623 03:48:58 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:39.623 03:48:58 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:39.623 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:39.623 03:48:58 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:16:39.624 03:48:58 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:16:39.624 03:48:58 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:39.624 03:48:58 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:39.624 03:48:58 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:16:39.624 03:48:58 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:39.624 03:48:58 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:39.624 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:39.624 03:48:58 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:16:39.624 03:48:58 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:16:39.624 03:48:58 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:39.624 03:48:58 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:39.624 03:48:58 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:16:39.624 03:48:58 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:16:39.624 03:48:58 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:16:39.624 03:48:58 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:16:39.624 03:48:58 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:39.624 03:48:58 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:39.624 03:48:58 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:16:39.624 03:48:58 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:39.624 03:48:58 -- nvmf/common.sh@388 -- # echo 'Found net devices under 
0000:0a:00.0: cvl_0_0' 00:16:39.624 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:39.624 03:48:58 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:39.624 03:48:58 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:39.624 03:48:58 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:39.624 03:48:58 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:16:39.624 03:48:58 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:39.624 03:48:58 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:39.624 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:39.624 03:48:58 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:39.624 03:48:58 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:16:39.624 03:48:58 -- nvmf/common.sh@402 -- # is_hw=yes 00:16:39.624 03:48:58 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:16:39.624 03:48:58 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:16:39.624 03:48:58 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:16:39.624 03:48:58 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:39.624 03:48:58 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:39.624 03:48:58 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:39.624 03:48:58 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:16:39.624 03:48:58 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:39.624 03:48:58 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:39.624 03:48:58 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:16:39.624 03:48:58 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:39.624 03:48:58 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:39.624 03:48:58 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:16:39.624 03:48:58 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:16:39.624 03:48:58 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:16:39.624 03:48:58 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:39.624 03:48:58 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:39.624 03:48:58 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:39.624 03:48:58 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:16:39.624 03:48:58 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:39.624 03:48:58 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:39.624 03:48:58 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:39.624 03:48:58 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:16:39.624 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:39.624 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.168 ms 00:16:39.624 00:16:39.624 --- 10.0.0.2 ping statistics --- 00:16:39.624 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:39.624 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms 00:16:39.624 03:48:58 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:39.882 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:39.882 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.125 ms 00:16:39.882 00:16:39.882 --- 10.0.0.1 ping statistics --- 00:16:39.882 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:39.882 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms 00:16:39.882 03:48:58 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:39.882 03:48:58 -- nvmf/common.sh@410 -- # return 0 00:16:39.882 03:48:58 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:39.882 03:48:58 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:39.882 03:48:58 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:39.882 03:48:58 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:39.882 03:48:58 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:39.882 03:48:58 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:39.882 03:48:58 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:39.882 03:48:58 -- target/host_management.sh@106 -- # run_test nvmf_host_management nvmf_host_management 00:16:39.882 03:48:58 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:16:39.882 03:48:58 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:39.882 03:48:58 -- common/autotest_common.sh@10 -- # set +x 00:16:39.882 ************************************ 00:16:39.882 START TEST nvmf_host_management 00:16:39.882 ************************************ 00:16:39.882 03:48:58 -- common/autotest_common.sh@1104 -- # nvmf_host_management 00:16:39.882 03:48:58 -- target/host_management.sh@69 -- # starttarget 00:16:39.882 03:48:58 -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:16:39.882 03:48:58 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:39.882 03:48:58 -- common/autotest_common.sh@712 -- # xtrace_disable 00:16:39.882 03:48:58 -- common/autotest_common.sh@10 -- # set +x 00:16:39.882 03:48:58 -- nvmf/common.sh@469 -- # nvmfpid=2366967 00:16:39.882 03:48:58 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:16:39.882 03:48:58 -- nvmf/common.sh@470 -- # waitforlisten 2366967 00:16:39.882 03:48:58 -- common/autotest_common.sh@819 -- # '[' -z 2366967 ']' 00:16:39.882 03:48:58 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:39.882 03:48:58 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:39.882 03:48:58 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:39.882 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:39.882 03:48:58 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:39.882 03:48:58 -- common/autotest_common.sh@10 -- # set +x 00:16:39.882 [2024-07-14 03:48:58.642691] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
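The nvmf_tcp_init sequence traced above splits the two cvl ports between the default namespace (initiator, 10.0.0.1 on cvl_0_1) and the cvl_0_0_ns_spdk namespace (target, 10.0.0.2 on cvl_0_0), opens TCP port 4420, and checks reachability with a ping in each direction. Condensed from this run:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                   # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator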
00:16:39.882 [2024-07-14 03:48:58.642794] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:39.882 EAL: No free 2048 kB hugepages reported on node 1 00:16:39.882 [2024-07-14 03:48:58.713560] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:39.882 [2024-07-14 03:48:58.804878] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:39.882 [2024-07-14 03:48:58.805051] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:39.882 [2024-07-14 03:48:58.805071] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:39.882 [2024-07-14 03:48:58.805087] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:39.882 [2024-07-14 03:48:58.805185] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:39.882 [2024-07-14 03:48:58.805281] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:39.882 [2024-07-14 03:48:58.805348] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:39.883 [2024-07-14 03:48:58.805345] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:16:40.818 03:48:59 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:40.818 03:48:59 -- common/autotest_common.sh@852 -- # return 0 00:16:40.818 03:48:59 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:40.818 03:48:59 -- common/autotest_common.sh@718 -- # xtrace_disable 00:16:40.818 03:48:59 -- common/autotest_common.sh@10 -- # set +x 00:16:40.818 03:48:59 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:40.818 03:48:59 -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:40.818 03:48:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:40.818 03:48:59 -- common/autotest_common.sh@10 -- # set +x 00:16:40.818 [2024-07-14 03:48:59.595412] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:40.818 03:48:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:40.818 03:48:59 -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:16:40.818 03:48:59 -- common/autotest_common.sh@712 -- # xtrace_disable 00:16:40.818 03:48:59 -- common/autotest_common.sh@10 -- # set +x 00:16:40.818 03:48:59 -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:16:40.818 03:48:59 -- target/host_management.sh@23 -- # cat 00:16:40.818 03:48:59 -- target/host_management.sh@30 -- # rpc_cmd 00:16:40.818 03:48:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:40.818 03:48:59 -- common/autotest_common.sh@10 -- # set +x 00:16:40.818 Malloc0 00:16:40.818 [2024-07-14 03:48:59.659273] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:40.818 03:48:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:40.818 03:48:59 -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:16:40.818 03:48:59 -- common/autotest_common.sh@718 -- # xtrace_disable 00:16:40.818 03:48:59 -- common/autotest_common.sh@10 -- # set +x 00:16:40.818 03:48:59 -- target/host_management.sh@73 -- # perfpid=2367108 00:16:40.818 03:48:59 -- target/host_management.sh@74 -- # 
waitforlisten 2367108 /var/tmp/bdevperf.sock 00:16:40.818 03:48:59 -- common/autotest_common.sh@819 -- # '[' -z 2367108 ']' 00:16:40.818 03:48:59 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:40.818 03:48:59 -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:16:40.819 03:48:59 -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:16:40.819 03:48:59 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:40.819 03:48:59 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:40.819 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:40.819 03:48:59 -- nvmf/common.sh@520 -- # config=() 00:16:40.819 03:48:59 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:40.819 03:48:59 -- nvmf/common.sh@520 -- # local subsystem config 00:16:40.819 03:48:59 -- common/autotest_common.sh@10 -- # set +x 00:16:40.819 03:48:59 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:16:40.819 03:48:59 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:16:40.819 { 00:16:40.819 "params": { 00:16:40.819 "name": "Nvme$subsystem", 00:16:40.819 "trtype": "$TEST_TRANSPORT", 00:16:40.819 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:40.819 "adrfam": "ipv4", 00:16:40.819 "trsvcid": "$NVMF_PORT", 00:16:40.819 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:40.819 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:40.819 "hdgst": ${hdgst:-false}, 00:16:40.819 "ddgst": ${ddgst:-false} 00:16:40.819 }, 00:16:40.819 "method": "bdev_nvme_attach_controller" 00:16:40.819 } 00:16:40.819 EOF 00:16:40.819 )") 00:16:40.819 03:48:59 -- nvmf/common.sh@542 -- # cat 00:16:40.819 03:48:59 -- nvmf/common.sh@544 -- # jq . 00:16:40.819 03:48:59 -- nvmf/common.sh@545 -- # IFS=, 00:16:40.819 03:48:59 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:16:40.819 "params": { 00:16:40.819 "name": "Nvme0", 00:16:40.819 "trtype": "tcp", 00:16:40.819 "traddr": "10.0.0.2", 00:16:40.819 "adrfam": "ipv4", 00:16:40.819 "trsvcid": "4420", 00:16:40.819 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:40.819 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:16:40.819 "hdgst": false, 00:16:40.819 "ddgst": false 00:16:40.819 }, 00:16:40.819 "method": "bdev_nvme_attach_controller" 00:16:40.819 }' 00:16:40.819 [2024-07-14 03:48:59.734449] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:16:40.819 [2024-07-14 03:48:59.734542] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2367108 ] 00:16:41.078 EAL: No free 2048 kB hugepages reported on node 1 00:16:41.078 [2024-07-14 03:48:59.798933] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:41.078 [2024-07-14 03:48:59.883653] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:41.336 Running I/O for 10 seconds... 
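The initiator side here is bdevperf driven by a JSON config generated on the fly and passed over /dev/fd/63; the trace shows only the bdev_nvme_attach_controller parameters, so the subsystem wrapper below is an assumed minimal hand-written equivalent of what gen_nvmf_target_json emits, saved to a hypothetical file instead of a file descriptor.

# hypothetical path; the script pipes the config via /dev/fd/63 instead
cat > /tmp/bdevperf_nvme.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
    -r /var/tmp/bdevperf.sock --json /tmp/bdevperf_nvme.json \
    -q 64 -o 65536 -w verify -t 10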
00:16:41.903 03:49:00 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:41.903 03:49:00 -- common/autotest_common.sh@852 -- # return 0 00:16:41.903 03:49:00 -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:16:41.903 03:49:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:41.903 03:49:00 -- common/autotest_common.sh@10 -- # set +x 00:16:41.903 03:49:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:41.903 03:49:00 -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:41.903 03:49:00 -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:16:41.903 03:49:00 -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:16:41.903 03:49:00 -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:16:41.903 03:49:00 -- target/host_management.sh@52 -- # local ret=1 00:16:41.903 03:49:00 -- target/host_management.sh@53 -- # local i 00:16:41.903 03:49:00 -- target/host_management.sh@54 -- # (( i = 10 )) 00:16:41.903 03:49:00 -- target/host_management.sh@54 -- # (( i != 0 )) 00:16:41.903 03:49:00 -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:16:41.903 03:49:00 -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:16:41.903 03:49:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:41.903 03:49:00 -- common/autotest_common.sh@10 -- # set +x 00:16:41.903 03:49:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:41.903 03:49:00 -- target/host_management.sh@55 -- # read_io_count=779 00:16:41.903 03:49:00 -- target/host_management.sh@58 -- # '[' 779 -ge 100 ']' 00:16:41.903 03:49:00 -- target/host_management.sh@59 -- # ret=0 00:16:41.903 03:49:00 -- target/host_management.sh@60 -- # break 00:16:41.903 03:49:00 -- target/host_management.sh@64 -- # return 0 00:16:41.903 03:49:00 -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:16:41.903 03:49:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:41.903 03:49:00 -- common/autotest_common.sh@10 -- # set +x 00:16:41.903 [2024-07-14 03:49:00.706987] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5faf0 is same with the state(5) to be set 00:16:41.903 [2024-07-14 03:49:00.707087] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5faf0 is same with the state(5) to be set 00:16:41.903 [2024-07-14 03:49:00.707111] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5faf0 is same with the state(5) to be set 00:16:41.903 [2024-07-14 03:49:00.707134] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5faf0 is same with the state(5) to be set 00:16:41.903 [2024-07-14 03:49:00.707148] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5faf0 is same with the state(5) to be set 00:16:41.903 [2024-07-14 03:49:00.707171] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5faf0 is same with the state(5) to be set 00:16:41.903 [2024-07-14 03:49:00.707183] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5faf0 is same with the state(5) to be set 00:16:41.903 [2024-07-14 03:49:00.707195] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5faf0 is same with the state(5) to 
be set 00:16:41.903 [2024-07-14 03:49:00.707207] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5faf0 is same with the state(5) to be set 00:16:41.903 [2024-07-14 03:49:00.707219] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5faf0 is same with the state(5) to be set 00:16:41.903 [2024-07-14 03:49:00.707236] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5faf0 is same with the state(5) to be set 00:16:41.903 [2024-07-14 03:49:00.707248] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5faf0 is same with the state(5) to be set 00:16:41.903 [2024-07-14 03:49:00.707260] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5faf0 is same with the state(5) to be set 00:16:41.903 [2024-07-14 03:49:00.707272] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5faf0 is same with the state(5) to be set 00:16:41.903 [2024-07-14 03:49:00.707286] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5faf0 is same with the state(5) to be set 00:16:41.903 [2024-07-14 03:49:00.707299] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5faf0 is same with the state(5) to be set 00:16:41.903 [2024-07-14 03:49:00.707311] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5faf0 is same with the state(5) to be set 00:16:41.903 [2024-07-14 03:49:00.707324] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5faf0 is same with the state(5) to be set 00:16:41.903 [2024-07-14 03:49:00.707336] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5faf0 is same with the state(5) to be set 00:16:41.903 [2024-07-14 03:49:00.707349] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5faf0 is same with the state(5) to be set 00:16:41.903 [2024-07-14 03:49:00.707362] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5faf0 is same with the state(5) to be set 00:16:41.903 [2024-07-14 03:49:00.707374] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5faf0 is same with the state(5) to be set 00:16:41.903 [2024-07-14 03:49:00.707387] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5faf0 is same with the state(5) to be set 00:16:41.903 [2024-07-14 03:49:00.707400] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5faf0 is same with the state(5) to be set 00:16:41.903 [2024-07-14 03:49:00.707413] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5faf0 is same with the state(5) to be set 00:16:41.903 [2024-07-14 03:49:00.707426] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5faf0 is same with the state(5) to be set 00:16:41.903 [2024-07-14 03:49:00.707438] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5faf0 is same with the state(5) to be set 00:16:41.903 [2024-07-14 03:49:00.707451] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5faf0 is same with the state(5) to be set 00:16:41.903 [2024-07-14 03:49:00.707463] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5faf0 is same with the state(5) to be set 00:16:41.903 [2024-07-14 03:49:00.707476] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0xe5faf0 is same with the state(5) to be set 00:16:41.903 [2024-07-14 03:49:00.707488] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5faf0 is same with the state(5) to be set 00:16:41.903 [2024-07-14 03:49:00.707504] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5faf0 is same with the state(5) to be set 00:16:41.903 [2024-07-14 03:49:00.707517] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5faf0 is same with the state(5) to be set 00:16:41.903 [2024-07-14 03:49:00.707530] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5faf0 is same with the state(5) to be set 00:16:41.903 [2024-07-14 03:49:00.707542] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5faf0 is same with the state(5) to be set 00:16:41.903 [2024-07-14 03:49:00.707555] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5faf0 is same with the state(5) to be set 00:16:41.903 [2024-07-14 03:49:00.707568] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5faf0 is same with the state(5) to be set 00:16:41.903 [2024-07-14 03:49:00.707580] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5faf0 is same with the state(5) to be set 00:16:41.903 [2024-07-14 03:49:00.707592] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5faf0 is same with the state(5) to be set 00:16:41.903 [2024-07-14 03:49:00.707605] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5faf0 is same with the state(5) to be set 00:16:41.903 [2024-07-14 03:49:00.707618] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5faf0 is same with the state(5) to be set 00:16:41.903 [2024-07-14 03:49:00.707631] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5faf0 is same with the state(5) to be set 00:16:41.903 [2024-07-14 03:49:00.707644] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5faf0 is same with the state(5) to be set 00:16:41.903 [2024-07-14 03:49:00.707656] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5faf0 is same with the state(5) to be set 00:16:41.903 [2024-07-14 03:49:00.707669] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5faf0 is same with the state(5) to be set 00:16:41.904 [2024-07-14 03:49:00.707682] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5faf0 is same with the state(5) to be set 00:16:41.904 [2024-07-14 03:49:00.707694] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5faf0 is same with the state(5) to be set 00:16:41.904 [2024-07-14 03:49:00.707706] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5faf0 is same with the state(5) to be set 00:16:41.904 [2024-07-14 03:49:00.707718] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5faf0 is same with the state(5) to be set 00:16:41.904 [2024-07-14 03:49:00.707730] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5faf0 is same with the state(5) to be set 00:16:41.904 [2024-07-14 03:49:00.707743] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5faf0 is same with the state(5) to be set 00:16:41.904 [2024-07-14 03:49:00.707755] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5faf0 is same with the state(5) to be set 00:16:41.904 [2024-07-14 03:49:00.707768] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5faf0 is same with the state(5) to be set 00:16:41.904 [2024-07-14 03:49:00.707780] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5faf0 is same with the state(5) to be set 00:16:41.904 [2024-07-14 03:49:00.707792] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5faf0 is same with the state(5) to be set 00:16:41.904 [2024-07-14 03:49:00.707805] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5faf0 is same with the state(5) to be set 00:16:41.904 [2024-07-14 03:49:00.707818] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5faf0 is same with the state(5) to be set 00:16:41.904 [2024-07-14 03:49:00.707830] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5faf0 is same with the state(5) to be set 00:16:41.904 [2024-07-14 03:49:00.707857] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5faf0 is same with the state(5) to be set 00:16:41.904 [2024-07-14 03:49:00.707879] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5faf0 is same with the state(5) to be set 00:16:41.904 [2024-07-14 03:49:00.707894] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5faf0 is same with the state(5) to be set 00:16:41.904 [2024-07-14 03:49:00.707908] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5faf0 is same with the state(5) to be set 00:16:41.904 [2024-07-14 03:49:00.707921] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5faf0 is same with the state(5) to be set 00:16:41.904 [2024-07-14 03:49:00.708496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:113408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.904 [2024-07-14 03:49:00.708538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.904 [2024-07-14 03:49:00.708574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:113536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.904 [2024-07-14 03:49:00.708591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.904 [2024-07-14 03:49:00.708609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:113664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.904 [2024-07-14 03:49:00.708625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.904 [2024-07-14 03:49:00.708641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:113792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.904 [2024-07-14 03:49:00.708657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.904 [2024-07-14 03:49:00.708674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:113920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.904 [2024-07-14 
03:49:00.708689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.904 [2024-07-14 03:49:00.708707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:114048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.904 [2024-07-14 03:49:00.708723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.904 [2024-07-14 03:49:00.708739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:114176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.904 [2024-07-14 03:49:00.708755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.904 [2024-07-14 03:49:00.708772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:107648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.904 [2024-07-14 03:49:00.708788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.904 [2024-07-14 03:49:00.708805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:108032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.904 [2024-07-14 03:49:00.708821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.904 [2024-07-14 03:49:00.708838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:108160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.904 [2024-07-14 03:49:00.708875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.904 [2024-07-14 03:49:00.708896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:114304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.904 [2024-07-14 03:49:00.708918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.904 [2024-07-14 03:49:00.708937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:114432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.904 [2024-07-14 03:49:00.708953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.904 [2024-07-14 03:49:00.708970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:108416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.904 [2024-07-14 03:49:00.708986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.904 [2024-07-14 03:49:00.709003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:108672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.904 [2024-07-14 03:49:00.709019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.904 [2024-07-14 03:49:00.709037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:114560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.904 [2024-07-14 
03:49:00.709053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.904 [2024-07-14 03:49:00.709070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:108800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.904 [2024-07-14 03:49:00.709086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.904 [2024-07-14 03:49:00.709103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:114688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.904 [2024-07-14 03:49:00.709118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.904 [2024-07-14 03:49:00.709135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:109056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.904 [2024-07-14 03:49:00.709151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.904 [2024-07-14 03:49:00.709178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:114816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.904 [2024-07-14 03:49:00.709193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.904 [2024-07-14 03:49:00.709211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:109184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.904 [2024-07-14 03:49:00.709226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.904 [2024-07-14 03:49:00.709243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:114944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.904 [2024-07-14 03:49:00.709259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.904 [2024-07-14 03:49:00.709276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:115072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.904 [2024-07-14 03:49:00.709291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.904 [2024-07-14 03:49:00.709308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:115200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.904 [2024-07-14 03:49:00.709327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.904 [2024-07-14 03:49:00.709344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:109696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.904 [2024-07-14 03:49:00.709361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.904 [2024-07-14 03:49:00.709378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:109952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.904 [2024-07-14 
03:49:00.709393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.904 [2024-07-14 03:49:00.709410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:110080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.904 [2024-07-14 03:49:00.709426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.904 [2024-07-14 03:49:00.709442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:115328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.904 [2024-07-14 03:49:00.709458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.904 [2024-07-14 03:49:00.709475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:110208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.904 [2024-07-14 03:49:00.709491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.904 [2024-07-14 03:49:00.709508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:115456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.904 [2024-07-14 03:49:00.709524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.904 [2024-07-14 03:49:00.709541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:115584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.904 [2024-07-14 03:49:00.709557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.904 [2024-07-14 03:49:00.709574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:115712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.904 [2024-07-14 03:49:00.709590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.904 [2024-07-14 03:49:00.709607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:110464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.904 [2024-07-14 03:49:00.709622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.904 [2024-07-14 03:49:00.709639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:110592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.904 [2024-07-14 03:49:00.709655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.904 [2024-07-14 03:49:00.709672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:115840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.905 [2024-07-14 03:49:00.709688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.905 [2024-07-14 03:49:00.709705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:110720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.905 [2024-07-14 
03:49:00.709721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.905 [2024-07-14 03:49:00.709741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:110848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.905 [2024-07-14 03:49:00.709758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.905 [2024-07-14 03:49:00.709775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:115968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.905 [2024-07-14 03:49:00.709791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.905 [2024-07-14 03:49:00.709808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:116096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.905 [2024-07-14 03:49:00.709824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.905 [2024-07-14 03:49:00.709841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:110976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.905 [2024-07-14 03:49:00.709872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.905 [2024-07-14 03:49:00.709891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:116224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.905 [2024-07-14 03:49:00.709907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.905 [2024-07-14 03:49:00.709924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:112384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.905 [2024-07-14 03:49:00.709940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.905 [2024-07-14 03:49:00.709958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:112512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.905 [2024-07-14 03:49:00.709974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.905 [2024-07-14 03:49:00.709991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:116352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.905 [2024-07-14 03:49:00.710007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.905 [2024-07-14 03:49:00.710024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:116480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.905 [2024-07-14 03:49:00.710040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.905 [2024-07-14 03:49:00.710057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:112640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.905 [2024-07-14 
03:49:00.710074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.905 [2024-07-14 03:49:00.710091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:116608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.905 [2024-07-14 03:49:00.710107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.905 [2024-07-14 03:49:00.710124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:116736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.905 [2024-07-14 03:49:00.710140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.905 [2024-07-14 03:49:00.710158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:116864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.905 [2024-07-14 03:49:00.710178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.905 [2024-07-14 03:49:00.710196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:116992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.905 [2024-07-14 03:49:00.710212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.905 [2024-07-14 03:49:00.710229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:117120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.905 [2024-07-14 03:49:00.710245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.905 [2024-07-14 03:49:00.710262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:117248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.905 [2024-07-14 03:49:00.710278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.905 [2024-07-14 03:49:00.710295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:117376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.905 [2024-07-14 03:49:00.710311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.905 [2024-07-14 03:49:00.710328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:117504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.905 [2024-07-14 03:49:00.710344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.905 [2024-07-14 03:49:00.710361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:117632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.905 [2024-07-14 03:49:00.710376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.905 [2024-07-14 03:49:00.710409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:117760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.905 [2024-07-14 
03:49:00.710424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.905 [2024-07-14 03:49:00.710440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:117888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.905 [2024-07-14 03:49:00.710455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.905 [2024-07-14 03:49:00.710471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:118016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.905 [2024-07-14 03:49:00.710486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.905 [2024-07-14 03:49:00.710502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:118144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.905 [2024-07-14 03:49:00.710518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.905 [2024-07-14 03:49:00.710535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:118272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.905 [2024-07-14 03:49:00.710550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.905 [2024-07-14 03:49:00.710566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:118400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.905 [2024-07-14 03:49:00.710581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.905 [2024-07-14 03:49:00.710601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:118528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.905 [2024-07-14 03:49:00.710617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.905 [2024-07-14 03:49:00.710634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:112896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.905 [2024-07-14 03:49:00.710649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.905 [2024-07-14 03:49:00.710666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:113152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.905 [2024-07-14 03:49:00.710681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.905 [2024-07-14 03:49:00.710697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:113280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.905 [2024-07-14 03:49:00.710713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.905 [2024-07-14 03:49:00.710729] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb84c00 is same with the state(5) to be set 00:16:41.905 [2024-07-14 03:49:00.710817] 
bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xb84c00 was disconnected and freed. reset controller. 00:16:41.905 03:49:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:41.905 03:49:00 -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:16:41.905 03:49:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:41.905 03:49:00 -- common/autotest_common.sh@10 -- # set +x 00:16:41.905 [2024-07-14 03:49:00.711986] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:16:41.905 task offset: 113408 on job bdev=Nvme0n1 fails 00:16:41.905 00:16:41.905 Latency(us) 00:16:41.905 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:41.905 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:41.905 Job: Nvme0n1 ended in about 0.44 seconds with error 00:16:41.905 Verification LBA range: start 0x0 length 0x400 00:16:41.905 Nvme0n1 : 0.44 1944.50 121.53 144.20 0.00 30220.12 3835.07 31651.46 00:16:41.905 =================================================================================================================== 00:16:41.905 Total : 1944.50 121.53 144.20 0.00 30220.12 3835.07 31651.46 00:16:41.905 [2024-07-14 03:49:00.713912] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:16:41.905 [2024-07-14 03:49:00.713943] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87030 (9): Bad file descriptor 00:16:41.905 03:49:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:41.905 03:49:00 -- target/host_management.sh@87 -- # sleep 1 00:16:41.905 [2024-07-14 03:49:00.735157] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:16:42.839 03:49:01 -- target/host_management.sh@91 -- # kill -9 2367108 00:16:42.839 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (2367108) - No such process 00:16:42.839 03:49:01 -- target/host_management.sh@91 -- # true 00:16:42.839 03:49:01 -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:16:42.839 03:49:01 -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:16:42.839 03:49:01 -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:16:42.839 03:49:01 -- nvmf/common.sh@520 -- # config=() 00:16:42.839 03:49:01 -- nvmf/common.sh@520 -- # local subsystem config 00:16:42.839 03:49:01 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:16:42.839 03:49:01 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:16:42.839 { 00:16:42.839 "params": { 00:16:42.839 "name": "Nvme$subsystem", 00:16:42.839 "trtype": "$TEST_TRANSPORT", 00:16:42.839 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:42.839 "adrfam": "ipv4", 00:16:42.839 "trsvcid": "$NVMF_PORT", 00:16:42.839 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:42.839 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:42.839 "hdgst": ${hdgst:-false}, 00:16:42.839 "ddgst": ${ddgst:-false} 00:16:42.839 }, 00:16:42.839 "method": "bdev_nvme_attach_controller" 00:16:42.839 } 00:16:42.839 EOF 00:16:42.839 )") 00:16:42.839 03:49:01 -- nvmf/common.sh@542 -- # cat 00:16:42.839 03:49:01 -- nvmf/common.sh@544 -- # jq . 
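The xtrace above shows gen_nvmf_target_json assembling a bdev_nvme_attach_controller entry on the fly and handing it to bdevperf through /dev/fd/62; the expanded JSON is printed just below. As a standalone sketch of the same configuration, assuming bdevperf's usual subsystems/config JSON layout (the /tmp/nvme0.json path is illustrative, while the address, NQNs and workload flags are the ones used in this run):

cat > /tmp/nvme0.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
# same 64-deep, 64 KiB verify workload as the run above, reading its config from a file
./build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /tmp/nvme0.json -q 64 -o 65536 -w verify -t 1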
00:16:42.839 03:49:01 -- nvmf/common.sh@545 -- # IFS=, 00:16:42.839 03:49:01 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:16:42.839 "params": { 00:16:42.839 "name": "Nvme0", 00:16:42.839 "trtype": "tcp", 00:16:42.839 "traddr": "10.0.0.2", 00:16:42.839 "adrfam": "ipv4", 00:16:42.839 "trsvcid": "4420", 00:16:42.839 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:42.839 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:16:42.839 "hdgst": false, 00:16:42.839 "ddgst": false 00:16:42.839 }, 00:16:42.839 "method": "bdev_nvme_attach_controller" 00:16:42.839 }' 00:16:42.839 [2024-07-14 03:49:01.766382] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:16:42.839 [2024-07-14 03:49:01.766470] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2367331 ] 00:16:43.097 EAL: No free 2048 kB hugepages reported on node 1 00:16:43.097 [2024-07-14 03:49:01.828394] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:43.097 [2024-07-14 03:49:01.911275] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:43.355 Running I/O for 1 seconds... 00:16:44.727 00:16:44.727 Latency(us) 00:16:44.728 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:44.728 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:44.728 Verification LBA range: start 0x0 length 0x400 00:16:44.728 Nvme0n1 : 1.02 2291.95 143.25 0.00 0.00 27545.00 4490.43 28738.75 00:16:44.728 =================================================================================================================== 00:16:44.728 Total : 2291.95 143.25 0.00 0.00 27545.00 4490.43 28738.75 00:16:44.728 03:49:03 -- target/host_management.sh@101 -- # stoptarget 00:16:44.728 03:49:03 -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:16:44.728 03:49:03 -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:16:44.728 03:49:03 -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:16:44.728 03:49:03 -- target/host_management.sh@40 -- # nvmftestfini 00:16:44.728 03:49:03 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:44.728 03:49:03 -- nvmf/common.sh@116 -- # sync 00:16:44.728 03:49:03 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:44.728 03:49:03 -- nvmf/common.sh@119 -- # set +e 00:16:44.728 03:49:03 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:44.728 03:49:03 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:16:44.728 rmmod nvme_tcp 00:16:44.728 rmmod nvme_fabrics 00:16:44.728 rmmod nvme_keyring 00:16:44.728 03:49:03 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:44.728 03:49:03 -- nvmf/common.sh@123 -- # set -e 00:16:44.728 03:49:03 -- nvmf/common.sh@124 -- # return 0 00:16:44.728 03:49:03 -- nvmf/common.sh@477 -- # '[' -n 2366967 ']' 00:16:44.728 03:49:03 -- nvmf/common.sh@478 -- # killprocess 2366967 00:16:44.728 03:49:03 -- common/autotest_common.sh@926 -- # '[' -z 2366967 ']' 00:16:44.728 03:49:03 -- common/autotest_common.sh@930 -- # kill -0 2366967 00:16:44.728 03:49:03 -- common/autotest_common.sh@931 -- # uname 00:16:44.728 03:49:03 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:44.728 03:49:03 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2366967 00:16:44.728 03:49:03 
-- common/autotest_common.sh@932 -- # process_name=reactor_1 00:16:44.728 03:49:03 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:16:44.728 03:49:03 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2366967' 00:16:44.728 killing process with pid 2366967 00:16:44.728 03:49:03 -- common/autotest_common.sh@945 -- # kill 2366967 00:16:44.728 03:49:03 -- common/autotest_common.sh@950 -- # wait 2366967 00:16:44.986 [2024-07-14 03:49:03.820392] app.c: 605:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:16:44.986 03:49:03 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:44.986 03:49:03 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:44.986 03:49:03 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:44.986 03:49:03 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:44.986 03:49:03 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:44.986 03:49:03 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:44.986 03:49:03 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:44.986 03:49:03 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:47.519 03:49:05 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:16:47.519 00:16:47.519 real 0m7.301s 00:16:47.519 user 0m22.602s 00:16:47.519 sys 0m1.362s 00:16:47.519 03:49:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:47.519 03:49:05 -- common/autotest_common.sh@10 -- # set +x 00:16:47.519 ************************************ 00:16:47.519 END TEST nvmf_host_management 00:16:47.519 ************************************ 00:16:47.519 03:49:05 -- target/host_management.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:16:47.519 00:16:47.519 real 0m9.506s 00:16:47.519 user 0m23.374s 00:16:47.519 sys 0m2.822s 00:16:47.519 03:49:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:47.519 03:49:05 -- common/autotest_common.sh@10 -- # set +x 00:16:47.519 ************************************ 00:16:47.519 END TEST nvmf_host_management 00:16:47.519 ************************************ 00:16:47.519 03:49:05 -- nvmf/nvmf.sh@47 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:16:47.519 03:49:05 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:16:47.519 03:49:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:47.519 03:49:05 -- common/autotest_common.sh@10 -- # set +x 00:16:47.519 ************************************ 00:16:47.519 START TEST nvmf_lvol 00:16:47.519 ************************************ 00:16:47.519 03:49:05 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:16:47.519 * Looking for test storage... 
00:16:47.519 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:47.519 03:49:05 -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:47.519 03:49:05 -- nvmf/common.sh@7 -- # uname -s 00:16:47.519 03:49:06 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:47.519 03:49:06 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:47.519 03:49:06 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:47.519 03:49:06 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:47.519 03:49:06 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:47.519 03:49:06 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:47.519 03:49:06 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:47.519 03:49:06 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:47.519 03:49:06 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:47.519 03:49:06 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:47.519 03:49:06 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:47.519 03:49:06 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:47.519 03:49:06 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:47.519 03:49:06 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:47.519 03:49:06 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:47.519 03:49:06 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:47.519 03:49:06 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:47.519 03:49:06 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:47.519 03:49:06 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:47.519 03:49:06 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:47.519 03:49:06 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:47.519 03:49:06 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:47.519 03:49:06 -- paths/export.sh@5 -- # export PATH 00:16:47.519 03:49:06 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:47.519 03:49:06 -- nvmf/common.sh@46 -- # : 0 00:16:47.519 03:49:06 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:47.519 03:49:06 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:47.519 03:49:06 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:47.519 03:49:06 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:47.519 03:49:06 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:47.519 03:49:06 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:47.519 03:49:06 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:47.519 03:49:06 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:47.519 03:49:06 -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:47.519 03:49:06 -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:47.519 03:49:06 -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:16:47.519 03:49:06 -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:16:47.519 03:49:06 -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:47.519 03:49:06 -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:16:47.519 03:49:06 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:16:47.519 03:49:06 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:47.519 03:49:06 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:47.519 03:49:06 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:47.519 03:49:06 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:47.519 03:49:06 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:47.519 03:49:06 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:47.519 03:49:06 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:47.519 03:49:06 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:16:47.519 03:49:06 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:16:47.519 03:49:06 -- nvmf/common.sh@284 -- # xtrace_disable 00:16:47.520 03:49:06 -- common/autotest_common.sh@10 -- # set +x 00:16:49.422 03:49:07 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:16:49.422 03:49:07 -- nvmf/common.sh@290 -- # pci_devs=() 00:16:49.422 03:49:07 -- nvmf/common.sh@290 -- # local -a pci_devs 00:16:49.422 03:49:07 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:16:49.422 03:49:07 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:16:49.422 03:49:07 
-- nvmf/common.sh@292 -- # pci_drivers=() 00:16:49.422 03:49:07 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:16:49.422 03:49:07 -- nvmf/common.sh@294 -- # net_devs=() 00:16:49.422 03:49:07 -- nvmf/common.sh@294 -- # local -ga net_devs 00:16:49.422 03:49:07 -- nvmf/common.sh@295 -- # e810=() 00:16:49.422 03:49:07 -- nvmf/common.sh@295 -- # local -ga e810 00:16:49.422 03:49:07 -- nvmf/common.sh@296 -- # x722=() 00:16:49.422 03:49:07 -- nvmf/common.sh@296 -- # local -ga x722 00:16:49.422 03:49:07 -- nvmf/common.sh@297 -- # mlx=() 00:16:49.422 03:49:07 -- nvmf/common.sh@297 -- # local -ga mlx 00:16:49.422 03:49:07 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:49.422 03:49:07 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:49.422 03:49:07 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:49.422 03:49:07 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:49.422 03:49:07 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:49.422 03:49:07 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:49.422 03:49:07 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:49.422 03:49:07 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:49.422 03:49:07 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:49.422 03:49:07 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:49.422 03:49:07 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:49.422 03:49:07 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:16:49.422 03:49:07 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:16:49.422 03:49:07 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:16:49.422 03:49:07 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:16:49.422 03:49:07 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:16:49.422 03:49:07 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:16:49.422 03:49:07 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:49.422 03:49:07 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:49.422 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:49.422 03:49:07 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:16:49.422 03:49:07 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:16:49.422 03:49:07 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:49.422 03:49:07 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:49.422 03:49:07 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:16:49.422 03:49:07 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:49.422 03:49:07 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:49.422 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:49.422 03:49:07 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:16:49.422 03:49:07 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:16:49.422 03:49:07 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:49.422 03:49:07 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:49.422 03:49:07 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:16:49.422 03:49:07 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:16:49.422 03:49:07 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:16:49.422 03:49:07 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:16:49.422 03:49:07 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:49.422 03:49:07 -- nvmf/common.sh@382 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:49.422 03:49:07 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:16:49.422 03:49:07 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:49.422 03:49:07 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:49.422 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:49.422 03:49:07 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:49.422 03:49:07 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:49.422 03:49:07 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:49.422 03:49:07 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:16:49.422 03:49:07 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:49.422 03:49:07 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:49.422 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:49.422 03:49:07 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:49.422 03:49:07 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:16:49.422 03:49:07 -- nvmf/common.sh@402 -- # is_hw=yes 00:16:49.422 03:49:07 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:16:49.422 03:49:07 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:16:49.422 03:49:07 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:16:49.422 03:49:07 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:49.422 03:49:07 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:49.422 03:49:07 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:49.422 03:49:07 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:16:49.422 03:49:07 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:49.422 03:49:07 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:49.422 03:49:07 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:16:49.422 03:49:07 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:49.422 03:49:07 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:49.422 03:49:07 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:16:49.422 03:49:07 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:16:49.422 03:49:07 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:16:49.422 03:49:07 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:49.422 03:49:08 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:49.422 03:49:08 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:49.422 03:49:08 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:16:49.422 03:49:08 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:49.422 03:49:08 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:49.422 03:49:08 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:49.422 03:49:08 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:16:49.422 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:49.422 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.264 ms 00:16:49.422 00:16:49.422 --- 10.0.0.2 ping statistics --- 00:16:49.422 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:49.422 rtt min/avg/max/mdev = 0.264/0.264/0.264/0.000 ms 00:16:49.422 03:49:08 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:49.423 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:49.423 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.140 ms 00:16:49.423 00:16:49.423 --- 10.0.0.1 ping statistics --- 00:16:49.423 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:49.423 rtt min/avg/max/mdev = 0.140/0.140/0.140/0.000 ms 00:16:49.423 03:49:08 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:49.423 03:49:08 -- nvmf/common.sh@410 -- # return 0 00:16:49.423 03:49:08 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:49.423 03:49:08 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:49.423 03:49:08 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:49.423 03:49:08 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:49.423 03:49:08 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:49.423 03:49:08 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:49.423 03:49:08 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:49.423 03:49:08 -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:16:49.423 03:49:08 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:49.423 03:49:08 -- common/autotest_common.sh@712 -- # xtrace_disable 00:16:49.423 03:49:08 -- common/autotest_common.sh@10 -- # set +x 00:16:49.423 03:49:08 -- nvmf/common.sh@469 -- # nvmfpid=2369567 00:16:49.423 03:49:08 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:16:49.423 03:49:08 -- nvmf/common.sh@470 -- # waitforlisten 2369567 00:16:49.423 03:49:08 -- common/autotest_common.sh@819 -- # '[' -z 2369567 ']' 00:16:49.423 03:49:08 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:49.423 03:49:08 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:49.423 03:49:08 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:49.423 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:49.423 03:49:08 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:49.423 03:49:08 -- common/autotest_common.sh@10 -- # set +x 00:16:49.423 [2024-07-14 03:49:08.176714] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:16:49.423 [2024-07-14 03:49:08.176790] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:49.423 EAL: No free 2048 kB hugepages reported on node 1 00:16:49.423 [2024-07-14 03:49:08.243244] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:49.423 [2024-07-14 03:49:08.330244] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:49.423 [2024-07-14 03:49:08.330398] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:49.423 [2024-07-14 03:49:08.330422] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:49.423 [2024-07-14 03:49:08.330435] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
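The nvmf_tcp_init / nvmfappstart trace above moves one e810 port into a private namespace for the target (cvl_0_0, 10.0.0.2) while the initiator keeps the other port (cvl_0_1, 10.0.0.1) in the root namespace, then starts nvmf_tgt inside that namespace on three cores. Condensed into plain commands (paths shortened), with a simple rpc.py poll standing in for the waitforlisten helper, the bring-up is roughly:

# split the two ports across a namespace so target and initiator talk over a real link
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic back in
ping -c 1 10.0.0.2                                             # sanity-check the path first

# start the target inside the namespace and wait for its RPC socket to answer
modprobe nvme-tcp
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 &
nvmfpid=$!
./scripts/rpc.py -s /var/tmp/spdk.sock -t 30 rpc_get_methods > /dev/null   # stand-in for waitforlisten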
00:16:49.423 [2024-07-14 03:49:08.330526] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:49.423 [2024-07-14 03:49:08.332888] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:49.423 [2024-07-14 03:49:08.332899] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:50.358 03:49:09 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:50.358 03:49:09 -- common/autotest_common.sh@852 -- # return 0 00:16:50.358 03:49:09 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:50.358 03:49:09 -- common/autotest_common.sh@718 -- # xtrace_disable 00:16:50.358 03:49:09 -- common/autotest_common.sh@10 -- # set +x 00:16:50.358 03:49:09 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:50.358 03:49:09 -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:50.615 [2024-07-14 03:49:09.375653] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:50.615 03:49:09 -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:50.873 03:49:09 -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:16:50.873 03:49:09 -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:51.132 03:49:09 -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:16:51.132 03:49:09 -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:16:51.389 03:49:10 -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:16:51.647 03:49:10 -- target/nvmf_lvol.sh@29 -- # lvs=e749b879-a2ad-4c7f-a448-b6699eb8e45b 00:16:51.647 03:49:10 -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u e749b879-a2ad-4c7f-a448-b6699eb8e45b lvol 20 00:16:51.905 03:49:10 -- target/nvmf_lvol.sh@32 -- # lvol=dc26000f-9d9b-4799-bda4-d87e1e3c0b9d 00:16:51.905 03:49:10 -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:16:52.162 03:49:10 -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 dc26000f-9d9b-4799-bda4-d87e1e3c0b9d 00:16:52.420 03:49:11 -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:16:52.677 [2024-07-14 03:49:11.367358] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:52.677 03:49:11 -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:52.935 03:49:11 -- target/nvmf_lvol.sh@42 -- # perf_pid=2370006 00:16:52.935 03:49:11 -- target/nvmf_lvol.sh@44 -- # sleep 1 00:16:52.935 03:49:11 -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:16:52.935 EAL: No free 2048 kB hugepages reported on node 1 00:16:53.870 
03:49:12 -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot dc26000f-9d9b-4799-bda4-d87e1e3c0b9d MY_SNAPSHOT 00:16:54.128 03:49:12 -- target/nvmf_lvol.sh@47 -- # snapshot=e8f3f6ce-6811-4106-adf3-11877ddd9f81 00:16:54.128 03:49:12 -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize dc26000f-9d9b-4799-bda4-d87e1e3c0b9d 30 00:16:54.414 03:49:13 -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone e8f3f6ce-6811-4106-adf3-11877ddd9f81 MY_CLONE 00:16:54.672 03:49:13 -- target/nvmf_lvol.sh@49 -- # clone=6b1eb26a-db64-4c17-8870-d855a1c0da8d 00:16:54.672 03:49:13 -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 6b1eb26a-db64-4c17-8870-d855a1c0da8d 00:16:55.236 03:49:13 -- target/nvmf_lvol.sh@53 -- # wait 2370006 00:17:03.341 Initializing NVMe Controllers 00:17:03.341 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:17:03.341 Controller IO queue size 128, less than required. 00:17:03.341 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:03.341 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:17:03.341 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:17:03.341 Initialization complete. Launching workers. 00:17:03.341 ======================================================== 00:17:03.341 Latency(us) 00:17:03.341 Device Information : IOPS MiB/s Average min max 00:17:03.341 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 11053.70 43.18 11582.97 1735.95 85364.28 00:17:03.341 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10956.20 42.80 11685.97 2201.36 65981.49 00:17:03.341 ======================================================== 00:17:03.341 Total : 22009.90 85.98 11634.24 1735.95 85364.28 00:17:03.341 00:17:03.341 03:49:21 -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:17:03.341 03:49:22 -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete dc26000f-9d9b-4799-bda4-d87e1e3c0b9d 00:17:03.598 03:49:22 -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u e749b879-a2ad-4c7f-a448-b6699eb8e45b 00:17:03.856 03:49:22 -- target/nvmf_lvol.sh@60 -- # rm -f 00:17:03.856 03:49:22 -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:17:03.856 03:49:22 -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:17:03.856 03:49:22 -- nvmf/common.sh@476 -- # nvmfcleanup 00:17:03.856 03:49:22 -- nvmf/common.sh@116 -- # sync 00:17:03.856 03:49:22 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:17:03.856 03:49:22 -- nvmf/common.sh@119 -- # set +e 00:17:03.856 03:49:22 -- nvmf/common.sh@120 -- # for i in {1..20} 00:17:03.856 03:49:22 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:17:03.856 rmmod nvme_tcp 00:17:03.856 rmmod nvme_fabrics 00:17:03.856 rmmod nvme_keyring 00:17:03.856 03:49:22 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:17:03.856 03:49:22 -- nvmf/common.sh@123 -- # set -e 00:17:03.856 03:49:22 -- nvmf/common.sh@124 -- # return 0 00:17:03.856 03:49:22 -- nvmf/common.sh@477 -- # '[' -n 2369567 ']' 
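Taken together, the nvmf_lvol body above drives one lvstore through its whole life cycle while spdk_nvme_perf writes to it over NVMe/TCP, before the teardown that follows here. The same flow as plain rpc.py calls (rpc.py stands for the full scripts/rpc.py path used above; the <...> UUIDs are placeholders for the values each call returns):

rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py bdev_malloc_create 64 512                                  # -> Malloc0
rpc.py bdev_malloc_create 64 512                                  # -> Malloc1
rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'  # stripe the two malloc bdevs
rpc.py bdev_lvol_create_lvstore raid0 lvs                         # -> <lvs_uuid>
rpc.py bdev_lvol_create -u <lvs_uuid> lvol 20                     # -> <lvol_uuid>
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 <lvol_uuid>
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
# while the perf job runs against the exported namespace:
rpc.py bdev_lvol_snapshot <lvol_uuid> MY_SNAPSHOT                 # -> <snap_uuid>
rpc.py bdev_lvol_resize   <lvol_uuid> 30                          # grow the live lvol under I/O
rpc.py bdev_lvol_clone    <snap_uuid> MY_CLONE                    # -> <clone_uuid>
rpc.py bdev_lvol_inflate  <clone_uuid>                            # decouple the clone from its snapshot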
00:17:03.856 03:49:22 -- nvmf/common.sh@478 -- # killprocess 2369567 00:17:03.856 03:49:22 -- common/autotest_common.sh@926 -- # '[' -z 2369567 ']' 00:17:03.856 03:49:22 -- common/autotest_common.sh@930 -- # kill -0 2369567 00:17:03.856 03:49:22 -- common/autotest_common.sh@931 -- # uname 00:17:03.856 03:49:22 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:03.856 03:49:22 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2369567 00:17:03.856 03:49:22 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:17:03.857 03:49:22 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:17:03.857 03:49:22 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2369567' 00:17:03.857 killing process with pid 2369567 00:17:03.857 03:49:22 -- common/autotest_common.sh@945 -- # kill 2369567 00:17:03.857 03:49:22 -- common/autotest_common.sh@950 -- # wait 2369567 00:17:04.428 03:49:23 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:17:04.429 03:49:23 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:17:04.429 03:49:23 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:17:04.429 03:49:23 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:04.429 03:49:23 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:17:04.429 03:49:23 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:04.429 03:49:23 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:04.429 03:49:23 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:06.332 03:49:25 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:17:06.332 00:17:06.332 real 0m19.178s 00:17:06.332 user 1m5.266s 00:17:06.332 sys 0m5.609s 00:17:06.332 03:49:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:06.332 03:49:25 -- common/autotest_common.sh@10 -- # set +x 00:17:06.332 ************************************ 00:17:06.332 END TEST nvmf_lvol 00:17:06.332 ************************************ 00:17:06.332 03:49:25 -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:17:06.332 03:49:25 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:17:06.332 03:49:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:06.332 03:49:25 -- common/autotest_common.sh@10 -- # set +x 00:17:06.332 ************************************ 00:17:06.332 START TEST nvmf_lvs_grow 00:17:06.332 ************************************ 00:17:06.332 03:49:25 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:17:06.332 * Looking for test storage... 
00:17:06.332 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:06.332 03:49:25 -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:06.332 03:49:25 -- nvmf/common.sh@7 -- # uname -s 00:17:06.332 03:49:25 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:06.332 03:49:25 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:06.332 03:49:25 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:06.332 03:49:25 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:06.332 03:49:25 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:06.332 03:49:25 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:06.332 03:49:25 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:06.332 03:49:25 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:06.332 03:49:25 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:06.332 03:49:25 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:06.332 03:49:25 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:06.332 03:49:25 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:06.332 03:49:25 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:06.332 03:49:25 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:06.332 03:49:25 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:06.333 03:49:25 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:06.333 03:49:25 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:06.333 03:49:25 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:06.333 03:49:25 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:06.333 03:49:25 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:06.333 03:49:25 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:06.333 03:49:25 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:06.333 03:49:25 -- paths/export.sh@5 -- # export PATH 00:17:06.333 03:49:25 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:06.333 03:49:25 -- nvmf/common.sh@46 -- # : 0 00:17:06.333 03:49:25 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:17:06.333 03:49:25 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:17:06.333 03:49:25 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:17:06.333 03:49:25 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:06.333 03:49:25 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:06.333 03:49:25 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:17:06.333 03:49:25 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:17:06.333 03:49:25 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:17:06.333 03:49:25 -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:06.333 03:49:25 -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:06.333 03:49:25 -- target/nvmf_lvs_grow.sh@97 -- # nvmftestinit 00:17:06.333 03:49:25 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:17:06.333 03:49:25 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:06.333 03:49:25 -- nvmf/common.sh@436 -- # prepare_net_devs 00:17:06.333 03:49:25 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:17:06.333 03:49:25 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:17:06.333 03:49:25 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:06.333 03:49:25 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:06.333 03:49:25 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:06.333 03:49:25 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:17:06.333 03:49:25 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:17:06.333 03:49:25 -- nvmf/common.sh@284 -- # xtrace_disable 00:17:06.333 03:49:25 -- common/autotest_common.sh@10 -- # set +x 00:17:08.861 03:49:27 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:17:08.861 03:49:27 -- nvmf/common.sh@290 -- # pci_devs=() 00:17:08.861 03:49:27 -- nvmf/common.sh@290 -- # local -a pci_devs 00:17:08.861 03:49:27 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:17:08.861 03:49:27 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:17:08.861 03:49:27 -- nvmf/common.sh@292 -- # pci_drivers=() 00:17:08.861 03:49:27 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:17:08.861 03:49:27 -- nvmf/common.sh@294 -- # net_devs=() 00:17:08.861 03:49:27 
-- nvmf/common.sh@294 -- # local -ga net_devs 00:17:08.861 03:49:27 -- nvmf/common.sh@295 -- # e810=() 00:17:08.861 03:49:27 -- nvmf/common.sh@295 -- # local -ga e810 00:17:08.861 03:49:27 -- nvmf/common.sh@296 -- # x722=() 00:17:08.861 03:49:27 -- nvmf/common.sh@296 -- # local -ga x722 00:17:08.861 03:49:27 -- nvmf/common.sh@297 -- # mlx=() 00:17:08.861 03:49:27 -- nvmf/common.sh@297 -- # local -ga mlx 00:17:08.861 03:49:27 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:08.861 03:49:27 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:08.861 03:49:27 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:08.861 03:49:27 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:08.861 03:49:27 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:08.861 03:49:27 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:08.861 03:49:27 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:08.861 03:49:27 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:08.861 03:49:27 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:08.861 03:49:27 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:08.861 03:49:27 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:08.861 03:49:27 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:17:08.861 03:49:27 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:17:08.861 03:49:27 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:17:08.861 03:49:27 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:17:08.861 03:49:27 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:17:08.861 03:49:27 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:17:08.861 03:49:27 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:08.861 03:49:27 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:08.861 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:08.861 03:49:27 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:17:08.861 03:49:27 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:17:08.861 03:49:27 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:08.861 03:49:27 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:08.861 03:49:27 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:17:08.861 03:49:27 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:08.861 03:49:27 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:08.861 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:08.861 03:49:27 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:17:08.861 03:49:27 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:17:08.861 03:49:27 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:08.861 03:49:27 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:08.861 03:49:27 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:17:08.861 03:49:27 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:17:08.861 03:49:27 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:17:08.861 03:49:27 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:17:08.861 03:49:27 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:08.861 03:49:27 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:08.861 03:49:27 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:08.861 03:49:27 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:08.861 03:49:27 -- 
nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:08.861 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:08.861 03:49:27 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:08.861 03:49:27 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:08.861 03:49:27 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:08.861 03:49:27 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:08.861 03:49:27 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:08.861 03:49:27 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:08.861 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:08.861 03:49:27 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:08.862 03:49:27 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:17:08.862 03:49:27 -- nvmf/common.sh@402 -- # is_hw=yes 00:17:08.862 03:49:27 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:17:08.862 03:49:27 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:17:08.862 03:49:27 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:17:08.862 03:49:27 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:08.862 03:49:27 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:08.862 03:49:27 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:08.862 03:49:27 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:17:08.862 03:49:27 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:08.862 03:49:27 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:08.862 03:49:27 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:17:08.862 03:49:27 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:08.862 03:49:27 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:08.862 03:49:27 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:17:08.862 03:49:27 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:17:08.862 03:49:27 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:17:08.862 03:49:27 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:08.862 03:49:27 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:08.862 03:49:27 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:08.862 03:49:27 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:17:08.862 03:49:27 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:08.862 03:49:27 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:08.862 03:49:27 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:08.862 03:49:27 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:17:08.862 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:08.862 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.259 ms 00:17:08.862 00:17:08.862 --- 10.0.0.2 ping statistics --- 00:17:08.862 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:08.862 rtt min/avg/max/mdev = 0.259/0.259/0.259/0.000 ms 00:17:08.862 03:49:27 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:08.862 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:08.862 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.197 ms 00:17:08.862 00:17:08.862 --- 10.0.0.1 ping statistics --- 00:17:08.862 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:08.862 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:17:08.862 03:49:27 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:08.862 03:49:27 -- nvmf/common.sh@410 -- # return 0 00:17:08.862 03:49:27 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:17:08.862 03:49:27 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:08.862 03:49:27 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:17:08.862 03:49:27 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:17:08.862 03:49:27 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:08.862 03:49:27 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:17:08.862 03:49:27 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:17:08.862 03:49:27 -- target/nvmf_lvs_grow.sh@98 -- # nvmfappstart -m 0x1 00:17:08.862 03:49:27 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:08.862 03:49:27 -- common/autotest_common.sh@712 -- # xtrace_disable 00:17:08.862 03:49:27 -- common/autotest_common.sh@10 -- # set +x 00:17:08.862 03:49:27 -- nvmf/common.sh@469 -- # nvmfpid=2373321 00:17:08.862 03:49:27 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:17:08.862 03:49:27 -- nvmf/common.sh@470 -- # waitforlisten 2373321 00:17:08.862 03:49:27 -- common/autotest_common.sh@819 -- # '[' -z 2373321 ']' 00:17:08.862 03:49:27 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:08.862 03:49:27 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:08.862 03:49:27 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:08.862 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:08.862 03:49:27 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:08.862 03:49:27 -- common/autotest_common.sh@10 -- # set +x 00:17:08.862 [2024-07-14 03:49:27.418008] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:17:08.862 [2024-07-14 03:49:27.418080] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:08.862 EAL: No free 2048 kB hugepages reported on node 1 00:17:08.862 [2024-07-14 03:49:27.481393] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:08.862 [2024-07-14 03:49:27.565480] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:08.862 [2024-07-14 03:49:27.565625] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:08.862 [2024-07-14 03:49:27.565641] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:08.862 [2024-07-14 03:49:27.565653] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:08.862 [2024-07-14 03:49:27.565690] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:09.794 03:49:28 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:09.794 03:49:28 -- common/autotest_common.sh@852 -- # return 0 00:17:09.794 03:49:28 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:09.794 03:49:28 -- common/autotest_common.sh@718 -- # xtrace_disable 00:17:09.794 03:49:28 -- common/autotest_common.sh@10 -- # set +x 00:17:09.794 03:49:28 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:09.794 03:49:28 -- target/nvmf_lvs_grow.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:09.794 [2024-07-14 03:49:28.631931] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:09.794 03:49:28 -- target/nvmf_lvs_grow.sh@101 -- # run_test lvs_grow_clean lvs_grow 00:17:09.794 03:49:28 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:17:09.794 03:49:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:09.794 03:49:28 -- common/autotest_common.sh@10 -- # set +x 00:17:09.794 ************************************ 00:17:09.794 START TEST lvs_grow_clean 00:17:09.794 ************************************ 00:17:09.794 03:49:28 -- common/autotest_common.sh@1104 -- # lvs_grow 00:17:09.794 03:49:28 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:17:09.794 03:49:28 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:17:09.794 03:49:28 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:17:09.794 03:49:28 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:17:09.794 03:49:28 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:17:09.794 03:49:28 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:17:09.794 03:49:28 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:09.794 03:49:28 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:09.794 03:49:28 -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:10.051 03:49:28 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:17:10.051 03:49:28 -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:17:10.309 03:49:29 -- target/nvmf_lvs_grow.sh@28 -- # lvs=43bcc94f-1fcb-43dd-bb27-0714e9eabf29 00:17:10.309 03:49:29 -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 43bcc94f-1fcb-43dd-bb27-0714e9eabf29 00:17:10.309 03:49:29 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:17:10.567 03:49:29 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:17:10.567 03:49:29 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:17:10.567 03:49:29 -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 43bcc94f-1fcb-43dd-bb27-0714e9eabf29 lvol 150 00:17:10.825 03:49:29 -- target/nvmf_lvs_grow.sh@33 -- # lvol=e22bf10c-531a-4d85-bef9-a4af69b1f9b4 00:17:10.825 03:49:29 -- 
target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:10.825 03:49:29 -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:17:11.082 [2024-07-14 03:49:29.859999] bdev_aio.c: 959:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:17:11.082 [2024-07-14 03:49:29.860077] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:17:11.082 true 00:17:11.082 03:49:29 -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 43bcc94f-1fcb-43dd-bb27-0714e9eabf29 00:17:11.082 03:49:29 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:17:11.339 03:49:30 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:17:11.339 03:49:30 -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:17:11.597 03:49:30 -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 e22bf10c-531a-4d85-bef9-a4af69b1f9b4 00:17:11.854 03:49:30 -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:17:12.111 [2024-07-14 03:49:30.818991] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:12.111 03:49:30 -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:12.369 03:49:31 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2373774 00:17:12.369 03:49:31 -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:17:12.369 03:49:31 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:12.369 03:49:31 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2373774 /var/tmp/bdevperf.sock 00:17:12.369 03:49:31 -- common/autotest_common.sh@819 -- # '[' -z 2373774 ']' 00:17:12.369 03:49:31 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:12.369 03:49:31 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:12.369 03:49:31 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:12.369 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:12.369 03:49:31 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:12.369 03:49:31 -- common/autotest_common.sh@10 -- # set +x 00:17:12.369 [2024-07-14 03:49:31.113224] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:17:12.369 [2024-07-14 03:49:31.113293] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2373774 ] 00:17:12.369 EAL: No free 2048 kB hugepages reported on node 1 00:17:12.369 [2024-07-14 03:49:31.175207] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:12.369 [2024-07-14 03:49:31.265762] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:13.302 03:49:32 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:13.302 03:49:32 -- common/autotest_common.sh@852 -- # return 0 00:17:13.302 03:49:32 -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:17:13.560 Nvme0n1 00:17:13.560 03:49:32 -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:17:13.817 [ 00:17:13.817 { 00:17:13.817 "name": "Nvme0n1", 00:17:13.817 "aliases": [ 00:17:13.817 "e22bf10c-531a-4d85-bef9-a4af69b1f9b4" 00:17:13.817 ], 00:17:13.817 "product_name": "NVMe disk", 00:17:13.817 "block_size": 4096, 00:17:13.817 "num_blocks": 38912, 00:17:13.817 "uuid": "e22bf10c-531a-4d85-bef9-a4af69b1f9b4", 00:17:13.817 "assigned_rate_limits": { 00:17:13.817 "rw_ios_per_sec": 0, 00:17:13.817 "rw_mbytes_per_sec": 0, 00:17:13.817 "r_mbytes_per_sec": 0, 00:17:13.817 "w_mbytes_per_sec": 0 00:17:13.817 }, 00:17:13.817 "claimed": false, 00:17:13.817 "zoned": false, 00:17:13.817 "supported_io_types": { 00:17:13.817 "read": true, 00:17:13.817 "write": true, 00:17:13.817 "unmap": true, 00:17:13.817 "write_zeroes": true, 00:17:13.817 "flush": true, 00:17:13.817 "reset": true, 00:17:13.817 "compare": true, 00:17:13.817 "compare_and_write": true, 00:17:13.817 "abort": true, 00:17:13.817 "nvme_admin": true, 00:17:13.817 "nvme_io": true 00:17:13.817 }, 00:17:13.817 "driver_specific": { 00:17:13.817 "nvme": [ 00:17:13.817 { 00:17:13.817 "trid": { 00:17:13.817 "trtype": "TCP", 00:17:13.817 "adrfam": "IPv4", 00:17:13.817 "traddr": "10.0.0.2", 00:17:13.817 "trsvcid": "4420", 00:17:13.817 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:17:13.817 }, 00:17:13.817 "ctrlr_data": { 00:17:13.817 "cntlid": 1, 00:17:13.817 "vendor_id": "0x8086", 00:17:13.817 "model_number": "SPDK bdev Controller", 00:17:13.817 "serial_number": "SPDK0", 00:17:13.817 "firmware_revision": "24.01.1", 00:17:13.817 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:13.817 "oacs": { 00:17:13.817 "security": 0, 00:17:13.817 "format": 0, 00:17:13.817 "firmware": 0, 00:17:13.817 "ns_manage": 0 00:17:13.817 }, 00:17:13.817 "multi_ctrlr": true, 00:17:13.817 "ana_reporting": false 00:17:13.817 }, 00:17:13.817 "vs": { 00:17:13.817 "nvme_version": "1.3" 00:17:13.817 }, 00:17:13.817 "ns_data": { 00:17:13.817 "id": 1, 00:17:13.817 "can_share": true 00:17:13.817 } 00:17:13.817 } 00:17:13.817 ], 00:17:13.817 "mp_policy": "active_passive" 00:17:13.817 } 00:17:13.817 } 00:17:13.817 ] 00:17:13.817 03:49:32 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2374041 00:17:13.817 03:49:32 -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:13.817 03:49:32 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:17:14.076 Running I/O 
for 10 seconds... 00:17:15.010 Latency(us) 00:17:15.010 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:15.010 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:15.010 Nvme0n1 : 1.00 14701.00 57.43 0.00 0.00 0.00 0.00 0.00 00:17:15.010 =================================================================================================================== 00:17:15.010 Total : 14701.00 57.43 0.00 0.00 0.00 0.00 0.00 00:17:15.010 00:17:16.015 03:49:34 -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 43bcc94f-1fcb-43dd-bb27-0714e9eabf29 00:17:16.015 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:16.015 Nvme0n1 : 2.00 14736.00 57.56 0.00 0.00 0.00 0.00 0.00 00:17:16.015 =================================================================================================================== 00:17:16.015 Total : 14736.00 57.56 0.00 0.00 0.00 0.00 0.00 00:17:16.015 00:17:16.300 true 00:17:16.300 03:49:34 -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 43bcc94f-1fcb-43dd-bb27-0714e9eabf29 00:17:16.300 03:49:34 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:17:16.300 03:49:35 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:17:16.300 03:49:35 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:17:16.300 03:49:35 -- target/nvmf_lvs_grow.sh@65 -- # wait 2374041 00:17:16.868 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:16.868 Nvme0n1 : 3.00 14793.33 57.79 0.00 0.00 0.00 0.00 0.00 00:17:16.868 =================================================================================================================== 00:17:16.868 Total : 14793.33 57.79 0.00 0.00 0.00 0.00 0.00 00:17:16.868 00:17:18.246 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:18.246 Nvme0n1 : 4.00 14851.75 58.01 0.00 0.00 0.00 0.00 0.00 00:17:18.246 =================================================================================================================== 00:17:18.246 Total : 14851.75 58.01 0.00 0.00 0.00 0.00 0.00 00:17:18.246 00:17:19.181 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:19.181 Nvme0n1 : 5.00 14892.00 58.17 0.00 0.00 0.00 0.00 0.00 00:17:19.181 =================================================================================================================== 00:17:19.181 Total : 14892.00 58.17 0.00 0.00 0.00 0.00 0.00 00:17:19.181 00:17:20.124 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:20.124 Nvme0n1 : 6.00 14927.33 58.31 0.00 0.00 0.00 0.00 0.00 00:17:20.124 =================================================================================================================== 00:17:20.124 Total : 14927.33 58.31 0.00 0.00 0.00 0.00 0.00 00:17:20.124 00:17:21.058 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:21.058 Nvme0n1 : 7.00 14959.86 58.44 0.00 0.00 0.00 0.00 0.00 00:17:21.058 =================================================================================================================== 00:17:21.058 Total : 14959.86 58.44 0.00 0.00 0.00 0.00 0.00 00:17:21.058 00:17:21.992 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:21.993 Nvme0n1 : 8.00 14980.00 58.52 0.00 0.00 0.00 0.00 0.00 00:17:21.993 
=================================================================================================================== 00:17:21.993 Total : 14980.00 58.52 0.00 0.00 0.00 0.00 0.00 00:17:21.993 00:17:22.926 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:22.926 Nvme0n1 : 9.00 15006.11 58.62 0.00 0.00 0.00 0.00 0.00 00:17:22.926 =================================================================================================================== 00:17:22.926 Total : 15006.11 58.62 0.00 0.00 0.00 0.00 0.00 00:17:22.926 00:17:24.299 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:24.299 Nvme0n1 : 10.00 15036.40 58.74 0.00 0.00 0.00 0.00 0.00 00:17:24.299 =================================================================================================================== 00:17:24.299 Total : 15036.40 58.74 0.00 0.00 0.00 0.00 0.00 00:17:24.299 00:17:24.299 00:17:24.299 Latency(us) 00:17:24.299 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:24.299 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:24.299 Nvme0n1 : 10.01 15037.09 58.74 0.00 0.00 8505.85 5631.24 16117.00 00:17:24.299 =================================================================================================================== 00:17:24.299 Total : 15037.09 58.74 0.00 0.00 8505.85 5631.24 16117.00 00:17:24.299 0 00:17:24.299 03:49:42 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2373774 00:17:24.299 03:49:42 -- common/autotest_common.sh@926 -- # '[' -z 2373774 ']' 00:17:24.299 03:49:42 -- common/autotest_common.sh@930 -- # kill -0 2373774 00:17:24.299 03:49:42 -- common/autotest_common.sh@931 -- # uname 00:17:24.299 03:49:42 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:24.299 03:49:42 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2373774 00:17:24.299 03:49:42 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:17:24.299 03:49:42 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:17:24.299 03:49:42 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2373774' 00:17:24.299 killing process with pid 2373774 00:17:24.299 03:49:42 -- common/autotest_common.sh@945 -- # kill 2373774 00:17:24.299 Received shutdown signal, test time was about 10.000000 seconds 00:17:24.299 00:17:24.299 Latency(us) 00:17:24.299 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:24.299 =================================================================================================================== 00:17:24.299 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:24.299 03:49:42 -- common/autotest_common.sh@950 -- # wait 2373774 00:17:24.299 03:49:43 -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:17:24.556 03:49:43 -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 43bcc94f-1fcb-43dd-bb27-0714e9eabf29 00:17:24.556 03:49:43 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:17:24.813 03:49:43 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:17:24.813 03:49:43 -- target/nvmf_lvs_grow.sh@71 -- # [[ '' == \d\i\r\t\y ]] 00:17:24.813 03:49:43 -- target/nvmf_lvs_grow.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:17:25.071 [2024-07-14 03:49:43.857623] vbdev_lvol.c: 
150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:17:25.071 03:49:43 -- target/nvmf_lvs_grow.sh@84 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 43bcc94f-1fcb-43dd-bb27-0714e9eabf29 00:17:25.071 03:49:43 -- common/autotest_common.sh@640 -- # local es=0 00:17:25.071 03:49:43 -- common/autotest_common.sh@642 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 43bcc94f-1fcb-43dd-bb27-0714e9eabf29 00:17:25.071 03:49:43 -- common/autotest_common.sh@628 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:25.071 03:49:43 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:25.071 03:49:43 -- common/autotest_common.sh@632 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:25.071 03:49:43 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:25.071 03:49:43 -- common/autotest_common.sh@634 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:25.071 03:49:43 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:25.071 03:49:43 -- common/autotest_common.sh@634 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:25.071 03:49:43 -- common/autotest_common.sh@634 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:17:25.071 03:49:43 -- common/autotest_common.sh@643 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 43bcc94f-1fcb-43dd-bb27-0714e9eabf29 00:17:25.329 request: 00:17:25.329 { 00:17:25.329 "uuid": "43bcc94f-1fcb-43dd-bb27-0714e9eabf29", 00:17:25.329 "method": "bdev_lvol_get_lvstores", 00:17:25.329 "req_id": 1 00:17:25.329 } 00:17:25.329 Got JSON-RPC error response 00:17:25.329 response: 00:17:25.329 { 00:17:25.329 "code": -19, 00:17:25.329 "message": "No such device" 00:17:25.329 } 00:17:25.329 03:49:44 -- common/autotest_common.sh@643 -- # es=1 00:17:25.329 03:49:44 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:17:25.329 03:49:44 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:17:25.329 03:49:44 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:17:25.329 03:49:44 -- target/nvmf_lvs_grow.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:25.586 aio_bdev 00:17:25.587 03:49:44 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev e22bf10c-531a-4d85-bef9-a4af69b1f9b4 00:17:25.587 03:49:44 -- common/autotest_common.sh@887 -- # local bdev_name=e22bf10c-531a-4d85-bef9-a4af69b1f9b4 00:17:25.587 03:49:44 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:17:25.587 03:49:44 -- common/autotest_common.sh@889 -- # local i 00:17:25.587 03:49:44 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:17:25.587 03:49:44 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:17:25.587 03:49:44 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:17:25.844 03:49:44 -- common/autotest_common.sh@894 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b e22bf10c-531a-4d85-bef9-a4af69b1f9b4 -t 2000 00:17:26.102 [ 00:17:26.102 { 00:17:26.102 "name": "e22bf10c-531a-4d85-bef9-a4af69b1f9b4", 00:17:26.102 "aliases": [ 00:17:26.102 "lvs/lvol" 
00:17:26.102 ], 00:17:26.102 "product_name": "Logical Volume", 00:17:26.102 "block_size": 4096, 00:17:26.102 "num_blocks": 38912, 00:17:26.102 "uuid": "e22bf10c-531a-4d85-bef9-a4af69b1f9b4", 00:17:26.102 "assigned_rate_limits": { 00:17:26.102 "rw_ios_per_sec": 0, 00:17:26.102 "rw_mbytes_per_sec": 0, 00:17:26.102 "r_mbytes_per_sec": 0, 00:17:26.102 "w_mbytes_per_sec": 0 00:17:26.102 }, 00:17:26.102 "claimed": false, 00:17:26.102 "zoned": false, 00:17:26.102 "supported_io_types": { 00:17:26.102 "read": true, 00:17:26.102 "write": true, 00:17:26.102 "unmap": true, 00:17:26.102 "write_zeroes": true, 00:17:26.102 "flush": false, 00:17:26.102 "reset": true, 00:17:26.102 "compare": false, 00:17:26.102 "compare_and_write": false, 00:17:26.102 "abort": false, 00:17:26.102 "nvme_admin": false, 00:17:26.102 "nvme_io": false 00:17:26.102 }, 00:17:26.102 "driver_specific": { 00:17:26.102 "lvol": { 00:17:26.102 "lvol_store_uuid": "43bcc94f-1fcb-43dd-bb27-0714e9eabf29", 00:17:26.102 "base_bdev": "aio_bdev", 00:17:26.102 "thin_provision": false, 00:17:26.102 "snapshot": false, 00:17:26.102 "clone": false, 00:17:26.102 "esnap_clone": false 00:17:26.102 } 00:17:26.102 } 00:17:26.102 } 00:17:26.102 ] 00:17:26.102 03:49:44 -- common/autotest_common.sh@895 -- # return 0 00:17:26.102 03:49:44 -- target/nvmf_lvs_grow.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 43bcc94f-1fcb-43dd-bb27-0714e9eabf29 00:17:26.102 03:49:44 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:17:26.360 03:49:45 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:17:26.360 03:49:45 -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 43bcc94f-1fcb-43dd-bb27-0714e9eabf29 00:17:26.360 03:49:45 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:17:26.618 03:49:45 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:17:26.618 03:49:45 -- target/nvmf_lvs_grow.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete e22bf10c-531a-4d85-bef9-a4af69b1f9b4 00:17:26.876 03:49:45 -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 43bcc94f-1fcb-43dd-bb27-0714e9eabf29 00:17:27.134 03:49:45 -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:17:27.391 03:49:46 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:27.391 00:17:27.391 real 0m17.443s 00:17:27.391 user 0m17.153s 00:17:27.391 sys 0m1.785s 00:17:27.391 03:49:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:27.391 03:49:46 -- common/autotest_common.sh@10 -- # set +x 00:17:27.391 ************************************ 00:17:27.391 END TEST lvs_grow_clean 00:17:27.391 ************************************ 00:17:27.391 03:49:46 -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_dirty lvs_grow dirty 00:17:27.391 03:49:46 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:17:27.391 03:49:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:27.391 03:49:46 -- common/autotest_common.sh@10 -- # set +x 00:17:27.391 ************************************ 00:17:27.391 START TEST lvs_grow_dirty 00:17:27.391 ************************************ 00:17:27.391 03:49:46 -- common/autotest_common.sh@1104 -- # lvs_grow dirty 00:17:27.391 
03:49:46 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:17:27.391 03:49:46 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:17:27.391 03:49:46 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:17:27.391 03:49:46 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:17:27.391 03:49:46 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:17:27.391 03:49:46 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:17:27.391 03:49:46 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:27.391 03:49:46 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:27.391 03:49:46 -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:27.649 03:49:46 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:17:27.649 03:49:46 -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:17:27.906 03:49:46 -- target/nvmf_lvs_grow.sh@28 -- # lvs=cb15c05c-55bf-4800-87b5-0776609ecd73 00:17:27.906 03:49:46 -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cb15c05c-55bf-4800-87b5-0776609ecd73 00:17:27.906 03:49:46 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:17:28.164 03:49:46 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:17:28.164 03:49:46 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:17:28.164 03:49:46 -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u cb15c05c-55bf-4800-87b5-0776609ecd73 lvol 150 00:17:28.164 03:49:47 -- target/nvmf_lvs_grow.sh@33 -- # lvol=397f2a49-8984-4667-abaf-afc47dd3b5ad 00:17:28.164 03:49:47 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:28.164 03:49:47 -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:17:28.422 [2024-07-14 03:49:47.318951] bdev_aio.c: 959:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:17:28.422 [2024-07-14 03:49:47.319046] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:17:28.422 true 00:17:28.422 03:49:47 -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cb15c05c-55bf-4800-87b5-0776609ecd73 00:17:28.422 03:49:47 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:17:28.680 03:49:47 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:17:28.680 03:49:47 -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:17:28.938 03:49:47 -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 
397f2a49-8984-4667-abaf-afc47dd3b5ad 00:17:29.195 03:49:48 -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:17:29.453 03:49:48 -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:29.712 03:49:48 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2376005 00:17:29.712 03:49:48 -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:17:29.712 03:49:48 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:29.712 03:49:48 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2376005 /var/tmp/bdevperf.sock 00:17:29.712 03:49:48 -- common/autotest_common.sh@819 -- # '[' -z 2376005 ']' 00:17:29.712 03:49:48 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:29.712 03:49:48 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:29.712 03:49:48 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:29.712 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:29.712 03:49:48 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:29.712 03:49:48 -- common/autotest_common.sh@10 -- # set +x 00:17:29.712 [2024-07-14 03:49:48.530941] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:17:29.712 [2024-07-14 03:49:48.531013] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2376005 ] 00:17:29.712 EAL: No free 2048 kB hugepages reported on node 1 00:17:29.712 [2024-07-14 03:49:48.593065] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:29.970 [2024-07-14 03:49:48.682682] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:30.902 03:49:49 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:30.902 03:49:49 -- common/autotest_common.sh@852 -- # return 0 00:17:30.902 03:49:49 -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:17:31.159 Nvme0n1 00:17:31.160 03:49:49 -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:17:31.417 [ 00:17:31.417 { 00:17:31.417 "name": "Nvme0n1", 00:17:31.417 "aliases": [ 00:17:31.417 "397f2a49-8984-4667-abaf-afc47dd3b5ad" 00:17:31.417 ], 00:17:31.417 "product_name": "NVMe disk", 00:17:31.417 "block_size": 4096, 00:17:31.417 "num_blocks": 38912, 00:17:31.417 "uuid": "397f2a49-8984-4667-abaf-afc47dd3b5ad", 00:17:31.417 "assigned_rate_limits": { 00:17:31.417 "rw_ios_per_sec": 0, 00:17:31.417 "rw_mbytes_per_sec": 0, 00:17:31.417 "r_mbytes_per_sec": 0, 00:17:31.417 "w_mbytes_per_sec": 0 00:17:31.417 }, 00:17:31.417 "claimed": false, 00:17:31.417 "zoned": false, 00:17:31.417 "supported_io_types": { 00:17:31.417 "read": true, 00:17:31.417 "write": true, 
00:17:31.417 "unmap": true, 00:17:31.417 "write_zeroes": true, 00:17:31.417 "flush": true, 00:17:31.417 "reset": true, 00:17:31.417 "compare": true, 00:17:31.417 "compare_and_write": true, 00:17:31.417 "abort": true, 00:17:31.417 "nvme_admin": true, 00:17:31.417 "nvme_io": true 00:17:31.417 }, 00:17:31.417 "driver_specific": { 00:17:31.417 "nvme": [ 00:17:31.417 { 00:17:31.417 "trid": { 00:17:31.417 "trtype": "TCP", 00:17:31.417 "adrfam": "IPv4", 00:17:31.417 "traddr": "10.0.0.2", 00:17:31.417 "trsvcid": "4420", 00:17:31.417 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:17:31.417 }, 00:17:31.417 "ctrlr_data": { 00:17:31.417 "cntlid": 1, 00:17:31.417 "vendor_id": "0x8086", 00:17:31.417 "model_number": "SPDK bdev Controller", 00:17:31.417 "serial_number": "SPDK0", 00:17:31.417 "firmware_revision": "24.01.1", 00:17:31.417 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:31.417 "oacs": { 00:17:31.417 "security": 0, 00:17:31.417 "format": 0, 00:17:31.417 "firmware": 0, 00:17:31.417 "ns_manage": 0 00:17:31.417 }, 00:17:31.417 "multi_ctrlr": true, 00:17:31.417 "ana_reporting": false 00:17:31.417 }, 00:17:31.417 "vs": { 00:17:31.417 "nvme_version": "1.3" 00:17:31.417 }, 00:17:31.417 "ns_data": { 00:17:31.417 "id": 1, 00:17:31.417 "can_share": true 00:17:31.417 } 00:17:31.417 } 00:17:31.417 ], 00:17:31.417 "mp_policy": "active_passive" 00:17:31.417 } 00:17:31.417 } 00:17:31.417 ] 00:17:31.417 03:49:50 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2376154 00:17:31.417 03:49:50 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:17:31.417 03:49:50 -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:31.417 Running I/O for 10 seconds... 00:17:32.820 Latency(us) 00:17:32.820 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:32.820 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:32.820 Nvme0n1 : 1.00 13515.00 52.79 0.00 0.00 0.00 0.00 0.00 00:17:32.820 =================================================================================================================== 00:17:32.820 Total : 13515.00 52.79 0.00 0.00 0.00 0.00 0.00 00:17:32.820 00:17:33.387 03:49:52 -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u cb15c05c-55bf-4800-87b5-0776609ecd73 00:17:33.645 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:33.645 Nvme0n1 : 2.00 13645.50 53.30 0.00 0.00 0.00 0.00 0.00 00:17:33.645 =================================================================================================================== 00:17:33.645 Total : 13645.50 53.30 0.00 0.00 0.00 0.00 0.00 00:17:33.645 00:17:33.645 true 00:17:33.645 03:49:52 -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cb15c05c-55bf-4800-87b5-0776609ecd73 00:17:33.645 03:49:52 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:17:33.902 03:49:52 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:17:33.902 03:49:52 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:17:33.902 03:49:52 -- target/nvmf_lvs_grow.sh@65 -- # wait 2376154 00:17:34.471 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:34.471 Nvme0n1 : 3.00 13777.00 53.82 0.00 0.00 0.00 0.00 0.00 00:17:34.471 
=================================================================================================================== 00:17:34.471 Total : 13777.00 53.82 0.00 0.00 0.00 0.00 0.00 00:17:34.471 00:17:35.407 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:35.407 Nvme0n1 : 4.00 13822.75 54.00 0.00 0.00 0.00 0.00 0.00 00:17:35.407 =================================================================================================================== 00:17:35.407 Total : 13822.75 54.00 0.00 0.00 0.00 0.00 0.00 00:17:35.407 00:17:36.784 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:36.784 Nvme0n1 : 5.00 13858.20 54.13 0.00 0.00 0.00 0.00 0.00 00:17:36.784 =================================================================================================================== 00:17:36.784 Total : 13858.20 54.13 0.00 0.00 0.00 0.00 0.00 00:17:36.784 00:17:37.720 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:37.720 Nvme0n1 : 6.00 13893.83 54.27 0.00 0.00 0.00 0.00 0.00 00:17:37.721 =================================================================================================================== 00:17:37.721 Total : 13893.83 54.27 0.00 0.00 0.00 0.00 0.00 00:17:37.721 00:17:38.655 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:38.655 Nvme0n1 : 7.00 13921.57 54.38 0.00 0.00 0.00 0.00 0.00 00:17:38.655 =================================================================================================================== 00:17:38.655 Total : 13921.57 54.38 0.00 0.00 0.00 0.00 0.00 00:17:38.655 00:17:39.594 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:39.594 Nvme0n1 : 8.00 13946.38 54.48 0.00 0.00 0.00 0.00 0.00 00:17:39.594 =================================================================================================================== 00:17:39.594 Total : 13946.38 54.48 0.00 0.00 0.00 0.00 0.00 00:17:39.594 00:17:40.528 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:40.528 Nvme0n1 : 9.00 13968.33 54.56 0.00 0.00 0.00 0.00 0.00 00:17:40.528 =================================================================================================================== 00:17:40.528 Total : 13968.33 54.56 0.00 0.00 0.00 0.00 0.00 00:17:40.528 00:17:41.462 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:41.462 Nvme0n1 : 10.00 13981.90 54.62 0.00 0.00 0.00 0.00 0.00 00:17:41.462 =================================================================================================================== 00:17:41.462 Total : 13981.90 54.62 0.00 0.00 0.00 0.00 0.00 00:17:41.462 00:17:41.462 00:17:41.462 Latency(us) 00:17:41.462 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:41.462 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:41.462 Nvme0n1 : 10.01 13981.45 54.62 0.00 0.00 9146.03 2936.98 12524.66 00:17:41.462 =================================================================================================================== 00:17:41.462 Total : 13981.45 54.62 0.00 0.00 9146.03 2936.98 12524.66 00:17:41.462 0 00:17:41.462 03:50:00 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2376005 00:17:41.462 03:50:00 -- common/autotest_common.sh@926 -- # '[' -z 2376005 ']' 00:17:41.462 03:50:00 -- common/autotest_common.sh@930 -- # kill -0 2376005 00:17:41.462 03:50:00 -- common/autotest_common.sh@931 -- # uname 00:17:41.462 03:50:00 -- 
common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:41.462 03:50:00 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2376005 00:17:41.462 03:50:00 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:17:41.462 03:50:00 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:17:41.462 03:50:00 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2376005' 00:17:41.462 killing process with pid 2376005 00:17:41.462 03:50:00 -- common/autotest_common.sh@945 -- # kill 2376005 00:17:41.462 Received shutdown signal, test time was about 10.000000 seconds 00:17:41.462 00:17:41.462 Latency(us) 00:17:41.462 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:41.462 =================================================================================================================== 00:17:41.462 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:41.462 03:50:00 -- common/autotest_common.sh@950 -- # wait 2376005 00:17:41.721 03:50:00 -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:17:41.978 03:50:00 -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cb15c05c-55bf-4800-87b5-0776609ecd73 00:17:41.978 03:50:00 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:17:42.237 03:50:01 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:17:42.237 03:50:01 -- target/nvmf_lvs_grow.sh@71 -- # [[ dirty == \d\i\r\t\y ]] 00:17:42.237 03:50:01 -- target/nvmf_lvs_grow.sh@73 -- # kill -9 2373321 00:17:42.237 03:50:01 -- target/nvmf_lvs_grow.sh@74 -- # wait 2373321 00:17:42.237 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 74: 2373321 Killed "${NVMF_APP[@]}" "$@" 00:17:42.237 03:50:01 -- target/nvmf_lvs_grow.sh@74 -- # true 00:17:42.237 03:50:01 -- target/nvmf_lvs_grow.sh@75 -- # nvmfappstart -m 0x1 00:17:42.237 03:50:01 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:42.237 03:50:01 -- common/autotest_common.sh@712 -- # xtrace_disable 00:17:42.237 03:50:01 -- common/autotest_common.sh@10 -- # set +x 00:17:42.237 03:50:01 -- nvmf/common.sh@469 -- # nvmfpid=2377514 00:17:42.237 03:50:01 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:17:42.237 03:50:01 -- nvmf/common.sh@470 -- # waitforlisten 2377514 00:17:42.237 03:50:01 -- common/autotest_common.sh@819 -- # '[' -z 2377514 ']' 00:17:42.237 03:50:01 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:42.237 03:50:01 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:42.237 03:50:01 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:42.237 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:42.237 03:50:01 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:42.237 03:50:01 -- common/autotest_common.sh@10 -- # set +x 00:17:42.495 [2024-07-14 03:50:01.196872] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:17:42.495 [2024-07-14 03:50:01.196968] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:42.495 EAL: No free 2048 kB hugepages reported on node 1 00:17:42.495 [2024-07-14 03:50:01.265992] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:42.495 [2024-07-14 03:50:01.351836] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:42.495 [2024-07-14 03:50:01.352029] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:42.495 [2024-07-14 03:50:01.352047] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:42.495 [2024-07-14 03:50:01.352060] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:42.495 [2024-07-14 03:50:01.352088] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:43.428 03:50:02 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:43.428 03:50:02 -- common/autotest_common.sh@852 -- # return 0 00:17:43.428 03:50:02 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:43.428 03:50:02 -- common/autotest_common.sh@718 -- # xtrace_disable 00:17:43.428 03:50:02 -- common/autotest_common.sh@10 -- # set +x 00:17:43.428 03:50:02 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:43.428 03:50:02 -- target/nvmf_lvs_grow.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:43.687 [2024-07-14 03:50:02.416273] blobstore.c:4642:bs_recover: *NOTICE*: Performing recovery on blobstore 00:17:43.687 [2024-07-14 03:50:02.416406] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:17:43.687 [2024-07-14 03:50:02.416462] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:17:43.687 03:50:02 -- target/nvmf_lvs_grow.sh@76 -- # aio_bdev=aio_bdev 00:17:43.687 03:50:02 -- target/nvmf_lvs_grow.sh@77 -- # waitforbdev 397f2a49-8984-4667-abaf-afc47dd3b5ad 00:17:43.687 03:50:02 -- common/autotest_common.sh@887 -- # local bdev_name=397f2a49-8984-4667-abaf-afc47dd3b5ad 00:17:43.687 03:50:02 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:17:43.687 03:50:02 -- common/autotest_common.sh@889 -- # local i 00:17:43.687 03:50:02 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:17:43.687 03:50:02 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:17:43.687 03:50:02 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:17:43.946 03:50:02 -- common/autotest_common.sh@894 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 397f2a49-8984-4667-abaf-afc47dd3b5ad -t 2000 00:17:44.204 [ 00:17:44.204 { 00:17:44.204 "name": "397f2a49-8984-4667-abaf-afc47dd3b5ad", 00:17:44.204 "aliases": [ 00:17:44.204 "lvs/lvol" 00:17:44.204 ], 00:17:44.204 "product_name": "Logical Volume", 00:17:44.204 "block_size": 4096, 00:17:44.204 "num_blocks": 38912, 00:17:44.204 "uuid": "397f2a49-8984-4667-abaf-afc47dd3b5ad", 00:17:44.204 "assigned_rate_limits": { 00:17:44.204 "rw_ios_per_sec": 0, 00:17:44.204 "rw_mbytes_per_sec": 0, 00:17:44.204 "r_mbytes_per_sec": 0, 00:17:44.204 
"w_mbytes_per_sec": 0 00:17:44.204 }, 00:17:44.204 "claimed": false, 00:17:44.204 "zoned": false, 00:17:44.204 "supported_io_types": { 00:17:44.204 "read": true, 00:17:44.204 "write": true, 00:17:44.204 "unmap": true, 00:17:44.204 "write_zeroes": true, 00:17:44.204 "flush": false, 00:17:44.204 "reset": true, 00:17:44.204 "compare": false, 00:17:44.204 "compare_and_write": false, 00:17:44.204 "abort": false, 00:17:44.204 "nvme_admin": false, 00:17:44.204 "nvme_io": false 00:17:44.204 }, 00:17:44.204 "driver_specific": { 00:17:44.204 "lvol": { 00:17:44.204 "lvol_store_uuid": "cb15c05c-55bf-4800-87b5-0776609ecd73", 00:17:44.204 "base_bdev": "aio_bdev", 00:17:44.204 "thin_provision": false, 00:17:44.204 "snapshot": false, 00:17:44.204 "clone": false, 00:17:44.204 "esnap_clone": false 00:17:44.204 } 00:17:44.204 } 00:17:44.204 } 00:17:44.204 ] 00:17:44.204 03:50:02 -- common/autotest_common.sh@895 -- # return 0 00:17:44.204 03:50:02 -- target/nvmf_lvs_grow.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cb15c05c-55bf-4800-87b5-0776609ecd73 00:17:44.204 03:50:02 -- target/nvmf_lvs_grow.sh@78 -- # jq -r '.[0].free_clusters' 00:17:44.470 03:50:03 -- target/nvmf_lvs_grow.sh@78 -- # (( free_clusters == 61 )) 00:17:44.470 03:50:03 -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cb15c05c-55bf-4800-87b5-0776609ecd73 00:17:44.470 03:50:03 -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].total_data_clusters' 00:17:44.731 03:50:03 -- target/nvmf_lvs_grow.sh@79 -- # (( data_clusters == 99 )) 00:17:44.731 03:50:03 -- target/nvmf_lvs_grow.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:17:44.731 [2024-07-14 03:50:03.653311] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:17:44.991 03:50:03 -- target/nvmf_lvs_grow.sh@84 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cb15c05c-55bf-4800-87b5-0776609ecd73 00:17:44.991 03:50:03 -- common/autotest_common.sh@640 -- # local es=0 00:17:44.991 03:50:03 -- common/autotest_common.sh@642 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cb15c05c-55bf-4800-87b5-0776609ecd73 00:17:44.991 03:50:03 -- common/autotest_common.sh@628 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:44.991 03:50:03 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:44.991 03:50:03 -- common/autotest_common.sh@632 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:44.991 03:50:03 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:44.991 03:50:03 -- common/autotest_common.sh@634 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:44.991 03:50:03 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:44.991 03:50:03 -- common/autotest_common.sh@634 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:44.991 03:50:03 -- common/autotest_common.sh@634 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:17:44.991 03:50:03 -- common/autotest_common.sh@643 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cb15c05c-55bf-4800-87b5-0776609ecd73 00:17:44.991 request: 00:17:44.991 { 00:17:44.991 
"uuid": "cb15c05c-55bf-4800-87b5-0776609ecd73", 00:17:44.991 "method": "bdev_lvol_get_lvstores", 00:17:44.991 "req_id": 1 00:17:44.991 } 00:17:44.991 Got JSON-RPC error response 00:17:44.991 response: 00:17:44.991 { 00:17:44.991 "code": -19, 00:17:44.991 "message": "No such device" 00:17:44.991 } 00:17:45.249 03:50:03 -- common/autotest_common.sh@643 -- # es=1 00:17:45.249 03:50:03 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:17:45.249 03:50:03 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:17:45.249 03:50:03 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:17:45.249 03:50:03 -- target/nvmf_lvs_grow.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:45.249 aio_bdev 00:17:45.249 03:50:04 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev 397f2a49-8984-4667-abaf-afc47dd3b5ad 00:17:45.249 03:50:04 -- common/autotest_common.sh@887 -- # local bdev_name=397f2a49-8984-4667-abaf-afc47dd3b5ad 00:17:45.249 03:50:04 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:17:45.249 03:50:04 -- common/autotest_common.sh@889 -- # local i 00:17:45.249 03:50:04 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:17:45.249 03:50:04 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:17:45.249 03:50:04 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:17:45.507 03:50:04 -- common/autotest_common.sh@894 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 397f2a49-8984-4667-abaf-afc47dd3b5ad -t 2000 00:17:45.802 [ 00:17:45.802 { 00:17:45.802 "name": "397f2a49-8984-4667-abaf-afc47dd3b5ad", 00:17:45.802 "aliases": [ 00:17:45.802 "lvs/lvol" 00:17:45.802 ], 00:17:45.802 "product_name": "Logical Volume", 00:17:45.802 "block_size": 4096, 00:17:45.802 "num_blocks": 38912, 00:17:45.803 "uuid": "397f2a49-8984-4667-abaf-afc47dd3b5ad", 00:17:45.803 "assigned_rate_limits": { 00:17:45.803 "rw_ios_per_sec": 0, 00:17:45.803 "rw_mbytes_per_sec": 0, 00:17:45.803 "r_mbytes_per_sec": 0, 00:17:45.803 "w_mbytes_per_sec": 0 00:17:45.803 }, 00:17:45.803 "claimed": false, 00:17:45.803 "zoned": false, 00:17:45.803 "supported_io_types": { 00:17:45.803 "read": true, 00:17:45.803 "write": true, 00:17:45.803 "unmap": true, 00:17:45.803 "write_zeroes": true, 00:17:45.803 "flush": false, 00:17:45.803 "reset": true, 00:17:45.803 "compare": false, 00:17:45.803 "compare_and_write": false, 00:17:45.803 "abort": false, 00:17:45.803 "nvme_admin": false, 00:17:45.803 "nvme_io": false 00:17:45.803 }, 00:17:45.803 "driver_specific": { 00:17:45.803 "lvol": { 00:17:45.803 "lvol_store_uuid": "cb15c05c-55bf-4800-87b5-0776609ecd73", 00:17:45.803 "base_bdev": "aio_bdev", 00:17:45.803 "thin_provision": false, 00:17:45.803 "snapshot": false, 00:17:45.803 "clone": false, 00:17:45.803 "esnap_clone": false 00:17:45.803 } 00:17:45.803 } 00:17:45.803 } 00:17:45.803 ] 00:17:45.803 03:50:04 -- common/autotest_common.sh@895 -- # return 0 00:17:45.803 03:50:04 -- target/nvmf_lvs_grow.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cb15c05c-55bf-4800-87b5-0776609ecd73 00:17:45.803 03:50:04 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:17:46.083 03:50:04 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:17:46.083 03:50:04 -- target/nvmf_lvs_grow.sh@88 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cb15c05c-55bf-4800-87b5-0776609ecd73 00:17:46.083 03:50:04 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:17:46.342 03:50:05 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:17:46.342 03:50:05 -- target/nvmf_lvs_grow.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 397f2a49-8984-4667-abaf-afc47dd3b5ad 00:17:46.602 03:50:05 -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u cb15c05c-55bf-4800-87b5-0776609ecd73 00:17:46.860 03:50:05 -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:17:47.118 03:50:05 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:47.118 00:17:47.119 real 0m19.765s 00:17:47.119 user 0m48.369s 00:17:47.119 sys 0m5.369s 00:17:47.119 03:50:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:47.119 03:50:05 -- common/autotest_common.sh@10 -- # set +x 00:17:47.119 ************************************ 00:17:47.119 END TEST lvs_grow_dirty 00:17:47.119 ************************************ 00:17:47.119 03:50:05 -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:17:47.119 03:50:05 -- common/autotest_common.sh@796 -- # type=--id 00:17:47.119 03:50:05 -- common/autotest_common.sh@797 -- # id=0 00:17:47.119 03:50:05 -- common/autotest_common.sh@798 -- # '[' --id = --pid ']' 00:17:47.119 03:50:05 -- common/autotest_common.sh@802 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:17:47.119 03:50:05 -- common/autotest_common.sh@802 -- # shm_files=nvmf_trace.0 00:17:47.119 03:50:05 -- common/autotest_common.sh@804 -- # [[ -z nvmf_trace.0 ]] 00:17:47.119 03:50:05 -- common/autotest_common.sh@808 -- # for n in $shm_files 00:17:47.119 03:50:05 -- common/autotest_common.sh@809 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:17:47.119 nvmf_trace.0 00:17:47.119 03:50:05 -- common/autotest_common.sh@811 -- # return 0 00:17:47.119 03:50:05 -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:17:47.119 03:50:05 -- nvmf/common.sh@476 -- # nvmfcleanup 00:17:47.119 03:50:05 -- nvmf/common.sh@116 -- # sync 00:17:47.119 03:50:05 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:17:47.119 03:50:05 -- nvmf/common.sh@119 -- # set +e 00:17:47.119 03:50:05 -- nvmf/common.sh@120 -- # for i in {1..20} 00:17:47.119 03:50:05 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:17:47.119 rmmod nvme_tcp 00:17:47.119 rmmod nvme_fabrics 00:17:47.119 rmmod nvme_keyring 00:17:47.119 03:50:05 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:17:47.119 03:50:05 -- nvmf/common.sh@123 -- # set -e 00:17:47.119 03:50:05 -- nvmf/common.sh@124 -- # return 0 00:17:47.119 03:50:05 -- nvmf/common.sh@477 -- # '[' -n 2377514 ']' 00:17:47.119 03:50:05 -- nvmf/common.sh@478 -- # killprocess 2377514 00:17:47.119 03:50:05 -- common/autotest_common.sh@926 -- # '[' -z 2377514 ']' 00:17:47.119 03:50:05 -- common/autotest_common.sh@930 -- # kill -0 2377514 00:17:47.119 03:50:05 -- common/autotest_common.sh@931 -- # uname 00:17:47.119 03:50:05 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:47.119 03:50:05 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2377514 00:17:47.119 03:50:06 -- common/autotest_common.sh@932 
-- # process_name=reactor_0 00:17:47.119 03:50:06 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:17:47.119 03:50:06 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2377514' 00:17:47.119 killing process with pid 2377514 00:17:47.119 03:50:06 -- common/autotest_common.sh@945 -- # kill 2377514 00:17:47.119 03:50:06 -- common/autotest_common.sh@950 -- # wait 2377514 00:17:47.377 03:50:06 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:17:47.377 03:50:06 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:17:47.377 03:50:06 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:17:47.377 03:50:06 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:47.377 03:50:06 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:17:47.377 03:50:06 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:47.377 03:50:06 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:47.377 03:50:06 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:49.912 03:50:08 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:17:49.912 00:17:49.912 real 0m43.110s 00:17:49.912 user 1m11.829s 00:17:49.913 sys 0m9.031s 00:17:49.913 03:50:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:49.913 03:50:08 -- common/autotest_common.sh@10 -- # set +x 00:17:49.913 ************************************ 00:17:49.913 END TEST nvmf_lvs_grow 00:17:49.913 ************************************ 00:17:49.913 03:50:08 -- nvmf/nvmf.sh@49 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:17:49.913 03:50:08 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:17:49.913 03:50:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:49.913 03:50:08 -- common/autotest_common.sh@10 -- # set +x 00:17:49.913 ************************************ 00:17:49.913 START TEST nvmf_bdev_io_wait 00:17:49.913 ************************************ 00:17:49.913 03:50:08 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:17:49.913 * Looking for test storage... 
00:17:49.913 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:49.913 03:50:08 -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:49.913 03:50:08 -- nvmf/common.sh@7 -- # uname -s 00:17:49.913 03:50:08 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:49.913 03:50:08 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:49.913 03:50:08 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:49.913 03:50:08 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:49.913 03:50:08 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:49.913 03:50:08 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:49.913 03:50:08 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:49.913 03:50:08 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:49.913 03:50:08 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:49.913 03:50:08 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:49.913 03:50:08 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:49.913 03:50:08 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:49.913 03:50:08 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:49.913 03:50:08 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:49.913 03:50:08 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:49.913 03:50:08 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:49.913 03:50:08 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:49.913 03:50:08 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:49.913 03:50:08 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:49.913 03:50:08 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:49.913 03:50:08 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:49.913 03:50:08 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:49.913 03:50:08 -- paths/export.sh@5 -- # export PATH 00:17:49.913 03:50:08 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:49.913 03:50:08 -- nvmf/common.sh@46 -- # : 0 00:17:49.913 03:50:08 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:17:49.913 03:50:08 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:17:49.913 03:50:08 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:17:49.913 03:50:08 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:49.913 03:50:08 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:49.913 03:50:08 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:17:49.913 03:50:08 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:17:49.913 03:50:08 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:17:49.913 03:50:08 -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:49.913 03:50:08 -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:49.913 03:50:08 -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:17:49.913 03:50:08 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:17:49.913 03:50:08 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:49.913 03:50:08 -- nvmf/common.sh@436 -- # prepare_net_devs 00:17:49.913 03:50:08 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:17:49.913 03:50:08 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:17:49.913 03:50:08 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:49.913 03:50:08 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:49.913 03:50:08 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:49.913 03:50:08 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:17:49.913 03:50:08 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:17:49.913 03:50:08 -- nvmf/common.sh@284 -- # xtrace_disable 00:17:49.913 03:50:08 -- common/autotest_common.sh@10 -- # set +x 00:17:51.816 03:50:10 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:17:51.816 03:50:10 -- nvmf/common.sh@290 -- # pci_devs=() 00:17:51.816 03:50:10 -- nvmf/common.sh@290 -- # local -a pci_devs 00:17:51.816 03:50:10 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:17:51.816 03:50:10 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:17:51.816 03:50:10 -- nvmf/common.sh@292 -- # pci_drivers=() 00:17:51.816 03:50:10 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:17:51.816 03:50:10 -- nvmf/common.sh@294 -- # net_devs=() 00:17:51.816 03:50:10 -- nvmf/common.sh@294 -- # local -ga net_devs 00:17:51.816 03:50:10 -- 
nvmf/common.sh@295 -- # e810=() 00:17:51.816 03:50:10 -- nvmf/common.sh@295 -- # local -ga e810 00:17:51.816 03:50:10 -- nvmf/common.sh@296 -- # x722=() 00:17:51.816 03:50:10 -- nvmf/common.sh@296 -- # local -ga x722 00:17:51.817 03:50:10 -- nvmf/common.sh@297 -- # mlx=() 00:17:51.817 03:50:10 -- nvmf/common.sh@297 -- # local -ga mlx 00:17:51.817 03:50:10 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:51.817 03:50:10 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:51.817 03:50:10 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:51.817 03:50:10 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:51.817 03:50:10 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:51.817 03:50:10 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:51.817 03:50:10 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:51.817 03:50:10 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:51.817 03:50:10 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:51.817 03:50:10 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:51.817 03:50:10 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:51.817 03:50:10 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:17:51.817 03:50:10 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:17:51.817 03:50:10 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:17:51.817 03:50:10 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:17:51.817 03:50:10 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:17:51.817 03:50:10 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:17:51.817 03:50:10 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:51.817 03:50:10 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:51.817 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:51.817 03:50:10 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:17:51.817 03:50:10 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:17:51.817 03:50:10 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:51.817 03:50:10 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:51.817 03:50:10 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:17:51.817 03:50:10 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:51.817 03:50:10 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:51.817 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:51.817 03:50:10 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:17:51.817 03:50:10 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:17:51.817 03:50:10 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:51.817 03:50:10 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:51.817 03:50:10 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:17:51.817 03:50:10 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:17:51.817 03:50:10 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:17:51.817 03:50:10 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:17:51.817 03:50:10 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:51.817 03:50:10 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:51.817 03:50:10 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:51.817 03:50:10 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:51.817 03:50:10 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 
00:17:51.817 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:51.817 03:50:10 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:51.817 03:50:10 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:51.817 03:50:10 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:51.817 03:50:10 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:51.817 03:50:10 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:51.817 03:50:10 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:51.817 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:51.817 03:50:10 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:51.817 03:50:10 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:17:51.817 03:50:10 -- nvmf/common.sh@402 -- # is_hw=yes 00:17:51.817 03:50:10 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:17:51.817 03:50:10 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:17:51.817 03:50:10 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:17:51.817 03:50:10 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:51.817 03:50:10 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:51.817 03:50:10 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:51.817 03:50:10 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:17:51.817 03:50:10 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:51.817 03:50:10 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:51.817 03:50:10 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:17:51.817 03:50:10 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:51.817 03:50:10 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:51.817 03:50:10 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:17:51.817 03:50:10 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:17:51.817 03:50:10 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:17:51.817 03:50:10 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:51.817 03:50:10 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:51.817 03:50:10 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:51.817 03:50:10 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:17:51.817 03:50:10 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:51.817 03:50:10 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:51.817 03:50:10 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:51.817 03:50:10 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:17:51.817 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:51.817 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.191 ms 00:17:51.817 00:17:51.817 --- 10.0.0.2 ping statistics --- 00:17:51.817 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:51.817 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:17:51.817 03:50:10 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:51.817 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:51.817 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.152 ms 00:17:51.817 00:17:51.817 --- 10.0.0.1 ping statistics --- 00:17:51.817 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:51.817 rtt min/avg/max/mdev = 0.152/0.152/0.152/0.000 ms 00:17:51.817 03:50:10 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:51.817 03:50:10 -- nvmf/common.sh@410 -- # return 0 00:17:51.817 03:50:10 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:17:51.817 03:50:10 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:51.817 03:50:10 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:17:51.817 03:50:10 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:17:51.817 03:50:10 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:51.817 03:50:10 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:17:51.817 03:50:10 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:17:51.817 03:50:10 -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:17:51.817 03:50:10 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:51.817 03:50:10 -- common/autotest_common.sh@712 -- # xtrace_disable 00:17:51.817 03:50:10 -- common/autotest_common.sh@10 -- # set +x 00:17:51.817 03:50:10 -- nvmf/common.sh@469 -- # nvmfpid=2380072 00:17:51.817 03:50:10 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:17:51.817 03:50:10 -- nvmf/common.sh@470 -- # waitforlisten 2380072 00:17:51.817 03:50:10 -- common/autotest_common.sh@819 -- # '[' -z 2380072 ']' 00:17:51.817 03:50:10 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:51.817 03:50:10 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:51.817 03:50:10 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:51.817 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:51.817 03:50:10 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:51.817 03:50:10 -- common/autotest_common.sh@10 -- # set +x 00:17:51.817 [2024-07-14 03:50:10.568438] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:17:51.817 [2024-07-14 03:50:10.568511] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:51.817 EAL: No free 2048 kB hugepages reported on node 1 00:17:51.817 [2024-07-14 03:50:10.634229] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:51.817 [2024-07-14 03:50:10.721426] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:51.817 [2024-07-14 03:50:10.721580] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:51.817 [2024-07-14 03:50:10.721597] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:51.817 [2024-07-14 03:50:10.721610] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:51.817 [2024-07-14 03:50:10.721665] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:51.817 [2024-07-14 03:50:10.721724] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:51.817 [2024-07-14 03:50:10.721789] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:51.817 [2024-07-14 03:50:10.721791] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:52.077 03:50:10 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:52.077 03:50:10 -- common/autotest_common.sh@852 -- # return 0 00:17:52.077 03:50:10 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:52.077 03:50:10 -- common/autotest_common.sh@718 -- # xtrace_disable 00:17:52.077 03:50:10 -- common/autotest_common.sh@10 -- # set +x 00:17:52.077 03:50:10 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:52.077 03:50:10 -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:17:52.077 03:50:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:52.077 03:50:10 -- common/autotest_common.sh@10 -- # set +x 00:17:52.077 03:50:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:52.077 03:50:10 -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:17:52.077 03:50:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:52.077 03:50:10 -- common/autotest_common.sh@10 -- # set +x 00:17:52.077 03:50:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:52.077 03:50:10 -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:52.077 03:50:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:52.077 03:50:10 -- common/autotest_common.sh@10 -- # set +x 00:17:52.077 [2024-07-14 03:50:10.877417] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:52.077 03:50:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:52.077 03:50:10 -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:52.077 03:50:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:52.077 03:50:10 -- common/autotest_common.sh@10 -- # set +x 00:17:52.077 Malloc0 00:17:52.077 03:50:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:52.077 03:50:10 -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:52.077 03:50:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:52.077 03:50:10 -- common/autotest_common.sh@10 -- # set +x 00:17:52.077 03:50:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:52.077 03:50:10 -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:52.077 03:50:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:52.077 03:50:10 -- common/autotest_common.sh@10 -- # set +x 00:17:52.077 03:50:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:52.078 03:50:10 -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:52.078 03:50:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:52.078 03:50:10 -- common/autotest_common.sh@10 -- # set +x 00:17:52.078 [2024-07-14 03:50:10.940501] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:52.078 03:50:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:52.078 03:50:10 -- target/bdev_io_wait.sh@28 -- # WRITE_PID=2380106 00:17:52.078 
03:50:10 -- target/bdev_io_wait.sh@30 -- # READ_PID=2380107 00:17:52.078 03:50:10 -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:17:52.078 03:50:10 -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:17:52.078 03:50:10 -- nvmf/common.sh@520 -- # config=() 00:17:52.078 03:50:10 -- nvmf/common.sh@520 -- # local subsystem config 00:17:52.078 03:50:10 -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=2380110 00:17:52.078 03:50:10 -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:17:52.078 03:50:10 -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:17:52.078 03:50:10 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:17:52.078 03:50:10 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:17:52.078 { 00:17:52.078 "params": { 00:17:52.078 "name": "Nvme$subsystem", 00:17:52.078 "trtype": "$TEST_TRANSPORT", 00:17:52.078 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:52.078 "adrfam": "ipv4", 00:17:52.078 "trsvcid": "$NVMF_PORT", 00:17:52.078 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:52.078 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:52.078 "hdgst": ${hdgst:-false}, 00:17:52.078 "ddgst": ${ddgst:-false} 00:17:52.078 }, 00:17:52.078 "method": "bdev_nvme_attach_controller" 00:17:52.078 } 00:17:52.078 EOF 00:17:52.078 )") 00:17:52.078 03:50:10 -- nvmf/common.sh@520 -- # config=() 00:17:52.078 03:50:10 -- nvmf/common.sh@520 -- # local subsystem config 00:17:52.078 03:50:10 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:17:52.078 03:50:10 -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=2380112 00:17:52.078 03:50:10 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:17:52.078 { 00:17:52.078 "params": { 00:17:52.078 "name": "Nvme$subsystem", 00:17:52.078 "trtype": "$TEST_TRANSPORT", 00:17:52.078 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:52.078 "adrfam": "ipv4", 00:17:52.078 "trsvcid": "$NVMF_PORT", 00:17:52.078 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:52.078 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:52.078 "hdgst": ${hdgst:-false}, 00:17:52.078 "ddgst": ${ddgst:-false} 00:17:52.078 }, 00:17:52.078 "method": "bdev_nvme_attach_controller" 00:17:52.078 } 00:17:52.078 EOF 00:17:52.078 )") 00:17:52.078 03:50:10 -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:17:52.078 03:50:10 -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:17:52.078 03:50:10 -- target/bdev_io_wait.sh@35 -- # sync 00:17:52.078 03:50:10 -- nvmf/common.sh@520 -- # config=() 00:17:52.078 03:50:10 -- nvmf/common.sh@520 -- # local subsystem config 00:17:52.078 03:50:10 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:17:52.078 03:50:10 -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:17:52.078 03:50:10 -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:17:52.078 03:50:10 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:17:52.078 { 00:17:52.078 "params": { 00:17:52.078 "name": "Nvme$subsystem", 00:17:52.078 "trtype": "$TEST_TRANSPORT", 00:17:52.078 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:52.078 "adrfam": "ipv4", 00:17:52.078 "trsvcid": "$NVMF_PORT", 
00:17:52.078 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:52.078 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:52.078 "hdgst": ${hdgst:-false}, 00:17:52.078 "ddgst": ${ddgst:-false} 00:17:52.078 }, 00:17:52.078 "method": "bdev_nvme_attach_controller" 00:17:52.078 } 00:17:52.078 EOF 00:17:52.078 )") 00:17:52.078 03:50:10 -- nvmf/common.sh@542 -- # cat 00:17:52.078 03:50:10 -- nvmf/common.sh@520 -- # config=() 00:17:52.078 03:50:10 -- nvmf/common.sh@520 -- # local subsystem config 00:17:52.078 03:50:10 -- nvmf/common.sh@542 -- # cat 00:17:52.078 03:50:10 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:17:52.078 03:50:10 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:17:52.078 { 00:17:52.078 "params": { 00:17:52.078 "name": "Nvme$subsystem", 00:17:52.078 "trtype": "$TEST_TRANSPORT", 00:17:52.078 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:52.078 "adrfam": "ipv4", 00:17:52.078 "trsvcid": "$NVMF_PORT", 00:17:52.078 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:52.078 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:52.078 "hdgst": ${hdgst:-false}, 00:17:52.078 "ddgst": ${ddgst:-false} 00:17:52.078 }, 00:17:52.078 "method": "bdev_nvme_attach_controller" 00:17:52.078 } 00:17:52.078 EOF 00:17:52.078 )") 00:17:52.078 03:50:10 -- nvmf/common.sh@542 -- # cat 00:17:52.078 03:50:10 -- target/bdev_io_wait.sh@37 -- # wait 2380106 00:17:52.078 03:50:10 -- nvmf/common.sh@542 -- # cat 00:17:52.078 03:50:10 -- nvmf/common.sh@544 -- # jq . 00:17:52.078 03:50:10 -- nvmf/common.sh@544 -- # jq . 00:17:52.078 03:50:10 -- nvmf/common.sh@544 -- # jq . 00:17:52.078 03:50:10 -- nvmf/common.sh@545 -- # IFS=, 00:17:52.078 03:50:10 -- nvmf/common.sh@544 -- # jq . 00:17:52.078 03:50:10 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:17:52.078 "params": { 00:17:52.078 "name": "Nvme1", 00:17:52.078 "trtype": "tcp", 00:17:52.078 "traddr": "10.0.0.2", 00:17:52.078 "adrfam": "ipv4", 00:17:52.078 "trsvcid": "4420", 00:17:52.078 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:52.078 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:52.078 "hdgst": false, 00:17:52.078 "ddgst": false 00:17:52.078 }, 00:17:52.078 "method": "bdev_nvme_attach_controller" 00:17:52.078 }' 00:17:52.078 03:50:10 -- nvmf/common.sh@545 -- # IFS=, 00:17:52.078 03:50:10 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:17:52.078 "params": { 00:17:52.078 "name": "Nvme1", 00:17:52.078 "trtype": "tcp", 00:17:52.078 "traddr": "10.0.0.2", 00:17:52.078 "adrfam": "ipv4", 00:17:52.078 "trsvcid": "4420", 00:17:52.078 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:52.078 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:52.078 "hdgst": false, 00:17:52.078 "ddgst": false 00:17:52.078 }, 00:17:52.078 "method": "bdev_nvme_attach_controller" 00:17:52.078 }' 00:17:52.078 03:50:10 -- nvmf/common.sh@545 -- # IFS=, 00:17:52.078 03:50:10 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:17:52.078 "params": { 00:17:52.078 "name": "Nvme1", 00:17:52.078 "trtype": "tcp", 00:17:52.078 "traddr": "10.0.0.2", 00:17:52.078 "adrfam": "ipv4", 00:17:52.078 "trsvcid": "4420", 00:17:52.078 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:52.078 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:52.078 "hdgst": false, 00:17:52.078 "ddgst": false 00:17:52.078 }, 00:17:52.078 "method": "bdev_nvme_attach_controller" 00:17:52.078 }' 00:17:52.078 03:50:10 -- nvmf/common.sh@545 -- # IFS=, 00:17:52.078 03:50:10 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:17:52.078 "params": { 00:17:52.078 "name": "Nvme1", 00:17:52.078 "trtype": "tcp", 00:17:52.078 "traddr": "10.0.0.2", 
00:17:52.078 "adrfam": "ipv4", 00:17:52.078 "trsvcid": "4420", 00:17:52.078 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:52.078 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:52.078 "hdgst": false, 00:17:52.078 "ddgst": false 00:17:52.078 }, 00:17:52.078 "method": "bdev_nvme_attach_controller" 00:17:52.078 }' 00:17:52.078 [2024-07-14 03:50:10.986248] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:17:52.078 [2024-07-14 03:50:10.986243] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:17:52.078 [2024-07-14 03:50:10.986244] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:17:52.078 [2024-07-14 03:50:10.986248] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:17:52.078 [2024-07-14 03:50:10.986338] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib[2024-07-14 03:50:10.986338] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib[2024-07-14 03:50:10.986339] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib[2024-07-14 03:50:10.986340] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:17:52.078 .cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:17:52.078 .cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:17:52.078 .cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:17:52.336 EAL: No free 2048 kB hugepages reported on node 1 00:17:52.336 EAL: No free 2048 kB hugepages reported on node 1 00:17:52.336 [2024-07-14 03:50:11.170571] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:52.336 EAL: No free 2048 kB hugepages reported on node 1 00:17:52.336 [2024-07-14 03:50:11.243080] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:17:52.336 [2024-07-14 03:50:11.269402] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:52.595 EAL: No free 2048 kB hugepages reported on node 1 00:17:52.595 [2024-07-14 03:50:11.342071] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:17:52.595 [2024-07-14 03:50:11.373184] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:52.595 [2024-07-14 03:50:11.447054] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:52.595 [2024-07-14 03:50:11.450131] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:17:52.595 [2024-07-14 03:50:11.516304] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:17:52.853 Running I/O for 1 seconds... 00:17:52.853 Running I/O for 1 seconds... 00:17:52.853 Running I/O for 1 seconds... 00:17:52.853 Running I/O for 1 seconds... 
00:17:53.787 00:17:53.787 Latency(us) 00:17:53.787 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:53.787 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:17:53.787 Nvme1n1 : 1.02 6648.26 25.97 0.00 0.00 19093.65 6699.24 24369.68 00:17:53.787 =================================================================================================================== 00:17:53.787 Total : 6648.26 25.97 0.00 0.00 19093.65 6699.24 24369.68 00:17:53.787 00:17:53.787 Latency(us) 00:17:53.787 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:53.787 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:17:53.787 Nvme1n1 : 1.00 200163.75 781.89 0.00 0.00 637.02 259.41 831.34 00:17:53.787 =================================================================================================================== 00:17:53.787 Total : 200163.75 781.89 0.00 0.00 637.02 259.41 831.34 00:17:53.787 00:17:53.787 Latency(us) 00:17:53.787 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:53.787 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:17:53.787 Nvme1n1 : 1.01 6750.26 26.37 0.00 0.00 18907.60 5097.24 29127.11 00:17:53.787 =================================================================================================================== 00:17:53.787 Total : 6750.26 26.37 0.00 0.00 18907.60 5097.24 29127.11 00:17:54.046 00:17:54.046 Latency(us) 00:17:54.046 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:54.046 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:17:54.046 Nvme1n1 : 1.00 11690.70 45.67 0.00 0.00 10924.04 4077.80 25826.04 00:17:54.046 =================================================================================================================== 00:17:54.046 Total : 11690.70 45.67 0.00 0.00 10924.04 4077.80 25826.04 00:17:54.046 03:50:12 -- target/bdev_io_wait.sh@38 -- # wait 2380107 00:17:54.305 03:50:13 -- target/bdev_io_wait.sh@39 -- # wait 2380110 00:17:54.305 03:50:13 -- target/bdev_io_wait.sh@40 -- # wait 2380112 00:17:54.305 03:50:13 -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:54.305 03:50:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:54.305 03:50:13 -- common/autotest_common.sh@10 -- # set +x 00:17:54.305 03:50:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:54.305 03:50:13 -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:17:54.305 03:50:13 -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:17:54.305 03:50:13 -- nvmf/common.sh@476 -- # nvmfcleanup 00:17:54.305 03:50:13 -- nvmf/common.sh@116 -- # sync 00:17:54.305 03:50:13 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:17:54.305 03:50:13 -- nvmf/common.sh@119 -- # set +e 00:17:54.305 03:50:13 -- nvmf/common.sh@120 -- # for i in {1..20} 00:17:54.305 03:50:13 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:17:54.305 rmmod nvme_tcp 00:17:54.305 rmmod nvme_fabrics 00:17:54.305 rmmod nvme_keyring 00:17:54.305 03:50:13 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:17:54.305 03:50:13 -- nvmf/common.sh@123 -- # set -e 00:17:54.305 03:50:13 -- nvmf/common.sh@124 -- # return 0 00:17:54.305 03:50:13 -- nvmf/common.sh@477 -- # '[' -n 2380072 ']' 00:17:54.305 03:50:13 -- nvmf/common.sh@478 -- # killprocess 2380072 00:17:54.305 03:50:13 -- common/autotest_common.sh@926 -- # '[' -z 2380072 ']' 00:17:54.305 03:50:13 -- 
common/autotest_common.sh@930 -- # kill -0 2380072 00:17:54.305 03:50:13 -- common/autotest_common.sh@931 -- # uname 00:17:54.305 03:50:13 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:54.305 03:50:13 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2380072 00:17:54.305 03:50:13 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:17:54.305 03:50:13 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:17:54.305 03:50:13 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2380072' 00:17:54.305 killing process with pid 2380072 00:17:54.305 03:50:13 -- common/autotest_common.sh@945 -- # kill 2380072 00:17:54.305 03:50:13 -- common/autotest_common.sh@950 -- # wait 2380072 00:17:54.563 03:50:13 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:17:54.563 03:50:13 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:17:54.563 03:50:13 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:17:54.563 03:50:13 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:54.563 03:50:13 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:17:54.563 03:50:13 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:54.563 03:50:13 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:54.563 03:50:13 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:57.095 03:50:15 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:17:57.095 00:17:57.095 real 0m7.137s 00:17:57.095 user 0m15.806s 00:17:57.095 sys 0m3.397s 00:17:57.095 03:50:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:57.095 03:50:15 -- common/autotest_common.sh@10 -- # set +x 00:17:57.096 ************************************ 00:17:57.096 END TEST nvmf_bdev_io_wait 00:17:57.096 ************************************ 00:17:57.096 03:50:15 -- nvmf/nvmf.sh@50 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:17:57.096 03:50:15 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:17:57.096 03:50:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:57.096 03:50:15 -- common/autotest_common.sh@10 -- # set +x 00:17:57.096 ************************************ 00:17:57.096 START TEST nvmf_queue_depth 00:17:57.096 ************************************ 00:17:57.096 03:50:15 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:17:57.096 * Looking for test storage... 
00:17:57.096 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:57.096 03:50:15 -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:57.096 03:50:15 -- nvmf/common.sh@7 -- # uname -s 00:17:57.096 03:50:15 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:57.096 03:50:15 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:57.096 03:50:15 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:57.096 03:50:15 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:57.096 03:50:15 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:57.096 03:50:15 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:57.096 03:50:15 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:57.096 03:50:15 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:57.096 03:50:15 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:57.096 03:50:15 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:57.096 03:50:15 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:57.096 03:50:15 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:57.096 03:50:15 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:57.096 03:50:15 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:57.096 03:50:15 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:57.096 03:50:15 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:57.096 03:50:15 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:57.096 03:50:15 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:57.096 03:50:15 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:57.096 03:50:15 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:57.096 03:50:15 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:57.096 03:50:15 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:57.096 03:50:15 -- paths/export.sh@5 -- # export PATH 00:17:57.096 03:50:15 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:57.096 03:50:15 -- nvmf/common.sh@46 -- # : 0 00:17:57.096 03:50:15 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:17:57.096 03:50:15 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:17:57.096 03:50:15 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:17:57.096 03:50:15 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:57.096 03:50:15 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:57.096 03:50:15 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:17:57.096 03:50:15 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:17:57.096 03:50:15 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:17:57.096 03:50:15 -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:17:57.096 03:50:15 -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:17:57.096 03:50:15 -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:57.096 03:50:15 -- target/queue_depth.sh@19 -- # nvmftestinit 00:17:57.096 03:50:15 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:17:57.096 03:50:15 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:57.096 03:50:15 -- nvmf/common.sh@436 -- # prepare_net_devs 00:17:57.096 03:50:15 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:17:57.096 03:50:15 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:17:57.096 03:50:15 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:57.096 03:50:15 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:57.096 03:50:15 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:57.096 03:50:15 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:17:57.096 03:50:15 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:17:57.096 03:50:15 -- nvmf/common.sh@284 -- # xtrace_disable 00:17:57.096 03:50:15 -- common/autotest_common.sh@10 -- # set +x 00:17:59.001 03:50:17 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:17:59.001 03:50:17 -- nvmf/common.sh@290 -- # pci_devs=() 00:17:59.001 03:50:17 -- nvmf/common.sh@290 -- # local -a pci_devs 00:17:59.001 03:50:17 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:17:59.001 03:50:17 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:17:59.001 03:50:17 -- nvmf/common.sh@292 -- # pci_drivers=() 00:17:59.001 03:50:17 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:17:59.001 03:50:17 -- nvmf/common.sh@294 -- # net_devs=() 
00:17:59.001 03:50:17 -- nvmf/common.sh@294 -- # local -ga net_devs 00:17:59.001 03:50:17 -- nvmf/common.sh@295 -- # e810=() 00:17:59.001 03:50:17 -- nvmf/common.sh@295 -- # local -ga e810 00:17:59.001 03:50:17 -- nvmf/common.sh@296 -- # x722=() 00:17:59.001 03:50:17 -- nvmf/common.sh@296 -- # local -ga x722 00:17:59.002 03:50:17 -- nvmf/common.sh@297 -- # mlx=() 00:17:59.002 03:50:17 -- nvmf/common.sh@297 -- # local -ga mlx 00:17:59.002 03:50:17 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:59.002 03:50:17 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:59.002 03:50:17 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:59.002 03:50:17 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:59.002 03:50:17 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:59.002 03:50:17 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:59.002 03:50:17 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:59.002 03:50:17 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:59.002 03:50:17 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:59.002 03:50:17 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:59.002 03:50:17 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:59.002 03:50:17 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:17:59.002 03:50:17 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:17:59.002 03:50:17 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:17:59.002 03:50:17 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:17:59.002 03:50:17 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:17:59.002 03:50:17 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:17:59.002 03:50:17 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:59.002 03:50:17 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:59.002 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:59.002 03:50:17 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:17:59.002 03:50:17 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:17:59.002 03:50:17 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:59.002 03:50:17 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:59.002 03:50:17 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:17:59.002 03:50:17 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:59.002 03:50:17 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:59.002 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:59.002 03:50:17 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:17:59.002 03:50:17 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:17:59.002 03:50:17 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:59.002 03:50:17 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:59.002 03:50:17 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:17:59.002 03:50:17 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:17:59.002 03:50:17 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:17:59.002 03:50:17 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:17:59.002 03:50:17 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:59.002 03:50:17 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:59.002 03:50:17 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:59.002 03:50:17 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
00:17:59.002 03:50:17 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:59.002 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:59.002 03:50:17 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:59.002 03:50:17 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:59.002 03:50:17 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:59.002 03:50:17 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:59.002 03:50:17 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:59.002 03:50:17 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:59.002 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:59.002 03:50:17 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:59.002 03:50:17 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:17:59.002 03:50:17 -- nvmf/common.sh@402 -- # is_hw=yes 00:17:59.002 03:50:17 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:17:59.002 03:50:17 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:17:59.002 03:50:17 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:17:59.002 03:50:17 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:59.002 03:50:17 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:59.002 03:50:17 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:59.002 03:50:17 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:17:59.002 03:50:17 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:59.002 03:50:17 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:59.002 03:50:17 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:17:59.002 03:50:17 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:59.002 03:50:17 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:59.002 03:50:17 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:17:59.002 03:50:17 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:17:59.002 03:50:17 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:17:59.002 03:50:17 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:59.002 03:50:17 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:59.002 03:50:17 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:59.002 03:50:17 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:17:59.002 03:50:17 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:59.002 03:50:17 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:59.002 03:50:17 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:59.002 03:50:17 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:17:59.002 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:59.002 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.184 ms 00:17:59.002 00:17:59.002 --- 10.0.0.2 ping statistics --- 00:17:59.002 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:59.002 rtt min/avg/max/mdev = 0.184/0.184/0.184/0.000 ms 00:17:59.002 03:50:17 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:59.002 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:59.002 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.195 ms 00:17:59.002 00:17:59.002 --- 10.0.0.1 ping statistics --- 00:17:59.002 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:59.002 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:17:59.002 03:50:17 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:59.002 03:50:17 -- nvmf/common.sh@410 -- # return 0 00:17:59.002 03:50:17 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:17:59.002 03:50:17 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:59.002 03:50:17 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:17:59.002 03:50:17 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:17:59.002 03:50:17 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:59.002 03:50:17 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:17:59.002 03:50:17 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:17:59.002 03:50:17 -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:17:59.002 03:50:17 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:59.002 03:50:17 -- common/autotest_common.sh@712 -- # xtrace_disable 00:17:59.002 03:50:17 -- common/autotest_common.sh@10 -- # set +x 00:17:59.002 03:50:17 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:59.002 03:50:17 -- nvmf/common.sh@469 -- # nvmfpid=2382349 00:17:59.002 03:50:17 -- nvmf/common.sh@470 -- # waitforlisten 2382349 00:17:59.002 03:50:17 -- common/autotest_common.sh@819 -- # '[' -z 2382349 ']' 00:17:59.002 03:50:17 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:59.002 03:50:17 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:59.002 03:50:17 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:59.002 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:59.002 03:50:17 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:59.002 03:50:17 -- common/autotest_common.sh@10 -- # set +x 00:17:59.002 [2024-07-14 03:50:17.691710] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:17:59.002 [2024-07-14 03:50:17.691791] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:59.002 EAL: No free 2048 kB hugepages reported on node 1 00:17:59.002 [2024-07-14 03:50:17.768173] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:59.002 [2024-07-14 03:50:17.860064] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:59.002 [2024-07-14 03:50:17.860206] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:59.002 [2024-07-14 03:50:17.860224] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:59.002 [2024-07-14 03:50:17.860237] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
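The nvmftestinit/nvmf_tcp_init sequence traced above pins one port of the dual-port Intel NIC (0x8086:0x159b, ice driver) inside a private network namespace as the target side and leaves the other port in the root namespace as the initiator, so NVMe/TCP traffic crosses real hardware on a single host. A condensed sketch of those steps, using only the interface names, addresses, and port already visible in the trace (the actual helpers live in test/nvmf/common.sh):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                        # target-side port moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                              # initiator address in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT     # allow NVMe/TCP (port 4420) in
  ping -c 1 10.0.0.2                                               # reachability check in both directions
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

nvmf_tgt is then launched inside that namespace (ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF -m 0x2), so the target listens on 10.0.0.2 behind the namespaced port while bdevperf connects from the root namespace over cvl_0_1.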
00:17:59.002 [2024-07-14 03:50:17.860263] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:59.934 03:50:18 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:59.934 03:50:18 -- common/autotest_common.sh@852 -- # return 0 00:17:59.934 03:50:18 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:59.934 03:50:18 -- common/autotest_common.sh@718 -- # xtrace_disable 00:17:59.934 03:50:18 -- common/autotest_common.sh@10 -- # set +x 00:17:59.934 03:50:18 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:59.934 03:50:18 -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:59.935 03:50:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:59.935 03:50:18 -- common/autotest_common.sh@10 -- # set +x 00:17:59.935 [2024-07-14 03:50:18.688764] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:59.935 03:50:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:59.935 03:50:18 -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:59.935 03:50:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:59.935 03:50:18 -- common/autotest_common.sh@10 -- # set +x 00:17:59.935 Malloc0 00:17:59.935 03:50:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:59.935 03:50:18 -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:59.935 03:50:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:59.935 03:50:18 -- common/autotest_common.sh@10 -- # set +x 00:17:59.935 03:50:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:59.935 03:50:18 -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:59.935 03:50:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:59.935 03:50:18 -- common/autotest_common.sh@10 -- # set +x 00:17:59.935 03:50:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:59.935 03:50:18 -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:59.935 03:50:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:59.935 03:50:18 -- common/autotest_common.sh@10 -- # set +x 00:17:59.935 [2024-07-14 03:50:18.751969] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:59.935 03:50:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:59.935 03:50:18 -- target/queue_depth.sh@30 -- # bdevperf_pid=2382506 00:17:59.935 03:50:18 -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:17:59.935 03:50:18 -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:59.935 03:50:18 -- target/queue_depth.sh@33 -- # waitforlisten 2382506 /var/tmp/bdevperf.sock 00:17:59.935 03:50:18 -- common/autotest_common.sh@819 -- # '[' -z 2382506 ']' 00:17:59.935 03:50:18 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:59.935 03:50:18 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:59.935 03:50:18 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:17:59.935 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:59.935 03:50:18 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:59.935 03:50:18 -- common/autotest_common.sh@10 -- # set +x 00:17:59.935 [2024-07-14 03:50:18.795385] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:17:59.935 [2024-07-14 03:50:18.795459] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2382506 ] 00:17:59.935 EAL: No free 2048 kB hugepages reported on node 1 00:17:59.935 [2024-07-14 03:50:18.857346] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:00.193 [2024-07-14 03:50:18.948643] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:01.127 03:50:19 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:01.127 03:50:19 -- common/autotest_common.sh@852 -- # return 0 00:18:01.127 03:50:19 -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:18:01.127 03:50:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:01.127 03:50:19 -- common/autotest_common.sh@10 -- # set +x 00:18:01.127 NVMe0n1 00:18:01.127 03:50:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:01.127 03:50:19 -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:01.386 Running I/O for 10 seconds... 00:18:11.410 00:18:11.410 Latency(us) 00:18:11.410 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:11.410 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:18:11.410 Verification LBA range: start 0x0 length 0x4000 00:18:11.410 NVMe0n1 : 10.07 12343.38 48.22 0.00 0.00 82650.93 14563.56 60196.03 00:18:11.410 =================================================================================================================== 00:18:11.410 Total : 12343.38 48.22 0.00 0.00 82650.93 14563.56 60196.03 00:18:11.410 0 00:18:11.410 03:50:30 -- target/queue_depth.sh@39 -- # killprocess 2382506 00:18:11.410 03:50:30 -- common/autotest_common.sh@926 -- # '[' -z 2382506 ']' 00:18:11.410 03:50:30 -- common/autotest_common.sh@930 -- # kill -0 2382506 00:18:11.410 03:50:30 -- common/autotest_common.sh@931 -- # uname 00:18:11.410 03:50:30 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:11.410 03:50:30 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2382506 00:18:11.410 03:50:30 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:18:11.410 03:50:30 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:18:11.410 03:50:30 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2382506' 00:18:11.410 killing process with pid 2382506 00:18:11.410 03:50:30 -- common/autotest_common.sh@945 -- # kill 2382506 00:18:11.410 Received shutdown signal, test time was about 10.000000 seconds 00:18:11.410 00:18:11.410 Latency(us) 00:18:11.410 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:11.410 =================================================================================================================== 00:18:11.410 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:11.410 03:50:30 -- 
common/autotest_common.sh@950 -- # wait 2382506 00:18:11.668 03:50:30 -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:18:11.668 03:50:30 -- target/queue_depth.sh@43 -- # nvmftestfini 00:18:11.668 03:50:30 -- nvmf/common.sh@476 -- # nvmfcleanup 00:18:11.668 03:50:30 -- nvmf/common.sh@116 -- # sync 00:18:11.668 03:50:30 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:18:11.668 03:50:30 -- nvmf/common.sh@119 -- # set +e 00:18:11.668 03:50:30 -- nvmf/common.sh@120 -- # for i in {1..20} 00:18:11.668 03:50:30 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:18:11.668 rmmod nvme_tcp 00:18:11.668 rmmod nvme_fabrics 00:18:11.668 rmmod nvme_keyring 00:18:11.668 03:50:30 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:18:11.668 03:50:30 -- nvmf/common.sh@123 -- # set -e 00:18:11.668 03:50:30 -- nvmf/common.sh@124 -- # return 0 00:18:11.668 03:50:30 -- nvmf/common.sh@477 -- # '[' -n 2382349 ']' 00:18:11.668 03:50:30 -- nvmf/common.sh@478 -- # killprocess 2382349 00:18:11.668 03:50:30 -- common/autotest_common.sh@926 -- # '[' -z 2382349 ']' 00:18:11.668 03:50:30 -- common/autotest_common.sh@930 -- # kill -0 2382349 00:18:11.668 03:50:30 -- common/autotest_common.sh@931 -- # uname 00:18:11.668 03:50:30 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:11.668 03:50:30 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2382349 00:18:11.668 03:50:30 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:18:11.668 03:50:30 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:18:11.668 03:50:30 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2382349' 00:18:11.668 killing process with pid 2382349 00:18:11.668 03:50:30 -- common/autotest_common.sh@945 -- # kill 2382349 00:18:11.668 03:50:30 -- common/autotest_common.sh@950 -- # wait 2382349 00:18:11.928 03:50:30 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:18:11.928 03:50:30 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:18:11.928 03:50:30 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:18:11.928 03:50:30 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:11.928 03:50:30 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:18:11.928 03:50:30 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:11.928 03:50:30 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:11.928 03:50:30 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:14.486 03:50:32 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:18:14.486 00:18:14.486 real 0m17.337s 00:18:14.486 user 0m24.873s 00:18:14.486 sys 0m3.208s 00:18:14.486 03:50:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:14.486 03:50:32 -- common/autotest_common.sh@10 -- # set +x 00:18:14.486 ************************************ 00:18:14.486 END TEST nvmf_queue_depth 00:18:14.486 ************************************ 00:18:14.486 03:50:32 -- nvmf/nvmf.sh@51 -- # run_test nvmf_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:18:14.486 03:50:32 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:18:14.486 03:50:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:14.486 03:50:32 -- common/autotest_common.sh@10 -- # set +x 00:18:14.486 ************************************ 00:18:14.486 START TEST nvmf_multipath 00:18:14.486 ************************************ 00:18:14.486 03:50:32 -- common/autotest_common.sh@1104 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:18:14.486 * Looking for test storage... 00:18:14.486 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:14.486 03:50:32 -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:14.486 03:50:32 -- nvmf/common.sh@7 -- # uname -s 00:18:14.486 03:50:32 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:14.486 03:50:32 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:14.486 03:50:32 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:14.486 03:50:32 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:14.486 03:50:32 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:14.486 03:50:32 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:14.486 03:50:32 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:14.486 03:50:32 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:14.486 03:50:32 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:14.486 03:50:32 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:14.486 03:50:32 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:14.486 03:50:32 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:14.486 03:50:32 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:14.486 03:50:32 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:14.486 03:50:32 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:14.486 03:50:32 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:14.486 03:50:32 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:14.486 03:50:32 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:14.486 03:50:32 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:14.486 03:50:32 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:14.486 03:50:32 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:14.486 03:50:32 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:14.486 03:50:32 -- paths/export.sh@5 -- # export PATH 00:18:14.486 03:50:32 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:14.486 03:50:32 -- nvmf/common.sh@46 -- # : 0 00:18:14.486 03:50:32 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:18:14.487 03:50:32 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:18:14.487 03:50:32 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:18:14.487 03:50:32 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:14.487 03:50:32 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:14.487 03:50:32 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:18:14.487 03:50:32 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:18:14.487 03:50:32 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:18:14.487 03:50:32 -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:14.487 03:50:32 -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:14.487 03:50:32 -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:18:14.487 03:50:32 -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:14.487 03:50:32 -- target/multipath.sh@43 -- # nvmftestinit 00:18:14.487 03:50:32 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:18:14.487 03:50:32 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:14.487 03:50:32 -- nvmf/common.sh@436 -- # prepare_net_devs 00:18:14.487 03:50:32 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:18:14.487 03:50:32 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:18:14.487 03:50:32 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:14.487 03:50:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:14.487 03:50:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:14.487 03:50:32 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:18:14.487 03:50:32 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:18:14.487 03:50:32 -- nvmf/common.sh@284 -- # xtrace_disable 00:18:14.487 03:50:32 -- common/autotest_common.sh@10 -- # set +x 00:18:15.867 03:50:34 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:18:15.867 03:50:34 -- nvmf/common.sh@290 -- # pci_devs=() 00:18:15.867 03:50:34 -- nvmf/common.sh@290 -- # local -a pci_devs 00:18:15.867 03:50:34 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:18:15.867 03:50:34 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:18:15.867 03:50:34 -- nvmf/common.sh@292 -- # pci_drivers=() 00:18:15.867 03:50:34 -- 
nvmf/common.sh@292 -- # local -A pci_drivers 00:18:15.867 03:50:34 -- nvmf/common.sh@294 -- # net_devs=() 00:18:15.867 03:50:34 -- nvmf/common.sh@294 -- # local -ga net_devs 00:18:15.867 03:50:34 -- nvmf/common.sh@295 -- # e810=() 00:18:15.867 03:50:34 -- nvmf/common.sh@295 -- # local -ga e810 00:18:15.867 03:50:34 -- nvmf/common.sh@296 -- # x722=() 00:18:15.867 03:50:34 -- nvmf/common.sh@296 -- # local -ga x722 00:18:15.867 03:50:34 -- nvmf/common.sh@297 -- # mlx=() 00:18:15.867 03:50:34 -- nvmf/common.sh@297 -- # local -ga mlx 00:18:15.867 03:50:34 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:15.867 03:50:34 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:15.867 03:50:34 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:15.867 03:50:34 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:15.867 03:50:34 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:15.867 03:50:34 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:15.867 03:50:34 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:15.867 03:50:34 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:15.867 03:50:34 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:15.867 03:50:34 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:15.867 03:50:34 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:15.867 03:50:34 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:18:15.867 03:50:34 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:18:15.867 03:50:34 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:18:15.867 03:50:34 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:18:15.867 03:50:34 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:18:15.867 03:50:34 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:18:15.867 03:50:34 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:15.867 03:50:34 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:15.867 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:15.867 03:50:34 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:18:15.867 03:50:34 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:18:15.867 03:50:34 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:15.867 03:50:34 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:15.867 03:50:34 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:18:15.867 03:50:34 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:15.867 03:50:34 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:15.867 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:15.867 03:50:34 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:18:15.867 03:50:34 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:18:15.867 03:50:34 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:15.867 03:50:34 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:15.867 03:50:34 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:18:15.867 03:50:34 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:18:15.867 03:50:34 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:18:15.867 03:50:34 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:18:15.867 03:50:34 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:15.867 03:50:34 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:15.867 03:50:34 -- nvmf/common.sh@383 -- # (( 1 
== 0 )) 00:18:15.867 03:50:34 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:15.867 03:50:34 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:15.867 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:15.867 03:50:34 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:15.867 03:50:34 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:15.867 03:50:34 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:15.867 03:50:34 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:15.867 03:50:34 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:15.867 03:50:34 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:15.867 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:15.867 03:50:34 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:15.867 03:50:34 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:18:15.867 03:50:34 -- nvmf/common.sh@402 -- # is_hw=yes 00:18:15.867 03:50:34 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:18:15.867 03:50:34 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:18:15.867 03:50:34 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:18:15.867 03:50:34 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:15.867 03:50:34 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:15.867 03:50:34 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:15.867 03:50:34 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:18:15.867 03:50:34 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:15.867 03:50:34 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:15.867 03:50:34 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:18:15.867 03:50:34 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:15.867 03:50:34 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:15.867 03:50:34 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:18:15.867 03:50:34 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:18:15.867 03:50:34 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:18:15.867 03:50:34 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:16.125 03:50:34 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:16.125 03:50:34 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:16.125 03:50:34 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:18:16.125 03:50:34 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:16.125 03:50:34 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:16.125 03:50:34 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:16.125 03:50:34 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:18:16.125 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:16.125 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.123 ms 00:18:16.125 00:18:16.125 --- 10.0.0.2 ping statistics --- 00:18:16.125 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:16.125 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms 00:18:16.125 03:50:34 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:16.125 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:16.125 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.187 ms 00:18:16.125 00:18:16.125 --- 10.0.0.1 ping statistics --- 00:18:16.125 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:16.125 rtt min/avg/max/mdev = 0.187/0.187/0.187/0.000 ms 00:18:16.125 03:50:34 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:16.125 03:50:34 -- nvmf/common.sh@410 -- # return 0 00:18:16.125 03:50:34 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:18:16.125 03:50:34 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:16.125 03:50:34 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:18:16.125 03:50:34 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:18:16.125 03:50:34 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:16.125 03:50:34 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:18:16.125 03:50:34 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:18:16.125 03:50:34 -- target/multipath.sh@45 -- # '[' -z ']' 00:18:16.125 03:50:34 -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:18:16.125 only one NIC for nvmf test 00:18:16.125 03:50:34 -- target/multipath.sh@47 -- # nvmftestfini 00:18:16.125 03:50:34 -- nvmf/common.sh@476 -- # nvmfcleanup 00:18:16.126 03:50:34 -- nvmf/common.sh@116 -- # sync 00:18:16.126 03:50:34 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:18:16.126 03:50:34 -- nvmf/common.sh@119 -- # set +e 00:18:16.126 03:50:34 -- nvmf/common.sh@120 -- # for i in {1..20} 00:18:16.126 03:50:34 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:18:16.126 rmmod nvme_tcp 00:18:16.126 rmmod nvme_fabrics 00:18:16.126 rmmod nvme_keyring 00:18:16.126 03:50:34 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:18:16.126 03:50:34 -- nvmf/common.sh@123 -- # set -e 00:18:16.126 03:50:34 -- nvmf/common.sh@124 -- # return 0 00:18:16.126 03:50:34 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:18:16.126 03:50:34 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:18:16.126 03:50:34 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:18:16.126 03:50:34 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:18:16.126 03:50:34 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:16.126 03:50:34 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:18:16.126 03:50:34 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:16.126 03:50:34 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:16.126 03:50:34 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:18.671 03:50:36 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:18:18.671 03:50:36 -- target/multipath.sh@48 -- # exit 0 00:18:18.671 03:50:36 -- target/multipath.sh@1 -- # nvmftestfini 00:18:18.671 03:50:36 -- nvmf/common.sh@476 -- # nvmfcleanup 00:18:18.671 03:50:36 -- nvmf/common.sh@116 -- # sync 00:18:18.671 03:50:36 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:18:18.671 03:50:36 -- nvmf/common.sh@119 -- # set +e 00:18:18.671 03:50:36 -- nvmf/common.sh@120 -- # for i in {1..20} 00:18:18.671 03:50:36 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:18:18.671 03:50:37 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:18:18.671 03:50:37 -- nvmf/common.sh@123 -- # set -e 00:18:18.671 03:50:37 -- nvmf/common.sh@124 -- # return 0 00:18:18.671 03:50:37 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:18:18.671 03:50:37 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:18:18.671 03:50:37 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:18:18.671 03:50:37 -- nvmf/common.sh@484 -- # 
nvmf_tcp_fini 00:18:18.671 03:50:37 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:18.671 03:50:37 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:18:18.671 03:50:37 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:18.671 03:50:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:18.671 03:50:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:18.671 03:50:37 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:18:18.671 00:18:18.671 real 0m4.196s 00:18:18.671 user 0m0.753s 00:18:18.671 sys 0m1.412s 00:18:18.671 03:50:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:18.671 03:50:37 -- common/autotest_common.sh@10 -- # set +x 00:18:18.671 ************************************ 00:18:18.671 END TEST nvmf_multipath 00:18:18.671 ************************************ 00:18:18.671 03:50:37 -- nvmf/nvmf.sh@52 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:18:18.671 03:50:37 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:18:18.671 03:50:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:18.671 03:50:37 -- common/autotest_common.sh@10 -- # set +x 00:18:18.671 ************************************ 00:18:18.671 START TEST nvmf_zcopy 00:18:18.671 ************************************ 00:18:18.671 03:50:37 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:18:18.671 * Looking for test storage... 00:18:18.671 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:18.671 03:50:37 -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:18.671 03:50:37 -- nvmf/common.sh@7 -- # uname -s 00:18:18.671 03:50:37 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:18.671 03:50:37 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:18.671 03:50:37 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:18.671 03:50:37 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:18.671 03:50:37 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:18.671 03:50:37 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:18.671 03:50:37 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:18.671 03:50:37 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:18.671 03:50:37 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:18.671 03:50:37 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:18.671 03:50:37 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:18.671 03:50:37 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:18.671 03:50:37 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:18.671 03:50:37 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:18.671 03:50:37 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:18.671 03:50:37 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:18.671 03:50:37 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:18.671 03:50:37 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:18.671 03:50:37 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:18.671 03:50:37 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:18.671 03:50:37 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:18.671 03:50:37 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:18.671 03:50:37 -- paths/export.sh@5 -- # export PATH 00:18:18.671 03:50:37 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:18.671 03:50:37 -- nvmf/common.sh@46 -- # : 0 00:18:18.671 03:50:37 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:18:18.671 03:50:37 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:18:18.671 03:50:37 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:18:18.671 03:50:37 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:18.671 03:50:37 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:18.671 03:50:37 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:18:18.671 03:50:37 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:18:18.671 03:50:37 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:18:18.671 03:50:37 -- target/zcopy.sh@12 -- # nvmftestinit 00:18:18.671 03:50:37 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:18:18.671 03:50:37 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:18.671 03:50:37 -- nvmf/common.sh@436 -- # prepare_net_devs 00:18:18.671 03:50:37 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:18:18.671 03:50:37 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:18:18.671 03:50:37 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:18.671 03:50:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:18.671 03:50:37 -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:18.671 03:50:37 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:18:18.671 03:50:37 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:18:18.671 03:50:37 -- nvmf/common.sh@284 -- # xtrace_disable 00:18:18.671 03:50:37 -- common/autotest_common.sh@10 -- # set +x 00:18:20.575 03:50:39 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:18:20.575 03:50:39 -- nvmf/common.sh@290 -- # pci_devs=() 00:18:20.575 03:50:39 -- nvmf/common.sh@290 -- # local -a pci_devs 00:18:20.575 03:50:39 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:18:20.575 03:50:39 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:18:20.575 03:50:39 -- nvmf/common.sh@292 -- # pci_drivers=() 00:18:20.575 03:50:39 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:18:20.575 03:50:39 -- nvmf/common.sh@294 -- # net_devs=() 00:18:20.575 03:50:39 -- nvmf/common.sh@294 -- # local -ga net_devs 00:18:20.575 03:50:39 -- nvmf/common.sh@295 -- # e810=() 00:18:20.575 03:50:39 -- nvmf/common.sh@295 -- # local -ga e810 00:18:20.575 03:50:39 -- nvmf/common.sh@296 -- # x722=() 00:18:20.575 03:50:39 -- nvmf/common.sh@296 -- # local -ga x722 00:18:20.575 03:50:39 -- nvmf/common.sh@297 -- # mlx=() 00:18:20.575 03:50:39 -- nvmf/common.sh@297 -- # local -ga mlx 00:18:20.575 03:50:39 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:20.575 03:50:39 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:20.575 03:50:39 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:20.576 03:50:39 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:20.576 03:50:39 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:20.576 03:50:39 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:20.576 03:50:39 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:20.576 03:50:39 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:20.576 03:50:39 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:20.576 03:50:39 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:20.576 03:50:39 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:20.576 03:50:39 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:18:20.576 03:50:39 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:18:20.576 03:50:39 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:18:20.576 03:50:39 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:18:20.576 03:50:39 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:18:20.576 03:50:39 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:18:20.576 03:50:39 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:20.576 03:50:39 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:20.576 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:20.576 03:50:39 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:18:20.576 03:50:39 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:18:20.576 03:50:39 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:20.576 03:50:39 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:20.576 03:50:39 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:18:20.576 03:50:39 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:20.576 03:50:39 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:20.576 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:20.576 
03:50:39 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:18:20.576 03:50:39 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:18:20.576 03:50:39 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:20.576 03:50:39 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:20.576 03:50:39 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:18:20.576 03:50:39 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:18:20.576 03:50:39 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:18:20.576 03:50:39 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:18:20.576 03:50:39 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:20.576 03:50:39 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:20.576 03:50:39 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:20.576 03:50:39 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:20.576 03:50:39 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:20.576 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:20.576 03:50:39 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:20.576 03:50:39 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:20.576 03:50:39 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:20.576 03:50:39 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:20.576 03:50:39 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:20.576 03:50:39 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:20.576 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:20.576 03:50:39 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:20.576 03:50:39 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:18:20.576 03:50:39 -- nvmf/common.sh@402 -- # is_hw=yes 00:18:20.576 03:50:39 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:18:20.576 03:50:39 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:18:20.576 03:50:39 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:18:20.576 03:50:39 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:20.576 03:50:39 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:20.576 03:50:39 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:20.576 03:50:39 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:18:20.576 03:50:39 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:20.576 03:50:39 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:20.576 03:50:39 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:18:20.576 03:50:39 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:20.576 03:50:39 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:20.576 03:50:39 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:18:20.576 03:50:39 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:18:20.576 03:50:39 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:18:20.576 03:50:39 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:20.576 03:50:39 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:20.576 03:50:39 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:20.576 03:50:39 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:18:20.576 03:50:39 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:20.576 03:50:39 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:20.576 03:50:39 -- 
nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:20.576 03:50:39 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:18:20.576 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:20.576 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.216 ms 00:18:20.576 00:18:20.576 --- 10.0.0.2 ping statistics --- 00:18:20.576 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:20.576 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:18:20.576 03:50:39 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:20.576 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:20.576 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.166 ms 00:18:20.576 00:18:20.576 --- 10.0.0.1 ping statistics --- 00:18:20.576 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:20.576 rtt min/avg/max/mdev = 0.166/0.166/0.166/0.000 ms 00:18:20.576 03:50:39 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:20.576 03:50:39 -- nvmf/common.sh@410 -- # return 0 00:18:20.576 03:50:39 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:18:20.576 03:50:39 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:20.576 03:50:39 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:18:20.576 03:50:39 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:18:20.576 03:50:39 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:20.576 03:50:39 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:18:20.576 03:50:39 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:18:20.576 03:50:39 -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:18:20.576 03:50:39 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:18:20.576 03:50:39 -- common/autotest_common.sh@712 -- # xtrace_disable 00:18:20.576 03:50:39 -- common/autotest_common.sh@10 -- # set +x 00:18:20.576 03:50:39 -- nvmf/common.sh@469 -- # nvmfpid=2387772 00:18:20.576 03:50:39 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:20.576 03:50:39 -- nvmf/common.sh@470 -- # waitforlisten 2387772 00:18:20.576 03:50:39 -- common/autotest_common.sh@819 -- # '[' -z 2387772 ']' 00:18:20.576 03:50:39 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:20.576 03:50:39 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:20.576 03:50:39 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:20.576 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:20.576 03:50:39 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:20.576 03:50:39 -- common/autotest_common.sh@10 -- # set +x 00:18:20.576 [2024-07-14 03:50:39.370879] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:18:20.576 [2024-07-14 03:50:39.370981] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:20.576 EAL: No free 2048 kB hugepages reported on node 1 00:18:20.576 [2024-07-14 03:50:39.436053] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:20.836 [2024-07-14 03:50:39.524011] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:20.836 [2024-07-14 03:50:39.524139] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:20.836 [2024-07-14 03:50:39.524156] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:20.836 [2024-07-14 03:50:39.524169] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:20.836 [2024-07-14 03:50:39.524196] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:21.403 03:50:40 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:21.403 03:50:40 -- common/autotest_common.sh@852 -- # return 0 00:18:21.403 03:50:40 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:18:21.403 03:50:40 -- common/autotest_common.sh@718 -- # xtrace_disable 00:18:21.403 03:50:40 -- common/autotest_common.sh@10 -- # set +x 00:18:21.403 03:50:40 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:21.403 03:50:40 -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:18:21.403 03:50:40 -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:18:21.403 03:50:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:21.403 03:50:40 -- common/autotest_common.sh@10 -- # set +x 00:18:21.403 [2024-07-14 03:50:40.339546] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:21.662 03:50:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:21.662 03:50:40 -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:18:21.662 03:50:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:21.662 03:50:40 -- common/autotest_common.sh@10 -- # set +x 00:18:21.662 03:50:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:21.662 03:50:40 -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:21.662 03:50:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:21.662 03:50:40 -- common/autotest_common.sh@10 -- # set +x 00:18:21.662 [2024-07-14 03:50:40.355718] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:21.662 03:50:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:21.662 03:50:40 -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:18:21.662 03:50:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:21.662 03:50:40 -- common/autotest_common.sh@10 -- # set +x 00:18:21.662 03:50:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:21.662 03:50:40 -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:18:21.662 03:50:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:21.662 03:50:40 -- common/autotest_common.sh@10 -- # set +x 00:18:21.662 malloc0 00:18:21.662 03:50:40 -- common/autotest_common.sh@579 -- # [[ 
0 == 0 ]] 00:18:21.662 03:50:40 -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:21.662 03:50:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:21.662 03:50:40 -- common/autotest_common.sh@10 -- # set +x 00:18:21.662 03:50:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:21.662 03:50:40 -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:18:21.662 03:50:40 -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:18:21.662 03:50:40 -- nvmf/common.sh@520 -- # config=() 00:18:21.662 03:50:40 -- nvmf/common.sh@520 -- # local subsystem config 00:18:21.662 03:50:40 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:18:21.662 03:50:40 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:18:21.662 { 00:18:21.662 "params": { 00:18:21.662 "name": "Nvme$subsystem", 00:18:21.662 "trtype": "$TEST_TRANSPORT", 00:18:21.662 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:21.662 "adrfam": "ipv4", 00:18:21.662 "trsvcid": "$NVMF_PORT", 00:18:21.662 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:21.662 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:21.662 "hdgst": ${hdgst:-false}, 00:18:21.662 "ddgst": ${ddgst:-false} 00:18:21.662 }, 00:18:21.662 "method": "bdev_nvme_attach_controller" 00:18:21.662 } 00:18:21.662 EOF 00:18:21.662 )") 00:18:21.662 03:50:40 -- nvmf/common.sh@542 -- # cat 00:18:21.662 03:50:40 -- nvmf/common.sh@544 -- # jq . 00:18:21.662 03:50:40 -- nvmf/common.sh@545 -- # IFS=, 00:18:21.662 03:50:40 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:18:21.662 "params": { 00:18:21.662 "name": "Nvme1", 00:18:21.662 "trtype": "tcp", 00:18:21.662 "traddr": "10.0.0.2", 00:18:21.662 "adrfam": "ipv4", 00:18:21.662 "trsvcid": "4420", 00:18:21.662 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:21.662 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:21.662 "hdgst": false, 00:18:21.662 "ddgst": false 00:18:21.662 }, 00:18:21.662 "method": "bdev_nvme_attach_controller" 00:18:21.662 }' 00:18:21.662 [2024-07-14 03:50:40.437700] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:18:21.662 [2024-07-14 03:50:40.437777] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2387904 ] 00:18:21.662 EAL: No free 2048 kB hugepages reported on node 1 00:18:21.662 [2024-07-14 03:50:40.509403] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:21.662 [2024-07-14 03:50:40.601532] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:21.921 Running I/O for 10 seconds... 
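The zcopy target configuration traced above boils down to a handful of RPCs against the in-namespace nvmf_tgt, after which bdevperf is pointed at the subsystem through a generated JSON config (gen_nvmf_target_json emits the bdev_nvme_attach_controller stanza shown above and passes it to bdevperf as --json /dev/fd/N). A rough equivalent of the same calls, assuming scripts/rpc.py is invoked directly against the default /var/tmp/spdk.sock rather than through the harness's rpc_cmd wrapper:

  scripts/rpc.py nvmf_create_transport -t tcp -o -c 0 --zcopy       # TCP transport with zero-copy enabled
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0               # 32 MiB RAM-backed namespace, 4 KiB blocks
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  build/examples/bdevperf --json <(gen_nvmf_target_json) -t 10 -q 128 -w verify -o 8192

This sketch only restates flags already present in the trace; in the run itself rpc_cmd forwards to the scripts/rpc.py path assigned to rpc_py earlier. The 10-second verify run at queue depth 128 with 8 KiB I/O produces the results that follow.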
00:18:31.938
00:18:31.938 Latency(us)
00:18:31.938 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:31.938 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192)
00:18:31.938 Verification LBA range: start 0x0 length 0x1000
00:18:31.938 Nvme1n1 : 10.01 8841.59 69.07 0.00 0.00 14442.64 1674.81 22913.33
00:18:31.938 ===================================================================================================================
00:18:31.938 Total : 8841.59 69.07 0.00 0.00 14442.64 1674.81 22913.33
00:18:32.198 03:50:51 -- target/zcopy.sh@39 -- # perfpid=2389258 00:18:32.198 03:50:51 -- target/zcopy.sh@41 -- # xtrace_disable 00:18:32.198 03:50:51 -- common/autotest_common.sh@10 -- # set +x 00:18:32.198 03:50:51 -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:18:32.198 03:50:51 -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:18:32.198 03:50:51 -- nvmf/common.sh@520 -- # config=() 00:18:32.198 03:50:51 -- nvmf/common.sh@520 -- # local subsystem config 00:18:32.198 03:50:51 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:18:32.198 03:50:51 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:18:32.198 { 00:18:32.198 "params": { 00:18:32.198 "name": "Nvme$subsystem", 00:18:32.198 "trtype": "$TEST_TRANSPORT", 00:18:32.198 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:32.198 "adrfam": "ipv4", 00:18:32.198 "trsvcid": "$NVMF_PORT", 00:18:32.198 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:32.198 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:32.198 "hdgst": ${hdgst:-false}, 00:18:32.198 "ddgst": ${ddgst:-false} 00:18:32.198 }, 00:18:32.198 "method": "bdev_nvme_attach_controller" 00:18:32.198 } 00:18:32.198 EOF 00:18:32.198 )") 00:18:32.198 03:50:51 -- nvmf/common.sh@542 -- # cat 00:18:32.198 [2024-07-14 03:50:51.061099] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.198 [2024-07-14 03:50:51.061145] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.198 03:50:51 -- nvmf/common.sh@544 -- # jq . 
00:18:32.198 03:50:51 -- nvmf/common.sh@545 -- # IFS=, 00:18:32.198 03:50:51 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:18:32.198 "params": { 00:18:32.198 "name": "Nvme1", 00:18:32.198 "trtype": "tcp", 00:18:32.198 "traddr": "10.0.0.2", 00:18:32.198 "adrfam": "ipv4", 00:18:32.198 "trsvcid": "4420", 00:18:32.198 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:32.198 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:32.198 "hdgst": false, 00:18:32.198 "ddgst": false 00:18:32.198 }, 00:18:32.198 "method": "bdev_nvme_attach_controller" 00:18:32.198 }' 00:18:32.198 [2024-07-14 03:50:51.069060] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.198 [2024-07-14 03:50:51.069088] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.198 [2024-07-14 03:50:51.077080] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.198 [2024-07-14 03:50:51.077106] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.198 [2024-07-14 03:50:51.085096] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.198 [2024-07-14 03:50:51.085119] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.198 [2024-07-14 03:50:51.093119] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.198 [2024-07-14 03:50:51.093158] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.198 [2024-07-14 03:50:51.096484] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:18:32.198 [2024-07-14 03:50:51.096542] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2389258 ] 00:18:32.198 [2024-07-14 03:50:51.101150] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.199 [2024-07-14 03:50:51.101172] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.199 [2024-07-14 03:50:51.109168] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.199 [2024-07-14 03:50:51.109189] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.199 [2024-07-14 03:50:51.117196] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.199 [2024-07-14 03:50:51.117216] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.199 [2024-07-14 03:50:51.125208] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.199 [2024-07-14 03:50:51.125229] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.199 EAL: No free 2048 kB hugepages reported on node 1 00:18:32.199 [2024-07-14 03:50:51.133233] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.199 [2024-07-14 03:50:51.133258] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.459 [2024-07-14 03:50:51.141257] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.459 [2024-07-14 03:50:51.141283] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.459 [2024-07-14 03:50:51.149295] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested 
NSID 1 already in use 00:18:32.459 [2024-07-14 03:50:51.149322] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.459 [2024-07-14 03:50:51.157315] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.459 [2024-07-14 03:50:51.157340] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.459 [2024-07-14 03:50:51.162360] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:32.459 [2024-07-14 03:50:51.165337] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.459 [2024-07-14 03:50:51.165362] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.459 [2024-07-14 03:50:51.173391] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.459 [2024-07-14 03:50:51.173429] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.459 [2024-07-14 03:50:51.181392] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.459 [2024-07-14 03:50:51.181420] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.459 [2024-07-14 03:50:51.189427] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.459 [2024-07-14 03:50:51.189453] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.459 [2024-07-14 03:50:51.197426] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.459 [2024-07-14 03:50:51.197451] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.459 [2024-07-14 03:50:51.205446] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.459 [2024-07-14 03:50:51.205471] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.459 [2024-07-14 03:50:51.213473] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.459 [2024-07-14 03:50:51.213500] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.459 [2024-07-14 03:50:51.221527] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.459 [2024-07-14 03:50:51.221565] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.459 [2024-07-14 03:50:51.229516] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.459 [2024-07-14 03:50:51.229542] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.459 [2024-07-14 03:50:51.237536] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.459 [2024-07-14 03:50:51.237561] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.459 [2024-07-14 03:50:51.245558] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.459 [2024-07-14 03:50:51.245584] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.459 [2024-07-14 03:50:51.253580] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.459 [2024-07-14 03:50:51.253605] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.459 [2024-07-14 03:50:51.255631] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:32.459 [2024-07-14 03:50:51.261600] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.459 [2024-07-14 03:50:51.261626] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.459 [2024-07-14 03:50:51.269624] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.459 [2024-07-14 03:50:51.269650] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.459 [2024-07-14 03:50:51.277674] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.459 [2024-07-14 03:50:51.277710] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.459 [2024-07-14 03:50:51.285696] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.459 [2024-07-14 03:50:51.285734] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.459 [2024-07-14 03:50:51.293716] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.459 [2024-07-14 03:50:51.293755] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.459 [2024-07-14 03:50:51.301739] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.459 [2024-07-14 03:50:51.301777] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.459 [2024-07-14 03:50:51.309763] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.459 [2024-07-14 03:50:51.309803] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.459 [2024-07-14 03:50:51.317780] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.459 [2024-07-14 03:50:51.317820] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.459 [2024-07-14 03:50:51.325780] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.459 [2024-07-14 03:50:51.325805] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.459 [2024-07-14 03:50:51.333825] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.459 [2024-07-14 03:50:51.333861] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.459 [2024-07-14 03:50:51.341847] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.459 [2024-07-14 03:50:51.341892] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.459 [2024-07-14 03:50:51.349880] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.459 [2024-07-14 03:50:51.349929] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.459 [2024-07-14 03:50:51.357875] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.459 [2024-07-14 03:50:51.357900] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.459 [2024-07-14 03:50:51.365901] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.459 [2024-07-14 03:50:51.365942] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.459 [2024-07-14 03:50:51.373942] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.459 [2024-07-14 03:50:51.373968] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.459 [2024-07-14 03:50:51.381974] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.459 [2024-07-14 03:50:51.381998] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.459 [2024-07-14 03:50:51.389990] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.459 [2024-07-14 03:50:51.390013] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.459 [2024-07-14 03:50:51.398013] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.459 [2024-07-14 03:50:51.398037] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.718 [2024-07-14 03:50:51.406036] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.718 [2024-07-14 03:50:51.406059] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.718 [2024-07-14 03:50:51.414062] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.718 [2024-07-14 03:50:51.414084] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.718 [2024-07-14 03:50:51.422082] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.718 [2024-07-14 03:50:51.422104] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.718 [2024-07-14 03:50:51.430103] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.718 [2024-07-14 03:50:51.430125] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.718 [2024-07-14 03:50:51.438126] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.718 [2024-07-14 03:50:51.438167] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.718 [2024-07-14 03:50:51.446224] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.718 [2024-07-14 03:50:51.446252] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.718 [2024-07-14 03:50:51.454224] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.718 [2024-07-14 03:50:51.454250] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.718 [2024-07-14 03:50:51.462243] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.718 [2024-07-14 03:50:51.462268] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.718 [2024-07-14 03:50:51.470266] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.718 [2024-07-14 03:50:51.470292] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.718 [2024-07-14 03:50:51.478287] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.718 [2024-07-14 03:50:51.478313] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.718 [2024-07-14 03:50:51.486310] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.718 [2024-07-14 03:50:51.486335] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.718 [2024-07-14 03:50:51.494339] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.718 [2024-07-14 03:50:51.494367] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.718 [2024-07-14 03:50:51.502358] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.718 [2024-07-14 03:50:51.502383] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.718 [2024-07-14 03:50:51.510378] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.718 [2024-07-14 03:50:51.510403] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.718 [2024-07-14 03:50:51.518402] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.718 [2024-07-14 03:50:51.518427] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.718 [2024-07-14 03:50:51.526426] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.718 [2024-07-14 03:50:51.526452] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.718 [2024-07-14 03:50:51.534489] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.718 [2024-07-14 03:50:51.534517] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.718 [2024-07-14 03:50:51.542492] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.718 [2024-07-14 03:50:51.542518] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.718 [2024-07-14 03:50:51.550513] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.718 [2024-07-14 03:50:51.550538] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.718 [2024-07-14 03:50:51.558536] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.718 [2024-07-14 03:50:51.558561] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.718 [2024-07-14 03:50:51.566560] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.718 [2024-07-14 03:50:51.566585] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.718 [2024-07-14 03:50:51.574583] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.718 [2024-07-14 03:50:51.574608] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.718 [2024-07-14 03:50:51.582608] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.718 [2024-07-14 03:50:51.582635] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.718 [2024-07-14 03:50:51.590638] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.718 [2024-07-14 03:50:51.590668] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.718 [2024-07-14 03:50:51.598656] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.718 [2024-07-14 03:50:51.598685] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.718 Running I/O for 5 seconds... 
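The long run of paired messages that follows, one "Requested NSID 1 already in use" from spdk_nvmf_subsystem_add_ns_ext and one "Unable to add namespace" from nvmf_rpc_ns_paused per iteration, is the target rejecting repeated nvmf_subsystem_add_ns RPCs for NSID 1, which is already attached to cnode1; the RPCs appear to be issued in a loop by the test while the second bdevperf instance runs its randrw job. Reproduced by hand, each attempt would look roughly like this (a sketch under the same scripts/rpc.py assumptions as the setup commands above):

    # Fails while NSID 1 is already attached; the target logs the two errors repeated below
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1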
00:18:32.718 [2024-07-14 03:50:51.606680] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.718 [2024-07-14 03:50:51.606705] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.718 [2024-07-14 03:50:51.621003] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.719 [2024-07-14 03:50:51.621033] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.719 [2024-07-14 03:50:51.631422] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.719 [2024-07-14 03:50:51.631454] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.719 [2024-07-14 03:50:51.642010] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.719 [2024-07-14 03:50:51.642039] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.719 [2024-07-14 03:50:51.652752] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.719 [2024-07-14 03:50:51.652783] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.976 [2024-07-14 03:50:51.666050] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.976 [2024-07-14 03:50:51.666079] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.976 [2024-07-14 03:50:51.675803] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.976 [2024-07-14 03:50:51.675834] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.976 [2024-07-14 03:50:51.687467] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.976 [2024-07-14 03:50:51.687499] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.976 [2024-07-14 03:50:51.698406] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.976 [2024-07-14 03:50:51.698437] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.976 [2024-07-14 03:50:51.709099] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.976 [2024-07-14 03:50:51.709127] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.976 [2024-07-14 03:50:51.721921] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.976 [2024-07-14 03:50:51.721949] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.976 [2024-07-14 03:50:51.733297] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.976 [2024-07-14 03:50:51.733329] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.976 [2024-07-14 03:50:51.742420] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.976 [2024-07-14 03:50:51.742450] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.976 [2024-07-14 03:50:51.754010] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.976 [2024-07-14 03:50:51.754038] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.976 [2024-07-14 03:50:51.764022] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.976 
[2024-07-14 03:50:51.764050] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.976 [2024-07-14 03:50:51.775021] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.976 [2024-07-14 03:50:51.775056] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.976 [2024-07-14 03:50:51.784696] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.976 [2024-07-14 03:50:51.784724] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.976 [2024-07-14 03:50:51.795301] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.976 [2024-07-14 03:50:51.795328] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.976 [2024-07-14 03:50:51.807556] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.976 [2024-07-14 03:50:51.807585] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.976 [2024-07-14 03:50:51.816413] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.976 [2024-07-14 03:50:51.816440] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.976 [2024-07-14 03:50:51.827232] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.976 [2024-07-14 03:50:51.827261] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.976 [2024-07-14 03:50:51.836980] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.976 [2024-07-14 03:50:51.837009] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.976 [2024-07-14 03:50:51.846261] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.976 [2024-07-14 03:50:51.846289] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.976 [2024-07-14 03:50:51.857148] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.976 [2024-07-14 03:50:51.857176] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.976 [2024-07-14 03:50:51.867697] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.976 [2024-07-14 03:50:51.867725] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.976 [2024-07-14 03:50:51.878361] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.976 [2024-07-14 03:50:51.878389] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.976 [2024-07-14 03:50:51.888976] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.976 [2024-07-14 03:50:51.889004] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.976 [2024-07-14 03:50:51.899374] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.976 [2024-07-14 03:50:51.899402] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.976 [2024-07-14 03:50:51.911467] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.976 [2024-07-14 03:50:51.911497] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.234 [2024-07-14 03:50:51.920349] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.234 [2024-07-14 03:50:51.920377] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.234 [2024-07-14 03:50:51.931060] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.234 [2024-07-14 03:50:51.931088] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.234 [2024-07-14 03:50:51.940301] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.234 [2024-07-14 03:50:51.940328] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.234 [2024-07-14 03:50:51.951093] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.234 [2024-07-14 03:50:51.951122] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.234 [2024-07-14 03:50:51.961353] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.234 [2024-07-14 03:50:51.961381] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.234 [2024-07-14 03:50:51.971642] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.234 [2024-07-14 03:50:51.971677] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.234 [2024-07-14 03:50:51.983561] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.234 [2024-07-14 03:50:51.983589] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.234 [2024-07-14 03:50:51.992627] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.234 [2024-07-14 03:50:51.992655] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.234 [2024-07-14 03:50:52.003418] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.234 [2024-07-14 03:50:52.003446] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.234 [2024-07-14 03:50:52.013672] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.234 [2024-07-14 03:50:52.013701] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.234 [2024-07-14 03:50:52.025981] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.234 [2024-07-14 03:50:52.026009] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.234 [2024-07-14 03:50:52.034756] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.234 [2024-07-14 03:50:52.034783] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.234 [2024-07-14 03:50:52.047093] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.234 [2024-07-14 03:50:52.047120] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.234 [2024-07-14 03:50:52.058283] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.234 [2024-07-14 03:50:52.058310] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.234 [2024-07-14 03:50:52.066993] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.234 [2024-07-14 03:50:52.067021] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.234 [2024-07-14 03:50:52.078159] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.234 [2024-07-14 03:50:52.078186] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.235 [2024-07-14 03:50:52.090203] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.235 [2024-07-14 03:50:52.090230] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.235 [2024-07-14 03:50:52.099465] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.235 [2024-07-14 03:50:52.099492] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.235 [2024-07-14 03:50:52.110162] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.235 [2024-07-14 03:50:52.110190] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.235 [2024-07-14 03:50:52.120182] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.235 [2024-07-14 03:50:52.120209] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.235 [2024-07-14 03:50:52.130350] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.235 [2024-07-14 03:50:52.130378] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.235 [2024-07-14 03:50:52.140410] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.235 [2024-07-14 03:50:52.140438] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.235 [2024-07-14 03:50:52.150340] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.235 [2024-07-14 03:50:52.150368] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.235 [2024-07-14 03:50:52.160499] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.235 [2024-07-14 03:50:52.160527] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.235 [2024-07-14 03:50:52.170446] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.235 [2024-07-14 03:50:52.170481] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.494 [2024-07-14 03:50:52.180678] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.494 [2024-07-14 03:50:52.180706] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.494 [2024-07-14 03:50:52.190590] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.494 [2024-07-14 03:50:52.190617] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.494 [2024-07-14 03:50:52.200794] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.494 [2024-07-14 03:50:52.200822] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.494 [2024-07-14 03:50:52.211337] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.494 [2024-07-14 03:50:52.211365] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.494 [2024-07-14 03:50:52.221568] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.494 [2024-07-14 03:50:52.221595] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.494 [2024-07-14 03:50:52.231994] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.494 [2024-07-14 03:50:52.232023] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.494 [2024-07-14 03:50:52.241489] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.494 [2024-07-14 03:50:52.241517] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.494 [2024-07-14 03:50:52.252112] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.494 [2024-07-14 03:50:52.252141] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.494 [2024-07-14 03:50:52.263256] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.494 [2024-07-14 03:50:52.263283] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.494 [2024-07-14 03:50:52.271951] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.494 [2024-07-14 03:50:52.271978] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.494 [2024-07-14 03:50:52.282288] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.494 [2024-07-14 03:50:52.282316] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.494 [2024-07-14 03:50:52.292715] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.494 [2024-07-14 03:50:52.292744] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.494 [2024-07-14 03:50:52.305203] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.494 [2024-07-14 03:50:52.305235] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.494 [2024-07-14 03:50:52.313978] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.494 [2024-07-14 03:50:52.314006] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.494 [2024-07-14 03:50:52.326306] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.495 [2024-07-14 03:50:52.326334] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.495 [2024-07-14 03:50:52.335716] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.495 [2024-07-14 03:50:52.335744] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.495 [2024-07-14 03:50:52.345643] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.495 [2024-07-14 03:50:52.345670] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.495 [2024-07-14 03:50:52.356392] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.495 [2024-07-14 03:50:52.356420] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.495 [2024-07-14 03:50:52.366641] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.495 [2024-07-14 03:50:52.366669] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.495 [2024-07-14 03:50:52.376570] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.495 [2024-07-14 03:50:52.376597] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.495 [2024-07-14 03:50:52.387337] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.495 [2024-07-14 03:50:52.387365] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.495 [2024-07-14 03:50:52.397413] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.495 [2024-07-14 03:50:52.397441] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.495 [2024-07-14 03:50:52.407742] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.495 [2024-07-14 03:50:52.407770] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.495 [2024-07-14 03:50:52.420116] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.495 [2024-07-14 03:50:52.420144] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.495 [2024-07-14 03:50:52.428882] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.495 [2024-07-14 03:50:52.428909] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.755 [2024-07-14 03:50:52.439599] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.755 [2024-07-14 03:50:52.439628] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.755 [2024-07-14 03:50:52.449967] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.755 [2024-07-14 03:50:52.449995] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.755 [2024-07-14 03:50:52.460182] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.755 [2024-07-14 03:50:52.460210] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.755 [2024-07-14 03:50:52.470630] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.755 [2024-07-14 03:50:52.470657] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.755 [2024-07-14 03:50:52.481140] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.755 [2024-07-14 03:50:52.481168] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.755 [2024-07-14 03:50:52.491701] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.755 [2024-07-14 03:50:52.491728] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.755 [2024-07-14 03:50:52.502047] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.755 [2024-07-14 03:50:52.502075] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.755 [2024-07-14 03:50:52.514341] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.755 [2024-07-14 03:50:52.514370] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.755 [2024-07-14 03:50:52.523841] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.755 [2024-07-14 03:50:52.523875] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.755 [2024-07-14 03:50:52.534745] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.755 [2024-07-14 03:50:52.534773] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.755 [2024-07-14 03:50:52.544963] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.755 [2024-07-14 03:50:52.544990] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.755 [2024-07-14 03:50:52.555209] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.755 [2024-07-14 03:50:52.555236] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.755 [2024-07-14 03:50:52.567487] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.755 [2024-07-14 03:50:52.567515] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.755 [2024-07-14 03:50:52.576978] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.755 [2024-07-14 03:50:52.577005] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.755 [2024-07-14 03:50:52.587791] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.755 [2024-07-14 03:50:52.587819] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.755 [2024-07-14 03:50:52.597816] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.755 [2024-07-14 03:50:52.597844] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.755 [2024-07-14 03:50:52.607877] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.755 [2024-07-14 03:50:52.607904] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.755 [2024-07-14 03:50:52.617565] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.755 [2024-07-14 03:50:52.617593] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.755 [2024-07-14 03:50:52.627786] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.755 [2024-07-14 03:50:52.627814] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.755 [2024-07-14 03:50:52.639922] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.755 [2024-07-14 03:50:52.639950] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.755 [2024-07-14 03:50:52.648433] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.755 [2024-07-14 03:50:52.648460] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.755 [2024-07-14 03:50:52.659312] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.755 [2024-07-14 03:50:52.659340] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.755 [2024-07-14 03:50:52.668887] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.755 [2024-07-14 03:50:52.668915] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.755 [2024-07-14 03:50:52.679223] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.755 [2024-07-14 03:50:52.679251] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.755 [2024-07-14 03:50:52.689703] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.755 [2024-07-14 03:50:52.689731] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.014 [2024-07-14 03:50:52.699780] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.014 [2024-07-14 03:50:52.699810] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.014 [2024-07-14 03:50:52.710417] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.014 [2024-07-14 03:50:52.710445] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.014 [2024-07-14 03:50:52.722323] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.014 [2024-07-14 03:50:52.722352] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.014 [2024-07-14 03:50:52.731298] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.014 [2024-07-14 03:50:52.731328] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.014 [2024-07-14 03:50:52.741855] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.014 [2024-07-14 03:50:52.741890] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.014 [2024-07-14 03:50:52.754003] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.014 [2024-07-14 03:50:52.754031] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.014 [2024-07-14 03:50:52.761943] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.014 [2024-07-14 03:50:52.761971] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.014 [2024-07-14 03:50:52.774382] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.014 [2024-07-14 03:50:52.774411] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.014 [2024-07-14 03:50:52.785457] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.014 [2024-07-14 03:50:52.785485] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.014 [2024-07-14 03:50:52.794113] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.014 [2024-07-14 03:50:52.794141] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.014 [2024-07-14 03:50:52.805404] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.014 [2024-07-14 03:50:52.805432] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.014 [2024-07-14 03:50:52.817504] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.014 [2024-07-14 03:50:52.817533] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.014 [2024-07-14 03:50:52.826946] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.014 [2024-07-14 03:50:52.826974] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.014 [2024-07-14 03:50:52.837720] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.014 [2024-07-14 03:50:52.837748] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.014 [2024-07-14 03:50:52.848019] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.014 [2024-07-14 03:50:52.848047] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.014 [2024-07-14 03:50:52.858620] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.014 [2024-07-14 03:50:52.858648] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.014 [2024-07-14 03:50:52.868905] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.014 [2024-07-14 03:50:52.868933] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.014 [2024-07-14 03:50:52.878839] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.014 [2024-07-14 03:50:52.878874] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.014 [2024-07-14 03:50:52.888990] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.014 [2024-07-14 03:50:52.889018] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.014 [2024-07-14 03:50:52.899123] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.014 [2024-07-14 03:50:52.899152] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.014 [2024-07-14 03:50:52.909371] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.014 [2024-07-14 03:50:52.909399] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.014 [2024-07-14 03:50:52.919468] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.014 [2024-07-14 03:50:52.919496] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.014 [2024-07-14 03:50:52.929526] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.014 [2024-07-14 03:50:52.929555] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.014 [2024-07-14 03:50:52.939887] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.014 [2024-07-14 03:50:52.939916] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.014 [2024-07-14 03:50:52.950301] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.014 [2024-07-14 03:50:52.950347] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.273 [2024-07-14 03:50:52.962650] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.273 [2024-07-14 03:50:52.962679] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.273 [2024-07-14 03:50:52.971729] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.273 [2024-07-14 03:50:52.971757] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:18:34.273 [2024-07-14 03:50:52.984003] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:18:34.273 [2024-07-14 03:50:52.984031] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same two error lines (subsystem.c:1793: "Requested NSID 1 already in use" followed by nvmf_rpc.c:1513: "Unable to add namespace") repeat continuously from 03:50:52.992852 through 03:50:56.120032, console timestamps 00:18:34.273 to 00:18:37.387 ...]
00:18:37.387 [2024-07-14 03:50:56.132304] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:18:37.387 [2024-07-14 03:50:56.132332] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:18:37.387 [2024-07-14 03:50:56.141521] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.387 [2024-07-14 03:50:56.141550] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.387 [2024-07-14 03:50:56.151749] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.387 [2024-07-14 03:50:56.151777] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.387 [2024-07-14 03:50:56.162129] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.387 [2024-07-14 03:50:56.162157] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.387 [2024-07-14 03:50:56.174226] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.388 [2024-07-14 03:50:56.174254] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.388 [2024-07-14 03:50:56.183302] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.388 [2024-07-14 03:50:56.183347] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.388 [2024-07-14 03:50:56.194137] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.388 [2024-07-14 03:50:56.194165] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.388 [2024-07-14 03:50:56.206115] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.388 [2024-07-14 03:50:56.206144] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.388 [2024-07-14 03:50:56.214840] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.388 [2024-07-14 03:50:56.214874] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.388 [2024-07-14 03:50:56.225618] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.388 [2024-07-14 03:50:56.225647] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.388 [2024-07-14 03:50:56.235638] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.388 [2024-07-14 03:50:56.235665] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.388 [2024-07-14 03:50:56.246132] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.388 [2024-07-14 03:50:56.246160] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.388 [2024-07-14 03:50:56.258320] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.388 [2024-07-14 03:50:56.258348] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.388 [2024-07-14 03:50:56.267516] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.388 [2024-07-14 03:50:56.267544] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.388 [2024-07-14 03:50:56.280404] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.388 [2024-07-14 03:50:56.280432] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.388 [2024-07-14 03:50:56.289803] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.388 [2024-07-14 03:50:56.289839] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.388 [2024-07-14 03:50:56.301044] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.388 [2024-07-14 03:50:56.301072] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.388 [2024-07-14 03:50:56.311250] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.388 [2024-07-14 03:50:56.311277] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.388 [2024-07-14 03:50:56.320762] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.388 [2024-07-14 03:50:56.320789] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.646 [2024-07-14 03:50:56.331446] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.646 [2024-07-14 03:50:56.331475] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.646 [2024-07-14 03:50:56.341010] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.646 [2024-07-14 03:50:56.341037] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.646 [2024-07-14 03:50:56.352010] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.646 [2024-07-14 03:50:56.352038] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.646 [2024-07-14 03:50:56.361823] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.646 [2024-07-14 03:50:56.361852] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.646 [2024-07-14 03:50:56.372693] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.647 [2024-07-14 03:50:56.372721] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.647 [2024-07-14 03:50:56.382944] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.647 [2024-07-14 03:50:56.382972] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.647 [2024-07-14 03:50:56.393060] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.647 [2024-07-14 03:50:56.393088] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.647 [2024-07-14 03:50:56.405070] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.647 [2024-07-14 03:50:56.405098] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.647 [2024-07-14 03:50:56.414216] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.647 [2024-07-14 03:50:56.414244] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.647 [2024-07-14 03:50:56.424734] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.647 [2024-07-14 03:50:56.424761] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.647 [2024-07-14 03:50:56.436015] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.647 [2024-07-14 03:50:56.436043] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.647 [2024-07-14 03:50:56.444744] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.647 [2024-07-14 03:50:56.444771] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.647 [2024-07-14 03:50:56.455581] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.647 [2024-07-14 03:50:56.455609] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.647 [2024-07-14 03:50:56.466133] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.647 [2024-07-14 03:50:56.466160] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.647 [2024-07-14 03:50:56.478539] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.647 [2024-07-14 03:50:56.478566] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.647 [2024-07-14 03:50:56.488094] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.647 [2024-07-14 03:50:56.488128] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.647 [2024-07-14 03:50:56.498618] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.647 [2024-07-14 03:50:56.498646] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.647 [2024-07-14 03:50:56.507921] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.647 [2024-07-14 03:50:56.507948] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.647 [2024-07-14 03:50:56.518667] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.647 [2024-07-14 03:50:56.518695] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.647 [2024-07-14 03:50:56.528753] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.647 [2024-07-14 03:50:56.528781] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.647 [2024-07-14 03:50:56.538764] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.647 [2024-07-14 03:50:56.538791] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.647 [2024-07-14 03:50:56.549150] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.647 [2024-07-14 03:50:56.549177] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.647 [2024-07-14 03:50:56.559101] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.647 [2024-07-14 03:50:56.559128] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.647 [2024-07-14 03:50:56.569149] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.647 [2024-07-14 03:50:56.569177] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.647 [2024-07-14 03:50:56.578880] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.647 [2024-07-14 03:50:56.578907] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.907 [2024-07-14 03:50:56.588285] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.907 [2024-07-14 03:50:56.588314] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.907 [2024-07-14 03:50:56.599113] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.907 [2024-07-14 03:50:56.599141] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.907 [2024-07-14 03:50:56.609616] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.907 [2024-07-14 03:50:56.609644] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.907 [2024-07-14 03:50:56.619158] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.907 [2024-07-14 03:50:56.619185] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.907 00:18:37.907 Latency(us) 00:18:37.907 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:37.907 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:18:37.907 Nvme1n1 : 5.01 12456.12 97.31 0.00 0.00 10263.24 4466.16 23884.23 00:18:37.907 =================================================================================================================== 00:18:37.907 Total : 12456.12 97.31 0.00 0.00 10263.24 4466.16 23884.23 00:18:37.907 [2024-07-14 03:50:56.624109] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.907 [2024-07-14 03:50:56.624136] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.907 [2024-07-14 03:50:56.632116] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.907 [2024-07-14 03:50:56.632154] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.907 [2024-07-14 03:50:56.640140] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.907 [2024-07-14 03:50:56.640184] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.907 [2024-07-14 03:50:56.648200] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.907 [2024-07-14 03:50:56.648249] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.907 [2024-07-14 03:50:56.656216] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.907 [2024-07-14 03:50:56.656265] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.907 [2024-07-14 03:50:56.664238] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.907 [2024-07-14 03:50:56.664286] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.907 [2024-07-14 03:50:56.672265] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.907 [2024-07-14 03:50:56.672313] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.907 [2024-07-14 03:50:56.680297] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.907 [2024-07-14 03:50:56.680346] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.907 [2024-07-14 03:50:56.688306] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.907 [2024-07-14 03:50:56.688353] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.907 [2024-07-14 03:50:56.696328] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.907 [2024-07-14 03:50:56.696376] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.907 [2024-07-14 03:50:56.704358] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.907 [2024-07-14 03:50:56.704409] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.907 [2024-07-14 03:50:56.712384] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.907 [2024-07-14 03:50:56.712434] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.907 [2024-07-14 03:50:56.720397] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.907 [2024-07-14 03:50:56.720444] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.907 [2024-07-14 03:50:56.728417] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.907 [2024-07-14 03:50:56.728464] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.907 [2024-07-14 03:50:56.736431] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.907 [2024-07-14 03:50:56.736480] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.907 [2024-07-14 03:50:56.744446] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.907 [2024-07-14 03:50:56.744486] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.907 [2024-07-14 03:50:56.752438] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.907 [2024-07-14 03:50:56.752464] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.907 [2024-07-14 03:50:56.760498] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.907 [2024-07-14 03:50:56.760541] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.907 [2024-07-14 03:50:56.768521] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.907 [2024-07-14 03:50:56.768567] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.907 [2024-07-14 03:50:56.776543] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.907 [2024-07-14 03:50:56.776591] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.907 [2024-07-14 03:50:56.784529] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.907 [2024-07-14 03:50:56.784556] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.907 [2024-07-14 03:50:56.792584] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.907 [2024-07-14 03:50:56.792629] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.907 [2024-07-14 03:50:56.800614] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.907 [2024-07-14 03:50:56.800661] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.907 [2024-07-14 03:50:56.808622] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.907 [2024-07-14 03:50:56.808661] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.907 [2024-07-14 03:50:56.816616] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.907 [2024-07-14 03:50:56.816642] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.907 [2024-07-14 03:50:56.824640] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.907 [2024-07-14 03:50:56.824666] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.907 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (2389258) - No such process 00:18:37.907 03:50:56 -- target/zcopy.sh@49 -- # wait 2389258 00:18:37.907 03:50:56 -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:37.907 03:50:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:37.907 03:50:56 -- common/autotest_common.sh@10 -- # set +x 00:18:37.907 03:50:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:37.907 03:50:56 -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:18:37.907 03:50:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:37.907 03:50:56 -- common/autotest_common.sh@10 -- # set +x 00:18:37.907 delay0 00:18:37.907 03:50:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:37.907 03:50:56 -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:18:37.907 03:50:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:37.907 03:50:56 -- common/autotest_common.sh@10 -- # set +x 00:18:38.166 03:50:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:38.166 03:50:56 -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:18:38.166 EAL: No free 2048 kB hugepages reported on node 1 00:18:38.166 [2024-07-14 03:50:56.940340] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:18:44.737 Initializing NVMe Controllers 00:18:44.737 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:18:44.737 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:44.737 Initialization complete. Launching workers. 
00:18:44.737 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 156 00:18:44.737 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 443, failed to submit 33 00:18:44.737 success 273, unsuccess 170, failed 0 00:18:44.737 03:51:03 -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:18:44.737 03:51:03 -- target/zcopy.sh@60 -- # nvmftestfini 00:18:44.737 03:51:03 -- nvmf/common.sh@476 -- # nvmfcleanup 00:18:44.737 03:51:03 -- nvmf/common.sh@116 -- # sync 00:18:44.737 03:51:03 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:18:44.737 03:51:03 -- nvmf/common.sh@119 -- # set +e 00:18:44.737 03:51:03 -- nvmf/common.sh@120 -- # for i in {1..20} 00:18:44.737 03:51:03 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:18:44.737 rmmod nvme_tcp 00:18:44.737 rmmod nvme_fabrics 00:18:44.737 rmmod nvme_keyring 00:18:44.737 03:51:03 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:18:44.737 03:51:03 -- nvmf/common.sh@123 -- # set -e 00:18:44.737 03:51:03 -- nvmf/common.sh@124 -- # return 0 00:18:44.737 03:51:03 -- nvmf/common.sh@477 -- # '[' -n 2387772 ']' 00:18:44.737 03:51:03 -- nvmf/common.sh@478 -- # killprocess 2387772 00:18:44.737 03:51:03 -- common/autotest_common.sh@926 -- # '[' -z 2387772 ']' 00:18:44.737 03:51:03 -- common/autotest_common.sh@930 -- # kill -0 2387772 00:18:44.737 03:51:03 -- common/autotest_common.sh@931 -- # uname 00:18:44.737 03:51:03 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:44.737 03:51:03 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2387772 00:18:44.737 03:51:03 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:18:44.737 03:51:03 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:18:44.737 03:51:03 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2387772' 00:18:44.737 killing process with pid 2387772 00:18:44.737 03:51:03 -- common/autotest_common.sh@945 -- # kill 2387772 00:18:44.737 03:51:03 -- common/autotest_common.sh@950 -- # wait 2387772 00:18:44.737 03:51:03 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:18:44.737 03:51:03 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:18:44.737 03:51:03 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:18:44.737 03:51:03 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:44.737 03:51:03 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:18:44.737 03:51:03 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:44.737 03:51:03 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:44.737 03:51:03 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:46.639 03:51:05 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:18:46.639 00:18:46.639 real 0m28.536s 00:18:46.639 user 0m42.026s 00:18:46.639 sys 0m8.277s 00:18:46.639 03:51:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:46.639 03:51:05 -- common/autotest_common.sh@10 -- # set +x 00:18:46.639 ************************************ 00:18:46.639 END TEST nvmf_zcopy 00:18:46.639 ************************************ 00:18:46.898 03:51:05 -- nvmf/nvmf.sh@53 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:18:46.898 03:51:05 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:18:46.898 03:51:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:46.898 03:51:05 -- common/autotest_common.sh@10 -- # set +x 00:18:46.898 ************************************ 
00:18:46.898 START TEST nvmf_nmic 00:18:46.898 ************************************ 00:18:46.898 03:51:05 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:18:46.898 * Looking for test storage... 00:18:46.898 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:46.898 03:51:05 -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:46.898 03:51:05 -- nvmf/common.sh@7 -- # uname -s 00:18:46.898 03:51:05 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:46.898 03:51:05 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:46.898 03:51:05 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:46.898 03:51:05 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:46.898 03:51:05 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:46.898 03:51:05 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:46.898 03:51:05 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:46.898 03:51:05 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:46.898 03:51:05 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:46.898 03:51:05 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:46.898 03:51:05 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:46.898 03:51:05 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:46.898 03:51:05 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:46.898 03:51:05 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:46.898 03:51:05 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:46.898 03:51:05 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:46.898 03:51:05 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:46.898 03:51:05 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:46.898 03:51:05 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:46.898 03:51:05 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:46.898 03:51:05 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:46.898 03:51:05 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:46.898 03:51:05 -- paths/export.sh@5 -- # export PATH 00:18:46.898 03:51:05 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:46.898 03:51:05 -- nvmf/common.sh@46 -- # : 0 00:18:46.898 03:51:05 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:18:46.898 03:51:05 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:18:46.898 03:51:05 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:18:46.898 03:51:05 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:46.898 03:51:05 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:46.898 03:51:05 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:18:46.898 03:51:05 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:18:46.898 03:51:05 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:18:46.898 03:51:05 -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:46.898 03:51:05 -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:46.898 03:51:05 -- target/nmic.sh@14 -- # nvmftestinit 00:18:46.898 03:51:05 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:18:46.898 03:51:05 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:46.898 03:51:05 -- nvmf/common.sh@436 -- # prepare_net_devs 00:18:46.898 03:51:05 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:18:46.898 03:51:05 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:18:46.898 03:51:05 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:46.898 03:51:05 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:46.898 03:51:05 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:46.898 03:51:05 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:18:46.898 03:51:05 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:18:46.898 03:51:05 -- nvmf/common.sh@284 -- # xtrace_disable 00:18:46.898 03:51:05 -- common/autotest_common.sh@10 -- # set +x 00:18:48.803 03:51:07 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:18:48.803 03:51:07 -- nvmf/common.sh@290 -- # pci_devs=() 00:18:48.803 03:51:07 -- nvmf/common.sh@290 -- # local -a pci_devs 00:18:48.803 03:51:07 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:18:48.803 03:51:07 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:18:48.803 03:51:07 -- nvmf/common.sh@292 -- # pci_drivers=() 00:18:48.803 03:51:07 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:18:48.803 03:51:07 -- nvmf/common.sh@294 -- # net_devs=() 00:18:48.803 03:51:07 -- nvmf/common.sh@294 -- # local -ga net_devs 00:18:48.803 03:51:07 -- nvmf/common.sh@295 -- # 
e810=() 00:18:48.803 03:51:07 -- nvmf/common.sh@295 -- # local -ga e810 00:18:48.803 03:51:07 -- nvmf/common.sh@296 -- # x722=() 00:18:48.803 03:51:07 -- nvmf/common.sh@296 -- # local -ga x722 00:18:48.803 03:51:07 -- nvmf/common.sh@297 -- # mlx=() 00:18:48.803 03:51:07 -- nvmf/common.sh@297 -- # local -ga mlx 00:18:48.803 03:51:07 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:48.803 03:51:07 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:48.803 03:51:07 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:48.803 03:51:07 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:48.803 03:51:07 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:48.803 03:51:07 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:48.803 03:51:07 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:48.803 03:51:07 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:48.803 03:51:07 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:48.803 03:51:07 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:48.803 03:51:07 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:48.803 03:51:07 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:18:48.803 03:51:07 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:18:48.803 03:51:07 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:18:48.803 03:51:07 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:18:48.803 03:51:07 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:18:48.803 03:51:07 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:18:48.803 03:51:07 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:48.803 03:51:07 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:48.803 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:48.803 03:51:07 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:18:48.803 03:51:07 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:18:48.803 03:51:07 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:48.803 03:51:07 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:48.803 03:51:07 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:18:48.803 03:51:07 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:48.803 03:51:07 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:48.803 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:48.803 03:51:07 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:18:48.803 03:51:07 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:18:48.803 03:51:07 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:48.803 03:51:07 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:48.803 03:51:07 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:18:48.803 03:51:07 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:18:48.803 03:51:07 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:18:48.803 03:51:07 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:18:48.803 03:51:07 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:48.803 03:51:07 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:48.803 03:51:07 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:48.803 03:51:07 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:48.803 03:51:07 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:48.803 Found net 
devices under 0000:0a:00.0: cvl_0_0 00:18:48.803 03:51:07 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:48.803 03:51:07 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:48.803 03:51:07 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:48.803 03:51:07 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:48.803 03:51:07 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:48.803 03:51:07 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:48.803 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:48.803 03:51:07 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:48.803 03:51:07 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:18:48.803 03:51:07 -- nvmf/common.sh@402 -- # is_hw=yes 00:18:48.803 03:51:07 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:18:48.803 03:51:07 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:18:48.803 03:51:07 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:18:48.803 03:51:07 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:48.803 03:51:07 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:48.803 03:51:07 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:48.803 03:51:07 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:18:48.803 03:51:07 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:48.803 03:51:07 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:48.803 03:51:07 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:18:48.803 03:51:07 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:48.803 03:51:07 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:48.803 03:51:07 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:18:48.803 03:51:07 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:18:48.803 03:51:07 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:18:48.803 03:51:07 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:48.803 03:51:07 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:48.803 03:51:07 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:48.803 03:51:07 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:18:48.803 03:51:07 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:48.803 03:51:07 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:48.803 03:51:07 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:48.803 03:51:07 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:18:48.803 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:48.803 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.182 ms 00:18:48.803 00:18:48.803 --- 10.0.0.2 ping statistics --- 00:18:48.803 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:48.803 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:18:48.803 03:51:07 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:48.803 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:48.803 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.079 ms 00:18:48.803 00:18:48.803 --- 10.0.0.1 ping statistics --- 00:18:48.803 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:48.803 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:18:48.803 03:51:07 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:48.803 03:51:07 -- nvmf/common.sh@410 -- # return 0 00:18:48.803 03:51:07 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:18:48.803 03:51:07 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:48.803 03:51:07 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:18:48.803 03:51:07 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:18:48.803 03:51:07 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:48.803 03:51:07 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:18:48.803 03:51:07 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:18:48.803 03:51:07 -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:18:48.803 03:51:07 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:18:48.803 03:51:07 -- common/autotest_common.sh@712 -- # xtrace_disable 00:18:48.803 03:51:07 -- common/autotest_common.sh@10 -- # set +x 00:18:48.803 03:51:07 -- nvmf/common.sh@469 -- # nvmfpid=2392858 00:18:48.803 03:51:07 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:48.803 03:51:07 -- nvmf/common.sh@470 -- # waitforlisten 2392858 00:18:48.803 03:51:07 -- common/autotest_common.sh@819 -- # '[' -z 2392858 ']' 00:18:48.803 03:51:07 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:48.803 03:51:07 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:48.803 03:51:07 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:48.803 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:48.803 03:51:07 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:48.803 03:51:07 -- common/autotest_common.sh@10 -- # set +x 00:18:48.803 [2024-07-14 03:51:07.721542] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:18:48.803 [2024-07-14 03:51:07.721630] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:49.062 EAL: No free 2048 kB hugepages reported on node 1 00:18:49.062 [2024-07-14 03:51:07.793247] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:49.062 [2024-07-14 03:51:07.886021] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:49.062 [2024-07-14 03:51:07.886193] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:49.062 [2024-07-14 03:51:07.886211] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:49.062 [2024-07-14 03:51:07.886226] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
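Everything from the PCI scan down to the ping checks and the nvmf_tgt launch above is the harness carving one port of the E810 pair into a target network namespace while the other port stays in the default namespace for the initiator. Collected into one place, and assuming root privileges plus the cvl_0_0/cvl_0_1 interface names, the 10.0.0.0/24 addressing, and the binary path seen in the log, the bring-up amounts to the sketch below.

#!/usr/bin/env bash
# Sketch of the target/initiator split performed above (run as root).
# cvl_0_0 moves into its own namespace and becomes the target-facing port;
# cvl_0_1 stays in the default namespace for the kernel initiator.
set -euo pipefail

TGT_NS=cvl_0_0_ns_spdk
SPDK_BIN=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin

ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1

ip netns add "$TGT_NS"
ip link set cvl_0_0 netns "$TGT_NS"

ip addr add 10.0.0.1/24 dev cvl_0_1                          # initiator side
ip netns exec "$TGT_NS" ip addr add 10.0.0.2/24 dev cvl_0_0  # target side

ip link set cvl_0_1 up
ip netns exec "$TGT_NS" ip link set cvl_0_0 up
ip netns exec "$TGT_NS" ip link set lo up

# Firewall rule copied from the harness: accept TCP/4420 on the initiator-side port.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

# Sanity-check connectivity in both directions before starting the target.
ping -c 1 10.0.0.2
ip netns exec "$TGT_NS" ping -c 1 10.0.0.1

modprobe nvme-tcp

# Start the SPDK target inside the namespace, as the harness does.
ip netns exec "$TGT_NS" "$SPDK_BIN/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &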
00:18:49.062 [2024-07-14 03:51:07.886305] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:49.062 [2024-07-14 03:51:07.886334] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:49.062 [2024-07-14 03:51:07.886391] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:49.062 [2024-07-14 03:51:07.886393] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:50.027 03:51:08 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:50.027 03:51:08 -- common/autotest_common.sh@852 -- # return 0 00:18:50.027 03:51:08 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:18:50.027 03:51:08 -- common/autotest_common.sh@718 -- # xtrace_disable 00:18:50.027 03:51:08 -- common/autotest_common.sh@10 -- # set +x 00:18:50.027 03:51:08 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:50.027 03:51:08 -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:50.027 03:51:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:50.027 03:51:08 -- common/autotest_common.sh@10 -- # set +x 00:18:50.027 [2024-07-14 03:51:08.694399] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:50.027 03:51:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:50.027 03:51:08 -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:50.027 03:51:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:50.027 03:51:08 -- common/autotest_common.sh@10 -- # set +x 00:18:50.027 Malloc0 00:18:50.027 03:51:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:50.027 03:51:08 -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:18:50.027 03:51:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:50.027 03:51:08 -- common/autotest_common.sh@10 -- # set +x 00:18:50.027 03:51:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:50.027 03:51:08 -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:50.027 03:51:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:50.027 03:51:08 -- common/autotest_common.sh@10 -- # set +x 00:18:50.027 03:51:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:50.027 03:51:08 -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:50.027 03:51:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:50.027 03:51:08 -- common/autotest_common.sh@10 -- # set +x 00:18:50.027 [2024-07-14 03:51:08.746034] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:50.027 03:51:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:50.027 03:51:08 -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:18:50.027 test case1: single bdev can't be used in multiple subsystems 00:18:50.027 03:51:08 -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:18:50.027 03:51:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:50.027 03:51:08 -- common/autotest_common.sh@10 -- # set +x 00:18:50.027 03:51:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:50.027 03:51:08 -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:18:50.027 03:51:08 -- common/autotest_common.sh@551 -- # xtrace_disable 
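The xtrace lines above are the whole provisioning path for the first nmic test case: create the TCP transport, back a namespace with a 64 MB malloc bdev, publish it on 10.0.0.2:4420 under cnode1, and then create a second subsystem (cnode2) that is about to try to reuse the same bdev. As a hedged summary, with scripts/rpc.py assumed as a stand-in for the harness's rpc_cmd wrapper and the transport flags copied verbatim from the trace, the sequence is roughly:

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py   # assumed client path

# TCP transport, with the same options the harness passed.
$RPC nvmf_create_transport -t tcp -o -u 8192

# 64 MB malloc bdev with 512-byte blocks as the backing store.
$RPC bdev_malloc_create 64 512 -b Malloc0

# First subsystem: allow any host (-a), set a serial number, attach the bdev,
# and listen on the target-namespace address.
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# Second subsystem for test case 1; it gets its own listener, but the next
# step in the log shows it must not be able to claim Malloc0 as well.
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420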
00:18:50.027 03:51:08 -- common/autotest_common.sh@10 -- # set +x 00:18:50.027 03:51:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:50.027 03:51:08 -- target/nmic.sh@28 -- # nmic_status=0 00:18:50.027 03:51:08 -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:18:50.027 03:51:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:50.027 03:51:08 -- common/autotest_common.sh@10 -- # set +x 00:18:50.027 [2024-07-14 03:51:08.769914] bdev.c:7940:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:18:50.027 [2024-07-14 03:51:08.769945] subsystem.c:1819:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:18:50.027 [2024-07-14 03:51:08.769962] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:50.027 request: 00:18:50.027 { 00:18:50.027 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:18:50.027 "namespace": { 00:18:50.027 "bdev_name": "Malloc0" 00:18:50.027 }, 00:18:50.027 "method": "nvmf_subsystem_add_ns", 00:18:50.027 "req_id": 1 00:18:50.027 } 00:18:50.027 Got JSON-RPC error response 00:18:50.027 response: 00:18:50.027 { 00:18:50.027 "code": -32602, 00:18:50.027 "message": "Invalid parameters" 00:18:50.027 } 00:18:50.027 03:51:08 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:18:50.027 03:51:08 -- target/nmic.sh@29 -- # nmic_status=1 00:18:50.027 03:51:08 -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:18:50.027 03:51:08 -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:18:50.027 Adding namespace failed - expected result. 00:18:50.027 03:51:08 -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:18:50.027 test case2: host connect to nvmf target in multiple paths 00:18:50.027 03:51:08 -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:18:50.027 03:51:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:50.027 03:51:08 -- common/autotest_common.sh@10 -- # set +x 00:18:50.027 [2024-07-14 03:51:08.778021] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:18:50.027 03:51:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:50.027 03:51:08 -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:50.603 03:51:09 -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:18:51.170 03:51:10 -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:18:51.170 03:51:10 -- common/autotest_common.sh@1177 -- # local i=0 00:18:51.170 03:51:10 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:18:51.170 03:51:10 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:18:51.170 03:51:10 -- common/autotest_common.sh@1184 -- # sleep 2 00:18:53.701 03:51:12 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:18:53.701 03:51:12 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:18:53.701 03:51:12 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:18:53.701 03:51:12 -- common/autotest_common.sh@1186 -- # 
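Test case 1 above passes by failing: Malloc0 is already claimed exclusive_write by cnode1, so the second nvmf_subsystem_add_ns is rejected with the JSON-RPC error shown and the script counts the non-zero status as the expected result. Test case 2 then adds a second listener on port 4421 and connects the kernel initiator to both portals, giving the host two TCP paths to the same namespace. A hedged sketch of those two steps follows, reusing the host NQN/ID strings and RPC calls from the log; the until-loop at the end is only an approximation of the harness's waitforserial helper.

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py   # assumed client path

# Test case 1: a bdev can back at most one subsystem namespace at a time,
# so this call is expected to fail with "bdev Malloc0 already claimed".
if ! $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0; then
    echo "Adding namespace failed - expected result."
fi

# Test case 2: publish a second portal and connect the host over both of them.
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421

HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55

nvme connect --hostnqn=$HOSTNQN --hostid=$HOSTID -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
nvme connect --hostnqn=$HOSTNQN --hostid=$HOSTID -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421

# Rough equivalent of waitforserial: block until a block device carrying the
# subsystem's serial number shows up on the host.
until lsblk -l -o NAME,SERIAL | grep -q SPDKISFASTANDAWESOME; do sleep 1; done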
nvme_devices=1 00:18:53.701 03:51:12 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:18:53.701 03:51:12 -- common/autotest_common.sh@1187 -- # return 0 00:18:53.701 03:51:12 -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:18:53.702 [global] 00:18:53.702 thread=1 00:18:53.702 invalidate=1 00:18:53.702 rw=write 00:18:53.702 time_based=1 00:18:53.702 runtime=1 00:18:53.702 ioengine=libaio 00:18:53.702 direct=1 00:18:53.702 bs=4096 00:18:53.702 iodepth=1 00:18:53.702 norandommap=0 00:18:53.702 numjobs=1 00:18:53.702 00:18:53.702 verify_dump=1 00:18:53.702 verify_backlog=512 00:18:53.702 verify_state_save=0 00:18:53.702 do_verify=1 00:18:53.702 verify=crc32c-intel 00:18:53.702 [job0] 00:18:53.702 filename=/dev/nvme0n1 00:18:53.702 Could not set queue depth (nvme0n1) 00:18:53.702 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:53.702 fio-3.35 00:18:53.702 Starting 1 thread 00:18:54.636 00:18:54.636 job0: (groupid=0, jobs=1): err= 0: pid=2393926: Sun Jul 14 03:51:13 2024 00:18:54.636 read: IOPS=21, BW=85.4KiB/s (87.5kB/s)(88.0KiB/1030msec) 00:18:54.636 slat (nsec): min=15236, max=34049, avg=29121.77, stdev=7699.60 00:18:54.636 clat (usec): min=40828, max=41133, avg=40966.80, stdev=58.00 00:18:54.636 lat (usec): min=40862, max=41150, avg=40995.92, stdev=55.23 00:18:54.636 clat percentiles (usec): 00:18:54.636 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:18:54.636 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:18:54.636 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:18:54.636 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:18:54.636 | 99.99th=[41157] 00:18:54.636 write: IOPS=497, BW=1988KiB/s (2036kB/s)(2048KiB/1030msec); 0 zone resets 00:18:54.636 slat (nsec): min=8239, max=72451, avg=16426.80, stdev=5421.27 00:18:54.636 clat (usec): min=200, max=564, avg=228.59, stdev=42.98 00:18:54.636 lat (usec): min=209, max=575, avg=245.02, stdev=46.11 00:18:54.636 clat percentiles (usec): 00:18:54.636 | 1.00th=[ 206], 5.00th=[ 208], 10.00th=[ 210], 20.00th=[ 212], 00:18:54.636 | 30.00th=[ 217], 40.00th=[ 219], 50.00th=[ 221], 60.00th=[ 223], 00:18:54.636 | 70.00th=[ 225], 80.00th=[ 229], 90.00th=[ 233], 95.00th=[ 243], 00:18:54.636 | 99.00th=[ 449], 99.50th=[ 490], 99.90th=[ 562], 99.95th=[ 562], 00:18:54.636 | 99.99th=[ 562] 00:18:54.636 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:18:54.636 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:18:54.636 lat (usec) : 250=91.20%, 500=4.49%, 750=0.19% 00:18:54.636 lat (msec) : 50=4.12% 00:18:54.636 cpu : usr=0.49%, sys=0.78%, ctx=534, majf=0, minf=2 00:18:54.636 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:54.636 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:54.636 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:54.636 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:54.637 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:54.637 00:18:54.637 Run status group 0 (all jobs): 00:18:54.637 READ: bw=85.4KiB/s (87.5kB/s), 85.4KiB/s-85.4KiB/s (87.5kB/s-87.5kB/s), io=88.0KiB (90.1kB), run=1030-1030msec 00:18:54.637 WRITE: bw=1988KiB/s (2036kB/s), 1988KiB/s-1988KiB/s (2036kB/s-2036kB/s), io=2048KiB 
(2097kB), run=1030-1030msec 00:18:54.637 00:18:54.637 Disk stats (read/write): 00:18:54.637 nvme0n1: ios=68/512, merge=0/0, ticks=788/114, in_queue=902, util=92.18% 00:18:54.637 03:51:13 -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:54.894 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:18:54.894 03:51:13 -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:54.894 03:51:13 -- common/autotest_common.sh@1198 -- # local i=0 00:18:54.894 03:51:13 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:18:54.894 03:51:13 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:54.894 03:51:13 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:18:54.894 03:51:13 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:54.894 03:51:13 -- common/autotest_common.sh@1210 -- # return 0 00:18:54.894 03:51:13 -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:18:54.894 03:51:13 -- target/nmic.sh@53 -- # nvmftestfini 00:18:54.894 03:51:13 -- nvmf/common.sh@476 -- # nvmfcleanup 00:18:54.894 03:51:13 -- nvmf/common.sh@116 -- # sync 00:18:54.894 03:51:13 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:18:54.894 03:51:13 -- nvmf/common.sh@119 -- # set +e 00:18:54.894 03:51:13 -- nvmf/common.sh@120 -- # for i in {1..20} 00:18:54.894 03:51:13 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:18:54.894 rmmod nvme_tcp 00:18:54.894 rmmod nvme_fabrics 00:18:54.894 rmmod nvme_keyring 00:18:54.894 03:51:13 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:18:54.894 03:51:13 -- nvmf/common.sh@123 -- # set -e 00:18:54.894 03:51:13 -- nvmf/common.sh@124 -- # return 0 00:18:54.894 03:51:13 -- nvmf/common.sh@477 -- # '[' -n 2392858 ']' 00:18:54.894 03:51:13 -- nvmf/common.sh@478 -- # killprocess 2392858 00:18:54.894 03:51:13 -- common/autotest_common.sh@926 -- # '[' -z 2392858 ']' 00:18:54.894 03:51:13 -- common/autotest_common.sh@930 -- # kill -0 2392858 00:18:54.894 03:51:13 -- common/autotest_common.sh@931 -- # uname 00:18:54.894 03:51:13 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:54.894 03:51:13 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2392858 00:18:54.894 03:51:13 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:18:54.894 03:51:13 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:18:54.894 03:51:13 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2392858' 00:18:54.894 killing process with pid 2392858 00:18:54.894 03:51:13 -- common/autotest_common.sh@945 -- # kill 2392858 00:18:54.894 03:51:13 -- common/autotest_common.sh@950 -- # wait 2392858 00:18:55.152 03:51:13 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:18:55.152 03:51:13 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:18:55.152 03:51:13 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:18:55.152 03:51:13 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:55.152 03:51:13 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:18:55.152 03:51:13 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:55.152 03:51:13 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:55.152 03:51:13 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:57.052 03:51:15 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:18:57.052 00:18:57.052 real 0m10.361s 00:18:57.052 user 0m24.967s 00:18:57.052 sys 0m2.270s 00:18:57.052 03:51:15 -- common/autotest_common.sh@1105 
-- # xtrace_disable 00:18:57.052 03:51:15 -- common/autotest_common.sh@10 -- # set +x 00:18:57.052 ************************************ 00:18:57.052 END TEST nvmf_nmic 00:18:57.052 ************************************ 00:18:57.052 03:51:15 -- nvmf/nvmf.sh@54 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:18:57.052 03:51:15 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:18:57.310 03:51:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:57.310 03:51:15 -- common/autotest_common.sh@10 -- # set +x 00:18:57.310 ************************************ 00:18:57.310 START TEST nvmf_fio_target 00:18:57.310 ************************************ 00:18:57.310 03:51:15 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:18:57.310 * Looking for test storage... 00:18:57.310 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:57.310 03:51:16 -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:57.310 03:51:16 -- nvmf/common.sh@7 -- # uname -s 00:18:57.310 03:51:16 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:57.310 03:51:16 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:57.310 03:51:16 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:57.310 03:51:16 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:57.310 03:51:16 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:57.310 03:51:16 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:57.310 03:51:16 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:57.310 03:51:16 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:57.310 03:51:16 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:57.310 03:51:16 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:57.310 03:51:16 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:57.310 03:51:16 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:57.310 03:51:16 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:57.310 03:51:16 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:57.310 03:51:16 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:57.310 03:51:16 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:57.310 03:51:16 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:57.310 03:51:16 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:57.310 03:51:16 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:57.310 03:51:16 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:57.310 03:51:16 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:57.310 03:51:16 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:57.310 03:51:16 -- paths/export.sh@5 -- # export PATH 00:18:57.310 03:51:16 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:57.310 03:51:16 -- nvmf/common.sh@46 -- # : 0 00:18:57.310 03:51:16 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:18:57.310 03:51:16 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:18:57.310 03:51:16 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:18:57.310 03:51:16 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:57.310 03:51:16 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:57.310 03:51:16 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:18:57.310 03:51:16 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:18:57.310 03:51:16 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:18:57.310 03:51:16 -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:57.310 03:51:16 -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:57.310 03:51:16 -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:57.310 03:51:16 -- target/fio.sh@16 -- # nvmftestinit 00:18:57.310 03:51:16 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:18:57.310 03:51:16 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:57.310 03:51:16 -- nvmf/common.sh@436 -- # prepare_net_devs 00:18:57.310 03:51:16 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:18:57.310 03:51:16 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:18:57.310 03:51:16 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:57.310 03:51:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:57.310 03:51:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:57.310 03:51:16 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:18:57.311 03:51:16 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:18:57.311 03:51:16 -- nvmf/common.sh@284 -- # xtrace_disable 00:18:57.311 03:51:16 -- 
common/autotest_common.sh@10 -- # set +x 00:18:59.214 03:51:18 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:18:59.214 03:51:18 -- nvmf/common.sh@290 -- # pci_devs=() 00:18:59.214 03:51:18 -- nvmf/common.sh@290 -- # local -a pci_devs 00:18:59.214 03:51:18 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:18:59.214 03:51:18 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:18:59.214 03:51:18 -- nvmf/common.sh@292 -- # pci_drivers=() 00:18:59.214 03:51:18 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:18:59.214 03:51:18 -- nvmf/common.sh@294 -- # net_devs=() 00:18:59.214 03:51:18 -- nvmf/common.sh@294 -- # local -ga net_devs 00:18:59.214 03:51:18 -- nvmf/common.sh@295 -- # e810=() 00:18:59.214 03:51:18 -- nvmf/common.sh@295 -- # local -ga e810 00:18:59.214 03:51:18 -- nvmf/common.sh@296 -- # x722=() 00:18:59.214 03:51:18 -- nvmf/common.sh@296 -- # local -ga x722 00:18:59.214 03:51:18 -- nvmf/common.sh@297 -- # mlx=() 00:18:59.214 03:51:18 -- nvmf/common.sh@297 -- # local -ga mlx 00:18:59.214 03:51:18 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:59.214 03:51:18 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:59.214 03:51:18 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:59.214 03:51:18 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:59.214 03:51:18 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:59.214 03:51:18 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:59.214 03:51:18 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:59.214 03:51:18 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:59.214 03:51:18 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:59.214 03:51:18 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:59.214 03:51:18 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:59.214 03:51:18 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:18:59.214 03:51:18 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:18:59.214 03:51:18 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:18:59.214 03:51:18 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:18:59.214 03:51:18 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:18:59.214 03:51:18 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:18:59.214 03:51:18 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:59.214 03:51:18 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:59.214 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:59.214 03:51:18 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:18:59.214 03:51:18 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:18:59.214 03:51:18 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:59.214 03:51:18 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:59.214 03:51:18 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:18:59.214 03:51:18 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:59.215 03:51:18 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:59.215 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:59.215 03:51:18 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:18:59.215 03:51:18 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:18:59.215 03:51:18 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:59.215 03:51:18 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 
00:18:59.215 03:51:18 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:18:59.215 03:51:18 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:18:59.215 03:51:18 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:18:59.215 03:51:18 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:18:59.215 03:51:18 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:59.215 03:51:18 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:59.215 03:51:18 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:59.215 03:51:18 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:59.215 03:51:18 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:59.215 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:59.215 03:51:18 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:59.215 03:51:18 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:59.215 03:51:18 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:59.215 03:51:18 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:59.215 03:51:18 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:59.215 03:51:18 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:59.215 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:59.215 03:51:18 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:59.215 03:51:18 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:18:59.215 03:51:18 -- nvmf/common.sh@402 -- # is_hw=yes 00:18:59.215 03:51:18 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:18:59.215 03:51:18 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:18:59.215 03:51:18 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:18:59.215 03:51:18 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:59.215 03:51:18 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:59.215 03:51:18 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:59.215 03:51:18 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:18:59.215 03:51:18 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:59.215 03:51:18 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:59.215 03:51:18 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:18:59.215 03:51:18 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:59.215 03:51:18 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:59.215 03:51:18 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:18:59.215 03:51:18 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:18:59.215 03:51:18 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:18:59.215 03:51:18 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:59.215 03:51:18 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:59.215 03:51:18 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:59.215 03:51:18 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:18:59.215 03:51:18 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:59.473 03:51:18 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:59.473 03:51:18 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:59.473 03:51:18 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:18:59.473 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:18:59.473 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.181 ms 00:18:59.473 00:18:59.473 --- 10.0.0.2 ping statistics --- 00:18:59.473 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:59.473 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:18:59.473 03:51:18 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:59.473 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:59.473 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.122 ms 00:18:59.473 00:18:59.473 --- 10.0.0.1 ping statistics --- 00:18:59.473 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:59.473 rtt min/avg/max/mdev = 0.122/0.122/0.122/0.000 ms 00:18:59.473 03:51:18 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:59.473 03:51:18 -- nvmf/common.sh@410 -- # return 0 00:18:59.473 03:51:18 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:18:59.473 03:51:18 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:59.473 03:51:18 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:18:59.473 03:51:18 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:18:59.473 03:51:18 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:59.473 03:51:18 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:18:59.473 03:51:18 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:18:59.473 03:51:18 -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:18:59.473 03:51:18 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:18:59.473 03:51:18 -- common/autotest_common.sh@712 -- # xtrace_disable 00:18:59.473 03:51:18 -- common/autotest_common.sh@10 -- # set +x 00:18:59.473 03:51:18 -- nvmf/common.sh@469 -- # nvmfpid=2396062 00:18:59.473 03:51:18 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:59.473 03:51:18 -- nvmf/common.sh@470 -- # waitforlisten 2396062 00:18:59.473 03:51:18 -- common/autotest_common.sh@819 -- # '[' -z 2396062 ']' 00:18:59.473 03:51:18 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:59.473 03:51:18 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:59.473 03:51:18 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:59.473 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:59.473 03:51:18 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:59.473 03:51:18 -- common/autotest_common.sh@10 -- # set +x 00:18:59.473 [2024-07-14 03:51:18.265547] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:18:59.473 [2024-07-14 03:51:18.265634] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:59.473 EAL: No free 2048 kB hugepages reported on node 1 00:18:59.473 [2024-07-14 03:51:18.337127] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:59.733 [2024-07-14 03:51:18.431362] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:59.733 [2024-07-14 03:51:18.431529] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:59.733 [2024-07-14 03:51:18.431549] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:18:59.733 [2024-07-14 03:51:18.431563] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:59.733 [2024-07-14 03:51:18.431621] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:59.733 [2024-07-14 03:51:18.431654] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:59.733 [2024-07-14 03:51:18.431713] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:59.733 [2024-07-14 03:51:18.431717] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:00.675 03:51:19 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:00.675 03:51:19 -- common/autotest_common.sh@852 -- # return 0 00:19:00.675 03:51:19 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:00.675 03:51:19 -- common/autotest_common.sh@718 -- # xtrace_disable 00:19:00.675 03:51:19 -- common/autotest_common.sh@10 -- # set +x 00:19:00.675 03:51:19 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:00.675 03:51:19 -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:19:00.675 [2024-07-14 03:51:19.519469] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:00.675 03:51:19 -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:00.933 03:51:19 -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:19:00.933 03:51:19 -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:01.191 03:51:20 -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:19:01.191 03:51:20 -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:01.449 03:51:20 -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:19:01.449 03:51:20 -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:01.707 03:51:20 -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:19:01.707 03:51:20 -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:19:01.966 03:51:20 -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:02.224 03:51:21 -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:19:02.224 03:51:21 -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:02.482 03:51:21 -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:19:02.482 03:51:21 -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:02.740 03:51:21 -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:19:02.740 03:51:21 -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:19:02.998 03:51:21 -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:19:03.256 03:51:22 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:19:03.256 03:51:22 -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:03.514 03:51:22 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:19:03.514 03:51:22 -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:03.772 03:51:22 -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:04.030 [2024-07-14 03:51:22.734750] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:04.030 03:51:22 -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:19:04.289 03:51:22 -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:19:04.289 03:51:23 -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:19:05.256 03:51:23 -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:19:05.256 03:51:23 -- common/autotest_common.sh@1177 -- # local i=0 00:19:05.256 03:51:23 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:19:05.256 03:51:23 -- common/autotest_common.sh@1179 -- # [[ -n 4 ]] 00:19:05.256 03:51:23 -- common/autotest_common.sh@1180 -- # nvme_device_counter=4 00:19:05.256 03:51:23 -- common/autotest_common.sh@1184 -- # sleep 2 00:19:07.158 03:51:25 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:19:07.158 03:51:25 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:19:07.158 03:51:25 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:19:07.158 03:51:25 -- common/autotest_common.sh@1186 -- # nvme_devices=4 00:19:07.158 03:51:25 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:19:07.158 03:51:25 -- common/autotest_common.sh@1187 -- # return 0 00:19:07.158 03:51:25 -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:19:07.158 [global] 00:19:07.158 thread=1 00:19:07.158 invalidate=1 00:19:07.158 rw=write 00:19:07.158 time_based=1 00:19:07.158 runtime=1 00:19:07.158 ioengine=libaio 00:19:07.158 direct=1 00:19:07.158 bs=4096 00:19:07.158 iodepth=1 00:19:07.158 norandommap=0 00:19:07.158 numjobs=1 00:19:07.158 00:19:07.158 verify_dump=1 00:19:07.158 verify_backlog=512 00:19:07.158 verify_state_save=0 00:19:07.158 do_verify=1 00:19:07.158 verify=crc32c-intel 00:19:07.158 [job0] 00:19:07.158 filename=/dev/nvme0n1 00:19:07.158 [job1] 00:19:07.158 filename=/dev/nvme0n2 00:19:07.158 [job2] 00:19:07.158 filename=/dev/nvme0n3 00:19:07.158 [job3] 00:19:07.158 filename=/dev/nvme0n4 00:19:07.158 Could not set queue depth (nvme0n1) 00:19:07.158 Could not set queue depth (nvme0n2) 00:19:07.158 Could not set queue depth (nvme0n3) 00:19:07.158 Could not set queue depth (nvme0n4) 00:19:07.158 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:07.158 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:07.158 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 
00:19:07.158 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:07.158 fio-3.35 00:19:07.158 Starting 4 threads 00:19:08.535 00:19:08.535 job0: (groupid=0, jobs=1): err= 0: pid=2397172: Sun Jul 14 03:51:27 2024 00:19:08.535 read: IOPS=986, BW=3946KiB/s (4041kB/s)(4104KiB/1040msec) 00:19:08.535 slat (nsec): min=6262, max=52493, avg=15158.20, stdev=5967.58 00:19:08.535 clat (usec): min=327, max=41146, avg=532.67, stdev=2192.18 00:19:08.535 lat (usec): min=336, max=41154, avg=547.83, stdev=2192.17 00:19:08.535 clat percentiles (usec): 00:19:08.535 | 1.00th=[ 334], 5.00th=[ 343], 10.00th=[ 351], 20.00th=[ 359], 00:19:08.535 | 30.00th=[ 379], 40.00th=[ 392], 50.00th=[ 424], 60.00th=[ 441], 00:19:08.535 | 70.00th=[ 453], 80.00th=[ 461], 90.00th=[ 474], 95.00th=[ 482], 00:19:08.535 | 99.00th=[ 498], 99.50th=[ 603], 99.90th=[41157], 99.95th=[41157], 00:19:08.535 | 99.99th=[41157] 00:19:08.535 write: IOPS=1476, BW=5908KiB/s (6049kB/s)(6144KiB/1040msec); 0 zone resets 00:19:08.535 slat (nsec): min=7523, max=73177, avg=18931.74, stdev=7888.36 00:19:08.535 clat (usec): min=196, max=1634, avg=283.37, stdev=80.85 00:19:08.535 lat (usec): min=205, max=1645, avg=302.30, stdev=81.52 00:19:08.535 clat percentiles (usec): 00:19:08.535 | 1.00th=[ 208], 5.00th=[ 217], 10.00th=[ 227], 20.00th=[ 239], 00:19:08.535 | 30.00th=[ 245], 40.00th=[ 249], 50.00th=[ 253], 60.00th=[ 260], 00:19:08.535 | 70.00th=[ 273], 80.00th=[ 326], 90.00th=[ 408], 95.00th=[ 449], 00:19:08.535 | 99.00th=[ 545], 99.50th=[ 586], 99.90th=[ 709], 99.95th=[ 1631], 00:19:08.535 | 99.99th=[ 1631] 00:19:08.535 bw ( KiB/s): min= 4944, max= 7344, per=36.50%, avg=6144.00, stdev=1697.06, samples=2 00:19:08.535 iops : min= 1236, max= 1836, avg=1536.00, stdev=424.26, samples=2 00:19:08.535 lat (usec) : 250=24.55%, 500=74.20%, 750=1.05%, 1000=0.04% 00:19:08.535 lat (msec) : 2=0.04%, 50=0.12% 00:19:08.535 cpu : usr=2.12%, sys=6.83%, ctx=2562, majf=0, minf=2 00:19:08.535 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:08.535 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:08.535 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:08.535 issued rwts: total=1026,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:08.535 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:08.535 job1: (groupid=0, jobs=1): err= 0: pid=2397173: Sun Jul 14 03:51:27 2024 00:19:08.535 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:19:08.535 slat (nsec): min=6427, max=56197, avg=19598.99, stdev=11638.06 00:19:08.535 clat (usec): min=313, max=41002, avg=1205.58, stdev=4881.57 00:19:08.535 lat (usec): min=321, max=41017, avg=1225.18, stdev=4883.08 00:19:08.535 clat percentiles (usec): 00:19:08.535 | 1.00th=[ 322], 5.00th=[ 330], 10.00th=[ 334], 20.00th=[ 343], 00:19:08.535 | 30.00th=[ 457], 40.00th=[ 490], 50.00th=[ 529], 60.00th=[ 619], 00:19:08.535 | 70.00th=[ 693], 80.00th=[ 857], 90.00th=[ 947], 95.00th=[ 1057], 00:19:08.535 | 99.00th=[40633], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:19:08.535 | 99.99th=[41157] 00:19:08.535 write: IOPS=791, BW=3165KiB/s (3241kB/s)(3168KiB/1001msec); 0 zone resets 00:19:08.535 slat (nsec): min=8680, max=72005, avg=24019.43, stdev=14338.13 00:19:08.535 clat (usec): min=199, max=1004, avg=436.50, stdev=185.40 00:19:08.535 lat (usec): min=208, max=1045, avg=460.52, stdev=194.40 00:19:08.535 clat percentiles (usec): 00:19:08.535 | 1.00th=[ 200], 5.00th=[ 206], 10.00th=[ 
210], 20.00th=[ 215], 00:19:08.535 | 30.00th=[ 223], 40.00th=[ 416], 50.00th=[ 482], 60.00th=[ 519], 00:19:08.535 | 70.00th=[ 553], 80.00th=[ 603], 90.00th=[ 668], 95.00th=[ 709], 00:19:08.535 | 99.00th=[ 840], 99.50th=[ 914], 99.90th=[ 1004], 99.95th=[ 1004], 00:19:08.535 | 99.99th=[ 1004] 00:19:08.535 bw ( KiB/s): min= 4096, max= 4096, per=24.34%, avg=4096.00, stdev= 0.00, samples=1 00:19:08.535 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:08.535 lat (usec) : 250=21.09%, 500=28.99%, 750=37.81%, 1000=9.59% 00:19:08.535 lat (msec) : 2=1.92%, 50=0.61% 00:19:08.535 cpu : usr=1.40%, sys=3.50%, ctx=1304, majf=0, minf=1 00:19:08.535 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:08.535 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:08.535 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:08.535 issued rwts: total=512,792,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:08.535 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:08.535 job2: (groupid=0, jobs=1): err= 0: pid=2397174: Sun Jul 14 03:51:27 2024 00:19:08.535 read: IOPS=761, BW=3046KiB/s (3119kB/s)(3168KiB/1040msec) 00:19:08.535 slat (nsec): min=6523, max=53193, avg=14119.68, stdev=6480.55 00:19:08.535 clat (usec): min=339, max=42128, avg=892.60, stdev=4088.63 00:19:08.535 lat (usec): min=350, max=42145, avg=906.72, stdev=4089.35 00:19:08.535 clat percentiles (usec): 00:19:08.535 | 1.00th=[ 351], 5.00th=[ 363], 10.00th=[ 379], 20.00th=[ 445], 00:19:08.535 | 30.00th=[ 461], 40.00th=[ 469], 50.00th=[ 478], 60.00th=[ 486], 00:19:08.535 | 70.00th=[ 494], 80.00th=[ 498], 90.00th=[ 515], 95.00th=[ 537], 00:19:08.535 | 99.00th=[40633], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:19:08.535 | 99.99th=[42206] 00:19:08.535 write: IOPS=984, BW=3938KiB/s (4033kB/s)(4096KiB/1040msec); 0 zone resets 00:19:08.535 slat (nsec): min=7992, max=57521, avg=21962.19, stdev=7229.10 00:19:08.535 clat (usec): min=217, max=3389, avg=282.66, stdev=104.31 00:19:08.535 lat (usec): min=227, max=3412, avg=304.62, stdev=105.42 00:19:08.535 clat percentiles (usec): 00:19:08.535 | 1.00th=[ 229], 5.00th=[ 239], 10.00th=[ 243], 20.00th=[ 249], 00:19:08.535 | 30.00th=[ 258], 40.00th=[ 265], 50.00th=[ 273], 60.00th=[ 285], 00:19:08.535 | 70.00th=[ 293], 80.00th=[ 302], 90.00th=[ 322], 95.00th=[ 347], 00:19:08.535 | 99.00th=[ 408], 99.50th=[ 445], 99.90th=[ 709], 99.95th=[ 3392], 00:19:08.535 | 99.99th=[ 3392] 00:19:08.535 bw ( KiB/s): min= 1752, max= 6440, per=24.34%, avg=4096.00, stdev=3314.92, samples=2 00:19:08.535 iops : min= 438, max= 1610, avg=1024.00, stdev=828.73, samples=2 00:19:08.535 lat (usec) : 250=11.29%, 500=80.23%, 750=7.71%, 1000=0.22% 00:19:08.535 lat (msec) : 4=0.06%, 20=0.06%, 50=0.44% 00:19:08.535 cpu : usr=3.85%, sys=2.79%, ctx=1816, majf=0, minf=1 00:19:08.535 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:08.535 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:08.535 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:08.535 issued rwts: total=792,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:08.535 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:08.535 job3: (groupid=0, jobs=1): err= 0: pid=2397175: Sun Jul 14 03:51:27 2024 00:19:08.535 read: IOPS=848, BW=3393KiB/s (3474kB/s)(3396KiB/1001msec) 00:19:08.535 slat (nsec): min=6624, max=78005, avg=24597.92, stdev=10869.03 00:19:08.535 clat (usec): min=320, max=40995, 
avg=640.40, stdev=1976.57 00:19:08.535 lat (usec): min=328, max=41007, avg=665.00, stdev=1976.20 00:19:08.535 clat percentiles (usec): 00:19:08.535 | 1.00th=[ 330], 5.00th=[ 338], 10.00th=[ 347], 20.00th=[ 371], 00:19:08.535 | 30.00th=[ 392], 40.00th=[ 412], 50.00th=[ 441], 60.00th=[ 474], 00:19:08.535 | 70.00th=[ 553], 80.00th=[ 734], 90.00th=[ 906], 95.00th=[ 1004], 00:19:08.535 | 99.00th=[ 1303], 99.50th=[ 1696], 99.90th=[41157], 99.95th=[41157], 00:19:08.535 | 99.99th=[41157] 00:19:08.535 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:19:08.535 slat (nsec): min=8166, max=78589, avg=24197.41, stdev=12823.96 00:19:08.535 clat (usec): min=209, max=820, avg=389.12, stdev=168.33 00:19:08.535 lat (usec): min=219, max=861, avg=413.32, stdev=174.95 00:19:08.535 clat percentiles (usec): 00:19:08.535 | 1.00th=[ 215], 5.00th=[ 219], 10.00th=[ 223], 20.00th=[ 229], 00:19:08.535 | 30.00th=[ 235], 40.00th=[ 241], 50.00th=[ 338], 60.00th=[ 465], 00:19:08.535 | 70.00th=[ 515], 80.00th=[ 553], 90.00th=[ 635], 95.00th=[ 676], 00:19:08.535 | 99.00th=[ 766], 99.50th=[ 791], 99.90th=[ 816], 99.95th=[ 824], 00:19:08.535 | 99.99th=[ 824] 00:19:08.535 bw ( KiB/s): min= 4096, max= 4096, per=24.34%, avg=4096.00, stdev= 0.00, samples=1 00:19:08.536 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:08.536 lat (usec) : 250=23.97%, 500=41.43%, 750=25.31%, 1000=6.99% 00:19:08.536 lat (msec) : 2=2.08%, 4=0.05%, 10=0.05%, 50=0.11% 00:19:08.536 cpu : usr=2.80%, sys=4.40%, ctx=1875, majf=0, minf=1 00:19:08.536 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:08.536 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:08.536 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:08.536 issued rwts: total=849,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:08.536 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:08.536 00:19:08.536 Run status group 0 (all jobs): 00:19:08.536 READ: bw=11.9MiB/s (12.5MB/s), 2046KiB/s-3946KiB/s (2095kB/s-4041kB/s), io=12.4MiB (13.0MB), run=1001-1040msec 00:19:08.536 WRITE: bw=16.4MiB/s (17.2MB/s), 3165KiB/s-5908KiB/s (3241kB/s-6049kB/s), io=17.1MiB (17.9MB), run=1001-1040msec 00:19:08.536 00:19:08.536 Disk stats (read/write): 00:19:08.536 nvme0n1: ios=1074/1130, merge=0/0, ticks=613/270, in_queue=883, util=96.29% 00:19:08.536 nvme0n2: ios=320/512, merge=0/0, ticks=571/284, in_queue=855, util=87.47% 00:19:08.536 nvme0n3: ios=741/1024, merge=0/0, ticks=627/270, in_queue=897, util=96.02% 00:19:08.536 nvme0n4: ios=803/1024, merge=0/0, ticks=1040/388, in_queue=1428, util=98.20% 00:19:08.536 03:51:27 -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:19:08.536 [global] 00:19:08.536 thread=1 00:19:08.536 invalidate=1 00:19:08.536 rw=randwrite 00:19:08.536 time_based=1 00:19:08.536 runtime=1 00:19:08.536 ioengine=libaio 00:19:08.536 direct=1 00:19:08.536 bs=4096 00:19:08.536 iodepth=1 00:19:08.536 norandommap=0 00:19:08.536 numjobs=1 00:19:08.536 00:19:08.536 verify_dump=1 00:19:08.536 verify_backlog=512 00:19:08.536 verify_state_save=0 00:19:08.536 do_verify=1 00:19:08.536 verify=crc32c-intel 00:19:08.536 [job0] 00:19:08.536 filename=/dev/nvme0n1 00:19:08.536 [job1] 00:19:08.536 filename=/dev/nvme0n2 00:19:08.536 [job2] 00:19:08.536 filename=/dev/nvme0n3 00:19:08.536 [job3] 00:19:08.536 filename=/dev/nvme0n4 00:19:08.536 Could not set queue depth (nvme0n1) 00:19:08.536 
Could not set queue depth (nvme0n2) 00:19:08.536 Could not set queue depth (nvme0n3) 00:19:08.536 Could not set queue depth (nvme0n4) 00:19:08.795 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:08.795 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:08.795 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:08.795 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:08.795 fio-3.35 00:19:08.795 Starting 4 threads 00:19:10.173 00:19:10.173 job0: (groupid=0, jobs=1): err= 0: pid=2397409: Sun Jul 14 03:51:28 2024 00:19:10.173 read: IOPS=1075, BW=4304KiB/s (4407kB/s)(4308KiB/1001msec) 00:19:10.173 slat (nsec): min=6249, max=71172, avg=25238.25, stdev=11088.69 00:19:10.173 clat (usec): min=414, max=1018, avg=497.55, stdev=39.65 00:19:10.173 lat (usec): min=435, max=1053, avg=522.78, stdev=42.38 00:19:10.173 clat percentiles (usec): 00:19:10.173 | 1.00th=[ 433], 5.00th=[ 445], 10.00th=[ 457], 20.00th=[ 469], 00:19:10.173 | 30.00th=[ 478], 40.00th=[ 486], 50.00th=[ 498], 60.00th=[ 506], 00:19:10.173 | 70.00th=[ 510], 80.00th=[ 519], 90.00th=[ 537], 95.00th=[ 553], 00:19:10.173 | 99.00th=[ 611], 99.50th=[ 635], 99.90th=[ 881], 99.95th=[ 1020], 00:19:10.173 | 99.99th=[ 1020] 00:19:10.173 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:19:10.173 slat (nsec): min=6212, max=79944, avg=16623.28, stdev=10444.82 00:19:10.173 clat (usec): min=196, max=557, avg=258.31, stdev=62.13 00:19:10.173 lat (usec): min=202, max=590, avg=274.93, stdev=65.44 00:19:10.173 clat percentiles (usec): 00:19:10.173 | 1.00th=[ 198], 5.00th=[ 204], 10.00th=[ 208], 20.00th=[ 212], 00:19:10.173 | 30.00th=[ 217], 40.00th=[ 223], 50.00th=[ 231], 60.00th=[ 249], 00:19:10.173 | 70.00th=[ 269], 80.00th=[ 306], 90.00th=[ 367], 95.00th=[ 400], 00:19:10.173 | 99.00th=[ 453], 99.50th=[ 461], 99.90th=[ 519], 99.95th=[ 562], 00:19:10.173 | 99.99th=[ 562] 00:19:10.173 bw ( KiB/s): min= 5856, max= 5856, per=49.51%, avg=5856.00, stdev= 0.00, samples=1 00:19:10.173 iops : min= 1464, max= 1464, avg=1464.00, stdev= 0.00, samples=1 00:19:10.173 lat (usec) : 250=35.63%, 500=44.55%, 750=19.67%, 1000=0.11% 00:19:10.173 lat (msec) : 2=0.04% 00:19:10.173 cpu : usr=3.10%, sys=5.30%, ctx=2614, majf=0, minf=1 00:19:10.173 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:10.173 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:10.173 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:10.173 issued rwts: total=1077,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:10.173 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:10.173 job1: (groupid=0, jobs=1): err= 0: pid=2397410: Sun Jul 14 03:51:28 2024 00:19:10.173 read: IOPS=114, BW=459KiB/s (470kB/s)(476KiB/1037msec) 00:19:10.173 slat (nsec): min=5832, max=47151, avg=15089.24, stdev=8788.03 00:19:10.173 clat (usec): min=315, max=42368, avg=7313.89, stdev=15439.07 00:19:10.173 lat (usec): min=327, max=42403, avg=7328.98, stdev=15440.87 00:19:10.173 clat percentiles (usec): 00:19:10.173 | 1.00th=[ 322], 5.00th=[ 326], 10.00th=[ 330], 20.00th=[ 347], 00:19:10.173 | 30.00th=[ 363], 40.00th=[ 383], 50.00th=[ 420], 60.00th=[ 429], 00:19:10.173 | 70.00th=[ 490], 80.00th=[ 553], 90.00th=[41157], 95.00th=[42206], 00:19:10.173 | 
99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:19:10.173 | 99.99th=[42206] 00:19:10.173 write: IOPS=493, BW=1975KiB/s (2022kB/s)(2048KiB/1037msec); 0 zone resets 00:19:10.173 slat (nsec): min=6642, max=53589, avg=18985.76, stdev=9665.71 00:19:10.173 clat (usec): min=238, max=465, avg=296.83, stdev=36.64 00:19:10.173 lat (usec): min=245, max=481, avg=315.82, stdev=37.55 00:19:10.173 clat percentiles (usec): 00:19:10.173 | 1.00th=[ 249], 5.00th=[ 255], 10.00th=[ 258], 20.00th=[ 269], 00:19:10.173 | 30.00th=[ 277], 40.00th=[ 285], 50.00th=[ 289], 60.00th=[ 293], 00:19:10.173 | 70.00th=[ 306], 80.00th=[ 318], 90.00th=[ 359], 95.00th=[ 375], 00:19:10.173 | 99.00th=[ 404], 99.50th=[ 449], 99.90th=[ 465], 99.95th=[ 465], 00:19:10.173 | 99.99th=[ 465] 00:19:10.173 bw ( KiB/s): min= 4096, max= 4096, per=34.63%, avg=4096.00, stdev= 0.00, samples=1 00:19:10.173 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:10.173 lat (usec) : 250=1.27%, 500=93.50%, 750=2.06% 00:19:10.173 lat (msec) : 50=3.17% 00:19:10.173 cpu : usr=0.58%, sys=1.06%, ctx=632, majf=0, minf=1 00:19:10.173 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:10.173 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:10.173 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:10.173 issued rwts: total=119,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:10.173 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:10.173 job2: (groupid=0, jobs=1): err= 0: pid=2397411: Sun Jul 14 03:51:28 2024 00:19:10.173 read: IOPS=20, BW=80.8KiB/s (82.8kB/s)(84.0KiB/1039msec) 00:19:10.173 slat (nsec): min=13047, max=49143, avg=22764.29, stdev=11944.35 00:19:10.173 clat (usec): min=40916, max=41993, avg=41225.36, stdev=419.41 00:19:10.173 lat (usec): min=40944, max=42014, avg=41248.12, stdev=423.92 00:19:10.173 clat percentiles (usec): 00:19:10.173 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:19:10.173 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:19:10.173 | 70.00th=[41157], 80.00th=[41681], 90.00th=[41681], 95.00th=[42206], 00:19:10.173 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:19:10.173 | 99.99th=[42206] 00:19:10.173 write: IOPS=492, BW=1971KiB/s (2018kB/s)(2048KiB/1039msec); 0 zone resets 00:19:10.173 slat (nsec): min=7187, max=74275, avg=20654.71, stdev=10644.07 00:19:10.173 clat (usec): min=215, max=943, avg=311.29, stdev=69.43 00:19:10.173 lat (usec): min=224, max=954, avg=331.95, stdev=71.28 00:19:10.173 clat percentiles (usec): 00:19:10.173 | 1.00th=[ 229], 5.00th=[ 241], 10.00th=[ 249], 20.00th=[ 262], 00:19:10.173 | 30.00th=[ 269], 40.00th=[ 285], 50.00th=[ 297], 60.00th=[ 314], 00:19:10.173 | 70.00th=[ 326], 80.00th=[ 351], 90.00th=[ 388], 95.00th=[ 408], 00:19:10.173 | 99.00th=[ 498], 99.50th=[ 807], 99.90th=[ 947], 99.95th=[ 947], 00:19:10.173 | 99.99th=[ 947] 00:19:10.173 bw ( KiB/s): min= 4096, max= 4096, per=34.63%, avg=4096.00, stdev= 0.00, samples=1 00:19:10.173 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:10.173 lat (usec) : 250=11.07%, 500=84.05%, 750=0.38%, 1000=0.56% 00:19:10.173 lat (msec) : 50=3.94% 00:19:10.173 cpu : usr=0.58%, sys=1.25%, ctx=535, majf=0, minf=1 00:19:10.173 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:10.173 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:10.173 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:19:10.173 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:10.173 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:10.173 job3: (groupid=0, jobs=1): err= 0: pid=2397412: Sun Jul 14 03:51:28 2024 00:19:10.173 read: IOPS=54, BW=216KiB/s (221kB/s)(224KiB/1037msec) 00:19:10.173 slat (nsec): min=8395, max=42171, avg=22082.79, stdev=8593.26 00:19:10.173 clat (usec): min=446, max=42142, avg=15204.79, stdev=19843.02 00:19:10.173 lat (usec): min=464, max=42175, avg=15226.88, stdev=19844.45 00:19:10.173 clat percentiles (usec): 00:19:10.173 | 1.00th=[ 449], 5.00th=[ 457], 10.00th=[ 482], 20.00th=[ 498], 00:19:10.173 | 30.00th=[ 523], 40.00th=[ 537], 50.00th=[ 545], 60.00th=[ 816], 00:19:10.173 | 70.00th=[41157], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:19:10.173 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:19:10.173 | 99.99th=[42206] 00:19:10.173 write: IOPS=493, BW=1975KiB/s (2022kB/s)(2048KiB/1037msec); 0 zone resets 00:19:10.173 slat (nsec): min=7939, max=71410, avg=21900.91, stdev=11455.02 00:19:10.173 clat (usec): min=229, max=681, avg=331.46, stdev=55.41 00:19:10.173 lat (usec): min=251, max=721, avg=353.36, stdev=55.59 00:19:10.173 clat percentiles (usec): 00:19:10.173 | 1.00th=[ 249], 5.00th=[ 265], 10.00th=[ 277], 20.00th=[ 285], 00:19:10.173 | 30.00th=[ 297], 40.00th=[ 306], 50.00th=[ 318], 60.00th=[ 330], 00:19:10.173 | 70.00th=[ 351], 80.00th=[ 383], 90.00th=[ 408], 95.00th=[ 433], 00:19:10.173 | 99.00th=[ 469], 99.50th=[ 529], 99.90th=[ 685], 99.95th=[ 685], 00:19:10.173 | 99.99th=[ 685] 00:19:10.173 bw ( KiB/s): min= 4096, max= 4096, per=34.63%, avg=4096.00, stdev= 0.00, samples=1 00:19:10.173 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:10.173 lat (usec) : 250=1.23%, 500=90.32%, 750=4.40%, 1000=0.53% 00:19:10.173 lat (msec) : 50=3.52% 00:19:10.173 cpu : usr=0.68%, sys=1.64%, ctx=569, majf=0, minf=2 00:19:10.173 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:10.173 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:10.173 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:10.173 issued rwts: total=56,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:10.173 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:10.173 00:19:10.173 Run status group 0 (all jobs): 00:19:10.173 READ: bw=4901KiB/s (5018kB/s), 80.8KiB/s-4304KiB/s (82.8kB/s-4407kB/s), io=5092KiB (5214kB), run=1001-1039msec 00:19:10.173 WRITE: bw=11.5MiB/s (12.1MB/s), 1971KiB/s-6138KiB/s (2018kB/s-6285kB/s), io=12.0MiB (12.6MB), run=1001-1039msec 00:19:10.173 00:19:10.173 Disk stats (read/write): 00:19:10.173 nvme0n1: ios=1064/1077, merge=0/0, ticks=1143/275, in_queue=1418, util=99.00% 00:19:10.173 nvme0n2: ios=153/512, merge=0/0, ticks=954/150, in_queue=1104, util=99.19% 00:19:10.174 nvme0n3: ios=66/512, merge=0/0, ticks=1694/147, in_queue=1841, util=99.06% 00:19:10.174 nvme0n4: ios=74/512, merge=0/0, ticks=1587/151, in_queue=1738, util=98.11% 00:19:10.174 03:51:28 -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:19:10.174 [global] 00:19:10.174 thread=1 00:19:10.174 invalidate=1 00:19:10.174 rw=write 00:19:10.174 time_based=1 00:19:10.174 runtime=1 00:19:10.174 ioengine=libaio 00:19:10.174 direct=1 00:19:10.174 bs=4096 00:19:10.174 iodepth=128 00:19:10.174 norandommap=0 00:19:10.174 numjobs=1 00:19:10.174 00:19:10.174 
verify_dump=1 00:19:10.174 verify_backlog=512 00:19:10.174 verify_state_save=0 00:19:10.174 do_verify=1 00:19:10.174 verify=crc32c-intel 00:19:10.174 [job0] 00:19:10.174 filename=/dev/nvme0n1 00:19:10.174 [job1] 00:19:10.174 filename=/dev/nvme0n2 00:19:10.174 [job2] 00:19:10.174 filename=/dev/nvme0n3 00:19:10.174 [job3] 00:19:10.174 filename=/dev/nvme0n4 00:19:10.174 Could not set queue depth (nvme0n1) 00:19:10.174 Could not set queue depth (nvme0n2) 00:19:10.174 Could not set queue depth (nvme0n3) 00:19:10.174 Could not set queue depth (nvme0n4) 00:19:10.174 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:10.174 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:10.174 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:10.174 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:10.174 fio-3.35 00:19:10.174 Starting 4 threads 00:19:11.554 00:19:11.554 job0: (groupid=0, jobs=1): err= 0: pid=2397642: Sun Jul 14 03:51:30 2024 00:19:11.554 read: IOPS=3576, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1002msec) 00:19:11.554 slat (usec): min=2, max=16521, avg=136.41, stdev=934.29 00:19:11.554 clat (usec): min=3148, max=93064, avg=19208.74, stdev=12388.44 00:19:11.554 lat (usec): min=4355, max=93071, avg=19345.15, stdev=12443.29 00:19:11.554 clat percentiles (usec): 00:19:11.554 | 1.00th=[ 4490], 5.00th=[ 6849], 10.00th=[ 8717], 20.00th=[ 9634], 00:19:11.554 | 30.00th=[11207], 40.00th=[12125], 50.00th=[15401], 60.00th=[18482], 00:19:11.554 | 70.00th=[22414], 80.00th=[28181], 90.00th=[32900], 95.00th=[40109], 00:19:11.554 | 99.00th=[77071], 99.50th=[83362], 99.90th=[83362], 99.95th=[83362], 00:19:11.554 | 99.99th=[92799] 00:19:11.554 write: IOPS=3899, BW=15.2MiB/s (16.0MB/s)(15.3MiB/1002msec); 0 zone resets 00:19:11.554 slat (usec): min=3, max=30558, avg=113.62, stdev=888.33 00:19:11.554 clat (usec): min=324, max=79388, avg=14828.60, stdev=9622.99 00:19:11.554 lat (usec): min=694, max=79429, avg=14942.22, stdev=9709.63 00:19:11.554 clat percentiles (usec): 00:19:11.554 | 1.00th=[ 906], 5.00th=[ 6456], 10.00th=[ 7373], 20.00th=[ 9241], 00:19:11.554 | 30.00th=[ 9634], 40.00th=[11469], 50.00th=[12780], 60.00th=[13960], 00:19:11.554 | 70.00th=[15664], 80.00th=[18744], 90.00th=[24511], 95.00th=[31589], 00:19:11.554 | 99.00th=[68682], 99.50th=[68682], 99.90th=[68682], 99.95th=[68682], 00:19:11.554 | 99.99th=[79168] 00:19:11.554 bw ( KiB/s): min=12288, max=17944, per=25.66%, avg=15116.00, stdev=3999.40, samples=2 00:19:11.554 iops : min= 3072, max= 4486, avg=3779.00, stdev=999.85, samples=2 00:19:11.554 lat (usec) : 500=0.01%, 750=0.07%, 1000=0.88% 00:19:11.554 lat (msec) : 2=0.24%, 4=0.52%, 10=27.63%, 20=43.84%, 50=25.35% 00:19:11.554 lat (msec) : 100=1.46% 00:19:11.554 cpu : usr=4.80%, sys=6.09%, ctx=316, majf=0, minf=1 00:19:11.554 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:19:11.554 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:11.554 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:11.554 issued rwts: total=3584,3907,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:11.554 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:11.554 job1: (groupid=0, jobs=1): err= 0: pid=2397643: Sun Jul 14 03:51:30 2024 00:19:11.554 read: IOPS=2603, BW=10.2MiB/s (10.7MB/s)(10.2MiB/1005msec) 
00:19:11.554 slat (usec): min=2, max=29793, avg=158.93, stdev=1206.63 00:19:11.554 clat (usec): min=2302, max=66367, avg=19857.98, stdev=13277.93 00:19:11.554 lat (usec): min=2318, max=66376, avg=20016.91, stdev=13381.12 00:19:11.554 clat percentiles (usec): 00:19:11.554 | 1.00th=[ 4817], 5.00th=[ 4948], 10.00th=[ 5342], 20.00th=[ 8848], 00:19:11.554 | 30.00th=[ 9372], 40.00th=[15401], 50.00th=[17957], 60.00th=[20317], 00:19:11.554 | 70.00th=[23462], 80.00th=[27919], 90.00th=[33817], 95.00th=[54789], 00:19:11.554 | 99.00th=[60556], 99.50th=[60556], 99.90th=[66323], 99.95th=[66323], 00:19:11.554 | 99.99th=[66323] 00:19:11.554 write: IOPS=3056, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1005msec); 0 zone resets 00:19:11.554 slat (usec): min=3, max=39036, avg=178.41, stdev=1081.98 00:19:11.554 clat (usec): min=6032, max=74957, avg=24464.42, stdev=15072.38 00:19:11.554 lat (usec): min=6051, max=76662, avg=24642.83, stdev=15149.96 00:19:11.554 clat percentiles (usec): 00:19:11.554 | 1.00th=[ 6259], 5.00th=[ 7439], 10.00th=[ 9110], 20.00th=[10421], 00:19:11.554 | 30.00th=[13435], 40.00th=[16450], 50.00th=[20055], 60.00th=[25560], 00:19:11.554 | 70.00th=[32637], 80.00th=[37487], 90.00th=[44303], 95.00th=[54264], 00:19:11.554 | 99.00th=[70779], 99.50th=[72877], 99.90th=[74974], 99.95th=[74974], 00:19:11.554 | 99.99th=[74974] 00:19:11.554 bw ( KiB/s): min=11512, max=12496, per=20.38%, avg=12004.00, stdev=695.79, samples=2 00:19:11.554 iops : min= 2878, max= 3124, avg=3001.00, stdev=173.95, samples=2 00:19:11.554 lat (msec) : 4=0.09%, 10=23.11%, 20=31.34%, 50=39.13%, 100=6.33% 00:19:11.554 cpu : usr=2.79%, sys=3.98%, ctx=292, majf=0, minf=1 00:19:11.554 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:19:11.554 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:11.554 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:11.554 issued rwts: total=2617,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:11.554 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:11.554 job2: (groupid=0, jobs=1): err= 0: pid=2397644: Sun Jul 14 03:51:30 2024 00:19:11.554 read: IOPS=5114, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1001msec) 00:19:11.554 slat (usec): min=3, max=17223, avg=104.27, stdev=726.25 00:19:11.554 clat (usec): min=5576, max=43285, avg=14164.06, stdev=6586.23 00:19:11.554 lat (usec): min=5904, max=43318, avg=14268.32, stdev=6639.43 00:19:11.554 clat percentiles (usec): 00:19:11.554 | 1.00th=[ 6718], 5.00th=[ 7898], 10.00th=[ 8848], 20.00th=[ 9503], 00:19:11.554 | 30.00th=[10421], 40.00th=[11207], 50.00th=[11863], 60.00th=[13173], 00:19:11.554 | 70.00th=[14877], 80.00th=[16712], 90.00th=[21627], 95.00th=[31589], 00:19:11.554 | 99.00th=[36439], 99.50th=[36439], 99.90th=[38536], 99.95th=[42206], 00:19:11.554 | 99.99th=[43254] 00:19:11.554 write: IOPS=5253, BW=20.5MiB/s (21.5MB/s)(20.5MiB/1001msec); 0 zone resets 00:19:11.554 slat (usec): min=4, max=16641, avg=76.81, stdev=536.12 00:19:11.554 clat (usec): min=605, max=33328, avg=10356.11, stdev=3853.51 00:19:11.554 lat (usec): min=1504, max=33368, avg=10432.92, stdev=3871.64 00:19:11.554 clat percentiles (usec): 00:19:11.554 | 1.00th=[ 4178], 5.00th=[ 5932], 10.00th=[ 6652], 20.00th=[ 7570], 00:19:11.554 | 30.00th=[ 8291], 40.00th=[ 9241], 50.00th=[10159], 60.00th=[10814], 00:19:11.554 | 70.00th=[11207], 80.00th=[11731], 90.00th=[14615], 95.00th=[16712], 00:19:11.554 | 99.00th=[30540], 99.50th=[30802], 99.90th=[31065], 99.95th=[31065], 00:19:11.554 | 99.99th=[33424] 00:19:11.554 bw ( 
KiB/s): min=16800, max=24617, per=35.16%, avg=20708.50, stdev=5527.45, samples=2 00:19:11.554 iops : min= 4200, max= 6154, avg=5177.00, stdev=1381.69, samples=2 00:19:11.554 lat (usec) : 750=0.01% 00:19:11.554 lat (msec) : 2=0.07%, 4=0.34%, 10=36.74%, 20=55.09%, 50=7.76% 00:19:11.554 cpu : usr=7.80%, sys=11.40%, ctx=375, majf=0, minf=1 00:19:11.554 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:19:11.554 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:11.554 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:11.554 issued rwts: total=5120,5259,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:11.554 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:11.554 job3: (groupid=0, jobs=1): err= 0: pid=2397645: Sun Jul 14 03:51:30 2024 00:19:11.554 read: IOPS=2076, BW=8307KiB/s (8507kB/s)(8324KiB/1002msec) 00:19:11.554 slat (usec): min=3, max=17886, avg=255.25, stdev=1261.78 00:19:11.554 clat (usec): min=1154, max=88841, avg=32184.52, stdev=16575.35 00:19:11.554 lat (msec): min=4, max=104, avg=32.44, stdev=16.68 00:19:11.554 clat percentiles (usec): 00:19:11.554 | 1.00th=[ 5538], 5.00th=[ 5932], 10.00th=[ 6325], 20.00th=[13960], 00:19:11.554 | 30.00th=[23200], 40.00th=[31327], 50.00th=[33162], 60.00th=[35914], 00:19:11.554 | 70.00th=[38011], 80.00th=[43779], 90.00th=[55313], 95.00th=[62653], 00:19:11.554 | 99.00th=[72877], 99.50th=[73925], 99.90th=[73925], 99.95th=[73925], 00:19:11.554 | 99.99th=[88605] 00:19:11.554 write: IOPS=2554, BW=9.98MiB/s (10.5MB/s)(10.0MiB/1002msec); 0 zone resets 00:19:11.554 slat (usec): min=4, max=14665, avg=168.88, stdev=833.21 00:19:11.554 clat (usec): min=1445, max=51709, avg=23480.18, stdev=8274.12 00:19:11.554 lat (usec): min=1458, max=51715, avg=23649.06, stdev=8316.59 00:19:11.554 clat percentiles (usec): 00:19:11.555 | 1.00th=[ 6915], 5.00th=[11076], 10.00th=[13698], 20.00th=[16057], 00:19:11.555 | 30.00th=[18220], 40.00th=[20841], 50.00th=[22938], 60.00th=[24773], 00:19:11.555 | 70.00th=[27657], 80.00th=[30016], 90.00th=[34341], 95.00th=[36963], 00:19:11.555 | 99.00th=[47973], 99.50th=[49021], 99.90th=[50070], 99.95th=[50070], 00:19:11.555 | 99.99th=[51643] 00:19:11.555 bw ( KiB/s): min= 9848, max= 9880, per=16.75%, avg=9864.00, stdev=22.63, samples=2 00:19:11.555 iops : min= 2462, max= 2470, avg=2466.00, stdev= 5.66, samples=2 00:19:11.555 lat (msec) : 2=0.24%, 4=0.17%, 10=5.90%, 20=25.77%, 50=61.47% 00:19:11.555 lat (msec) : 100=6.44% 00:19:11.555 cpu : usr=3.50%, sys=6.19%, ctx=287, majf=0, minf=1 00:19:11.555 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 00:19:11.555 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:11.555 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:11.555 issued rwts: total=2081,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:11.555 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:11.555 00:19:11.555 Run status group 0 (all jobs): 00:19:11.555 READ: bw=52.1MiB/s (54.6MB/s), 8307KiB/s-20.0MiB/s (8507kB/s-20.9MB/s), io=52.4MiB (54.9MB), run=1001-1005msec 00:19:11.555 WRITE: bw=57.5MiB/s (60.3MB/s), 9.98MiB/s-20.5MiB/s (10.5MB/s-21.5MB/s), io=57.8MiB (60.6MB), run=1001-1005msec 00:19:11.555 00:19:11.555 Disk stats (read/write): 00:19:11.555 nvme0n1: ios=3210/3584, merge=0/0, ticks=31132/23663, in_queue=54795, util=87.47% 00:19:11.555 nvme0n2: ios=2067/2215, merge=0/0, ticks=26413/29825, in_queue=56238, util=98.17% 00:19:11.555 nvme0n3: 
ios=4611/4608, merge=0/0, ticks=52017/42959, in_queue=94976, util=99.69% 00:19:11.555 nvme0n4: ios=1678/2048, merge=0/0, ticks=19885/21470, in_queue=41355, util=98.32% 00:19:11.555 03:51:30 -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:19:11.555 [global] 00:19:11.555 thread=1 00:19:11.555 invalidate=1 00:19:11.555 rw=randwrite 00:19:11.555 time_based=1 00:19:11.555 runtime=1 00:19:11.555 ioengine=libaio 00:19:11.555 direct=1 00:19:11.555 bs=4096 00:19:11.555 iodepth=128 00:19:11.555 norandommap=0 00:19:11.555 numjobs=1 00:19:11.555 00:19:11.555 verify_dump=1 00:19:11.555 verify_backlog=512 00:19:11.555 verify_state_save=0 00:19:11.555 do_verify=1 00:19:11.555 verify=crc32c-intel 00:19:11.555 [job0] 00:19:11.555 filename=/dev/nvme0n1 00:19:11.555 [job1] 00:19:11.555 filename=/dev/nvme0n2 00:19:11.555 [job2] 00:19:11.555 filename=/dev/nvme0n3 00:19:11.555 [job3] 00:19:11.555 filename=/dev/nvme0n4 00:19:11.555 Could not set queue depth (nvme0n1) 00:19:11.555 Could not set queue depth (nvme0n2) 00:19:11.555 Could not set queue depth (nvme0n3) 00:19:11.555 Could not set queue depth (nvme0n4) 00:19:11.555 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:11.555 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:11.555 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:11.555 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:11.555 fio-3.35 00:19:11.555 Starting 4 threads 00:19:12.935 00:19:12.935 job0: (groupid=0, jobs=1): err= 0: pid=2397994: Sun Jul 14 03:51:31 2024 00:19:12.935 read: IOPS=4067, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1007msec) 00:19:12.935 slat (usec): min=2, max=13365, avg=122.20, stdev=770.28 00:19:12.935 clat (usec): min=2498, max=45328, avg=15520.29, stdev=5894.58 00:19:12.935 lat (usec): min=2501, max=45333, avg=15642.49, stdev=5933.52 00:19:12.935 clat percentiles (usec): 00:19:12.935 | 1.00th=[ 5211], 5.00th=[ 8979], 10.00th=[10552], 20.00th=[11207], 00:19:12.935 | 30.00th=[11731], 40.00th=[12911], 50.00th=[14222], 60.00th=[15139], 00:19:12.935 | 70.00th=[18220], 80.00th=[19792], 90.00th=[21103], 95.00th=[26084], 00:19:12.935 | 99.00th=[36963], 99.50th=[43254], 99.90th=[45351], 99.95th=[45351], 00:19:12.935 | 99.99th=[45351] 00:19:12.935 write: IOPS=4238, BW=16.6MiB/s (17.4MB/s)(16.7MiB/1007msec); 0 zone resets 00:19:12.935 slat (usec): min=3, max=8988, avg=108.38, stdev=609.76 00:19:12.935 clat (usec): min=1901, max=45329, avg=14963.84, stdev=7008.22 00:19:12.935 lat (usec): min=3177, max=45334, avg=15072.23, stdev=7038.24 00:19:12.935 clat percentiles (usec): 00:19:12.935 | 1.00th=[ 4490], 5.00th=[ 7767], 10.00th=[ 8717], 20.00th=[10028], 00:19:12.935 | 30.00th=[10683], 40.00th=[11338], 50.00th=[12518], 60.00th=[13566], 00:19:12.935 | 70.00th=[16188], 80.00th=[20317], 90.00th=[26084], 95.00th=[28705], 00:19:12.935 | 99.00th=[38536], 99.50th=[39060], 99.90th=[43254], 99.95th=[43254], 00:19:12.935 | 99.99th=[45351] 00:19:12.935 bw ( KiB/s): min=16384, max=16816, per=25.56%, avg=16600.00, stdev=305.47, samples=2 00:19:12.935 iops : min= 4096, max= 4204, avg=4150.00, stdev=76.37, samples=2 00:19:12.935 lat (msec) : 2=0.01%, 4=0.14%, 10=13.38%, 20=67.85%, 50=18.62% 00:19:12.935 cpu : usr=2.39%, sys=6.46%, ctx=471, majf=0, minf=1 
00:19:12.935 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:19:12.935 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:12.935 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:12.935 issued rwts: total=4096,4268,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:12.935 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:12.935 job1: (groupid=0, jobs=1): err= 0: pid=2397996: Sun Jul 14 03:51:31 2024 00:19:12.935 read: IOPS=3205, BW=12.5MiB/s (13.1MB/s)(12.6MiB/1007msec) 00:19:12.935 slat (usec): min=2, max=21172, avg=146.08, stdev=943.40 00:19:12.935 clat (usec): min=2013, max=60110, avg=20181.87, stdev=8597.91 00:19:12.935 lat (usec): min=6820, max=60140, avg=20327.95, stdev=8663.49 00:19:12.935 clat percentiles (usec): 00:19:12.935 | 1.00th=[ 7635], 5.00th=[10552], 10.00th=[11994], 20.00th=[13829], 00:19:12.935 | 30.00th=[15008], 40.00th=[16581], 50.00th=[17695], 60.00th=[19792], 00:19:12.935 | 70.00th=[22414], 80.00th=[25035], 90.00th=[31065], 95.00th=[37487], 00:19:12.935 | 99.00th=[49546], 99.50th=[53216], 99.90th=[53216], 99.95th=[58459], 00:19:12.935 | 99.99th=[60031] 00:19:12.935 write: IOPS=3559, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1007msec); 0 zone resets 00:19:12.935 slat (usec): min=3, max=22132, avg=140.44, stdev=952.24 00:19:12.935 clat (usec): min=6251, max=44597, avg=17374.33, stdev=6131.93 00:19:12.935 lat (usec): min=6266, max=51824, avg=17514.77, stdev=6208.63 00:19:12.935 clat percentiles (usec): 00:19:12.935 | 1.00th=[ 7701], 5.00th=[10028], 10.00th=[10683], 20.00th=[12125], 00:19:12.935 | 30.00th=[13304], 40.00th=[13960], 50.00th=[15139], 60.00th=[17695], 00:19:12.935 | 70.00th=[20055], 80.00th=[21890], 90.00th=[27919], 95.00th=[29230], 00:19:12.935 | 99.00th=[30540], 99.50th=[34341], 99.90th=[36963], 99.95th=[43254], 00:19:12.935 | 99.99th=[44827] 00:19:12.935 bw ( KiB/s): min=12288, max=16384, per=22.07%, avg=14336.00, stdev=2896.31, samples=2 00:19:12.935 iops : min= 3072, max= 4096, avg=3584.00, stdev=724.08, samples=2 00:19:12.935 lat (msec) : 4=0.01%, 10=3.83%, 20=61.71%, 50=33.97%, 100=0.47% 00:19:12.935 cpu : usr=2.78%, sys=6.46%, ctx=304, majf=0, minf=1 00:19:12.935 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:19:12.935 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:12.935 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:12.935 issued rwts: total=3228,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:12.935 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:12.935 job2: (groupid=0, jobs=1): err= 0: pid=2398003: Sun Jul 14 03:51:31 2024 00:19:12.935 read: IOPS=3050, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1007msec) 00:19:12.935 slat (usec): min=2, max=18869, avg=142.02, stdev=961.64 00:19:12.935 clat (usec): min=8675, max=57170, avg=18217.75, stdev=5489.54 00:19:12.935 lat (usec): min=8686, max=65312, avg=18359.77, stdev=5565.33 00:19:12.935 clat percentiles (usec): 00:19:12.935 | 1.00th=[ 9896], 5.00th=[11863], 10.00th=[12649], 20.00th=[13960], 00:19:12.935 | 30.00th=[14877], 40.00th=[15533], 50.00th=[16909], 60.00th=[19792], 00:19:12.935 | 70.00th=[20579], 80.00th=[21890], 90.00th=[23987], 95.00th=[26346], 00:19:12.935 | 99.00th=[33817], 99.50th=[56886], 99.90th=[56886], 99.95th=[56886], 00:19:12.935 | 99.99th=[57410] 00:19:12.935 write: IOPS=3354, BW=13.1MiB/s (13.7MB/s)(13.2MiB/1007msec); 0 zone resets 00:19:12.935 slat (usec): min=3, max=63256, avg=161.55, stdev=1421.24 
00:19:12.935 clat (usec): min=1811, max=109377, avg=20097.72, stdev=14285.68 00:19:12.935 lat (msec): min=7, max=109, avg=20.26, stdev=14.37 00:19:12.935 clat percentiles (msec): 00:19:12.935 | 1.00th=[ 8], 5.00th=[ 12], 10.00th=[ 13], 20.00th=[ 14], 00:19:12.935 | 30.00th=[ 15], 40.00th=[ 15], 50.00th=[ 16], 60.00th=[ 18], 00:19:12.935 | 70.00th=[ 20], 80.00th=[ 23], 90.00th=[ 31], 95.00th=[ 36], 00:19:12.935 | 99.00th=[ 100], 99.50th=[ 100], 99.90th=[ 103], 99.95th=[ 110], 00:19:12.935 | 99.99th=[ 110] 00:19:12.935 bw ( KiB/s): min=12288, max=13712, per=20.02%, avg=13000.00, stdev=1006.92, samples=2 00:19:12.935 iops : min= 3072, max= 3428, avg=3250.00, stdev=251.73, samples=2 00:19:12.935 lat (msec) : 2=0.02%, 10=2.28%, 20=65.32%, 50=30.82%, 100=1.32% 00:19:12.935 lat (msec) : 250=0.25% 00:19:12.935 cpu : usr=2.19%, sys=4.57%, ctx=262, majf=0, minf=1 00:19:12.935 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:19:12.935 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:12.935 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:12.935 issued rwts: total=3072,3378,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:12.935 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:12.935 job3: (groupid=0, jobs=1): err= 0: pid=2398004: Sun Jul 14 03:51:31 2024 00:19:12.935 read: IOPS=4864, BW=19.0MiB/s (19.9MB/s)(19.1MiB/1004msec) 00:19:12.935 slat (usec): min=2, max=6554, avg=94.33, stdev=548.25 00:19:12.935 clat (usec): min=962, max=26195, avg=12406.50, stdev=2716.29 00:19:12.935 lat (usec): min=3277, max=26199, avg=12500.83, stdev=2728.79 00:19:12.935 clat percentiles (usec): 00:19:12.935 | 1.00th=[ 4178], 5.00th=[ 8586], 10.00th=[10028], 20.00th=[10814], 00:19:12.935 | 30.00th=[11600], 40.00th=[11994], 50.00th=[12387], 60.00th=[12649], 00:19:12.935 | 70.00th=[12780], 80.00th=[13173], 90.00th=[15270], 95.00th=[16581], 00:19:12.935 | 99.00th=[21365], 99.50th=[26084], 99.90th=[26084], 99.95th=[26084], 00:19:12.935 | 99.99th=[26084] 00:19:12.935 write: IOPS=5099, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1004msec); 0 zone resets 00:19:12.935 slat (usec): min=3, max=8422, avg=96.69, stdev=535.09 00:19:12.935 clat (usec): min=4338, max=34498, avg=12961.18, stdev=4447.51 00:19:12.935 lat (usec): min=4355, max=34509, avg=13057.87, stdev=4480.77 00:19:12.935 clat percentiles (usec): 00:19:12.935 | 1.00th=[ 7111], 5.00th=[ 8848], 10.00th=[ 9503], 20.00th=[10552], 00:19:12.935 | 30.00th=[11338], 40.00th=[11600], 50.00th=[11863], 60.00th=[12125], 00:19:12.935 | 70.00th=[12780], 80.00th=[13960], 90.00th=[15664], 95.00th=[25560], 00:19:12.935 | 99.00th=[29230], 99.50th=[30540], 99.90th=[33162], 99.95th=[33424], 00:19:12.935 | 99.99th=[34341] 00:19:12.935 bw ( KiB/s): min=19976, max=20984, per=31.53%, avg=20480.00, stdev=712.76, samples=2 00:19:12.935 iops : min= 4994, max= 5246, avg=5120.00, stdev=178.19, samples=2 00:19:12.935 lat (usec) : 1000=0.01% 00:19:12.935 lat (msec) : 4=0.32%, 10=11.84%, 20=83.22%, 50=4.62% 00:19:12.935 cpu : usr=6.48%, sys=7.48%, ctx=488, majf=0, minf=1 00:19:12.935 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:19:12.935 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:12.935 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:12.935 issued rwts: total=4884,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:12.935 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:12.935 00:19:12.935 Run status group 0 (all 
jobs): 00:19:12.935 READ: bw=59.3MiB/s (62.2MB/s), 11.9MiB/s-19.0MiB/s (12.5MB/s-19.9MB/s), io=59.7MiB (62.6MB), run=1004-1007msec 00:19:12.935 WRITE: bw=63.4MiB/s (66.5MB/s), 13.1MiB/s-19.9MiB/s (13.7MB/s-20.9MB/s), io=63.9MiB (67.0MB), run=1004-1007msec 00:19:12.935 00:19:12.935 Disk stats (read/write): 00:19:12.935 nvme0n1: ios=3542/3584, merge=0/0, ticks=33008/32980, in_queue=65988, util=87.58% 00:19:12.935 nvme0n2: ios=2707/3072, merge=0/0, ticks=22978/23699, in_queue=46677, util=87.09% 00:19:12.935 nvme0n3: ios=2543/2759, merge=0/0, ticks=18770/19703, in_queue=38473, util=88.90% 00:19:12.935 nvme0n4: ios=4096/4249, merge=0/0, ticks=23812/25098, in_queue=48910, util=88.70% 00:19:12.935 03:51:31 -- target/fio.sh@55 -- # sync 00:19:12.935 03:51:31 -- target/fio.sh@59 -- # fio_pid=2398140 00:19:12.935 03:51:31 -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:19:12.935 03:51:31 -- target/fio.sh@61 -- # sleep 3 00:19:12.935 [global] 00:19:12.935 thread=1 00:19:12.935 invalidate=1 00:19:12.935 rw=read 00:19:12.935 time_based=1 00:19:12.935 runtime=10 00:19:12.935 ioengine=libaio 00:19:12.935 direct=1 00:19:12.935 bs=4096 00:19:12.935 iodepth=1 00:19:12.935 norandommap=1 00:19:12.935 numjobs=1 00:19:12.935 00:19:12.935 [job0] 00:19:12.935 filename=/dev/nvme0n1 00:19:12.935 [job1] 00:19:12.935 filename=/dev/nvme0n2 00:19:12.935 [job2] 00:19:12.935 filename=/dev/nvme0n3 00:19:12.935 [job3] 00:19:12.935 filename=/dev/nvme0n4 00:19:12.935 Could not set queue depth (nvme0n1) 00:19:12.935 Could not set queue depth (nvme0n2) 00:19:12.935 Could not set queue depth (nvme0n3) 00:19:12.935 Could not set queue depth (nvme0n4) 00:19:13.194 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:13.194 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:13.194 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:13.194 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:13.194 fio-3.35 00:19:13.194 Starting 4 threads 00:19:16.477 03:51:34 -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:19:16.477 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=31567872, buflen=4096 00:19:16.477 fio: pid=2398238, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:19:16.477 03:51:34 -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:19:16.477 03:51:35 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:16.477 03:51:35 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:19:16.477 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=20434944, buflen=4096 00:19:16.477 fio: pid=2398237, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:19:16.734 03:51:35 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:16.734 03:51:35 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:19:16.734 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=13418496, buflen=4096 00:19:16.734 fio: 
pid=2398233, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:19:16.991 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=5644288, buflen=4096 00:19:16.991 fio: pid=2398234, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:19:16.991 03:51:35 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:16.991 03:51:35 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:19:16.991 00:19:16.991 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2398233: Sun Jul 14 03:51:35 2024 00:19:16.991 read: IOPS=948, BW=3792KiB/s (3883kB/s)(12.8MiB/3456msec) 00:19:16.991 slat (usec): min=4, max=10767, avg=25.15, stdev=232.69 00:19:16.991 clat (usec): min=308, max=42520, avg=1017.94, stdev=4800.01 00:19:16.991 lat (usec): min=315, max=49007, avg=1039.81, stdev=4822.01 00:19:16.991 clat percentiles (usec): 00:19:16.991 | 1.00th=[ 322], 5.00th=[ 351], 10.00th=[ 371], 20.00th=[ 388], 00:19:16.991 | 30.00th=[ 408], 40.00th=[ 424], 50.00th=[ 441], 60.00th=[ 457], 00:19:16.991 | 70.00th=[ 474], 80.00th=[ 498], 90.00th=[ 529], 95.00th=[ 570], 00:19:16.991 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:19:16.991 | 99.99th=[42730] 00:19:16.991 bw ( KiB/s): min= 120, max= 7848, per=23.18%, avg=4352.00, stdev=3536.27, samples=6 00:19:16.991 iops : min= 30, max= 1962, avg=1088.00, stdev=884.07, samples=6 00:19:16.991 lat (usec) : 500=81.84%, 750=16.39%, 1000=0.27% 00:19:16.991 lat (msec) : 2=0.06%, 50=1.40% 00:19:16.991 cpu : usr=0.78%, sys=2.11%, ctx=3280, majf=0, minf=1 00:19:16.991 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:16.991 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:16.991 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:16.991 issued rwts: total=3277,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:16.991 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:16.991 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2398234: Sun Jul 14 03:51:35 2024 00:19:16.991 read: IOPS=373, BW=1491KiB/s (1527kB/s)(5512KiB/3697msec) 00:19:16.991 slat (usec): min=5, max=10818, avg=31.46, stdev=290.91 00:19:16.991 clat (usec): min=302, max=44683, avg=2627.84, stdev=9176.31 00:19:16.991 lat (usec): min=315, max=44697, avg=2659.27, stdev=9178.07 00:19:16.991 clat percentiles (usec): 00:19:16.991 | 1.00th=[ 314], 5.00th=[ 330], 10.00th=[ 351], 20.00th=[ 379], 00:19:16.991 | 30.00th=[ 404], 40.00th=[ 424], 50.00th=[ 445], 60.00th=[ 461], 00:19:16.991 | 70.00th=[ 486], 80.00th=[ 506], 90.00th=[ 578], 95.00th=[40633], 00:19:16.991 | 99.00th=[41157], 99.50th=[41681], 99.90th=[42730], 99.95th=[44827], 00:19:16.991 | 99.99th=[44827] 00:19:16.992 bw ( KiB/s): min= 96, max= 6608, per=8.24%, avg=1547.43, stdev=2455.36, samples=7 00:19:16.992 iops : min= 24, max= 1652, avg=386.86, stdev=613.84, samples=7 00:19:16.992 lat (usec) : 500=77.37%, 750=16.82%, 1000=0.36% 00:19:16.992 lat (msec) : 50=5.37% 00:19:16.992 cpu : usr=0.30%, sys=1.08%, ctx=1383, majf=0, minf=1 00:19:16.992 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:16.992 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:16.992 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:16.992 issued rwts: 
total=1379,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:16.992 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:16.992 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2398237: Sun Jul 14 03:51:35 2024 00:19:16.992 read: IOPS=1561, BW=6244KiB/s (6394kB/s)(19.5MiB/3196msec) 00:19:16.992 slat (usec): min=6, max=9883, avg=12.07, stdev=139.80 00:19:16.992 clat (usec): min=303, max=41606, avg=621.41, stdev=1924.51 00:19:16.992 lat (usec): min=309, max=41619, avg=633.48, stdev=1929.35 00:19:16.992 clat percentiles (usec): 00:19:16.992 | 1.00th=[ 318], 5.00th=[ 330], 10.00th=[ 359], 20.00th=[ 433], 00:19:16.992 | 30.00th=[ 465], 40.00th=[ 498], 50.00th=[ 529], 60.00th=[ 562], 00:19:16.992 | 70.00th=[ 594], 80.00th=[ 635], 90.00th=[ 676], 95.00th=[ 701], 00:19:16.992 | 99.00th=[ 766], 99.50th=[ 848], 99.90th=[41157], 99.95th=[41681], 00:19:16.992 | 99.99th=[41681] 00:19:16.992 bw ( KiB/s): min= 5528, max= 7024, per=33.01%, avg=6196.00, stdev=570.23, samples=6 00:19:16.992 iops : min= 1382, max= 1756, avg=1549.00, stdev=142.56, samples=6 00:19:16.992 lat (usec) : 500=40.80%, 750=57.49%, 1000=1.40% 00:19:16.992 lat (msec) : 10=0.04%, 20=0.02%, 50=0.22% 00:19:16.992 cpu : usr=1.28%, sys=2.13%, ctx=4993, majf=0, minf=1 00:19:16.992 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:16.992 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:16.992 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:16.992 issued rwts: total=4990,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:16.992 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:16.992 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2398238: Sun Jul 14 03:51:35 2024 00:19:16.992 read: IOPS=2663, BW=10.4MiB/s (10.9MB/s)(30.1MiB/2894msec) 00:19:16.992 slat (nsec): min=5310, max=64026, avg=11336.27, stdev=5799.13 00:19:16.992 clat (usec): min=308, max=1189, avg=360.02, stdev=31.95 00:19:16.992 lat (usec): min=314, max=1194, avg=371.36, stdev=34.97 00:19:16.992 clat percentiles (usec): 00:19:16.992 | 1.00th=[ 322], 5.00th=[ 330], 10.00th=[ 334], 20.00th=[ 338], 00:19:16.992 | 30.00th=[ 343], 40.00th=[ 351], 50.00th=[ 355], 60.00th=[ 363], 00:19:16.992 | 70.00th=[ 371], 80.00th=[ 375], 90.00th=[ 383], 95.00th=[ 396], 00:19:16.992 | 99.00th=[ 502], 99.50th=[ 519], 99.90th=[ 578], 99.95th=[ 603], 00:19:16.992 | 99.99th=[ 1188] 00:19:16.992 bw ( KiB/s): min=10024, max=11376, per=56.73%, avg=10649.60, stdev=619.56, samples=5 00:19:16.992 iops : min= 2506, max= 2844, avg=2662.40, stdev=154.89, samples=5 00:19:16.992 lat (usec) : 500=98.86%, 750=1.12% 00:19:16.992 lat (msec) : 2=0.01% 00:19:16.992 cpu : usr=2.04%, sys=4.67%, ctx=7709, majf=0, minf=1 00:19:16.992 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:16.992 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:16.992 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:16.992 issued rwts: total=7708,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:16.992 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:16.992 00:19:16.992 Run status group 0 (all jobs): 00:19:16.992 READ: bw=18.3MiB/s (19.2MB/s), 1491KiB/s-10.4MiB/s (1527kB/s-10.9MB/s), io=67.8MiB (71.1MB), run=2894-3697msec 00:19:16.992 00:19:16.992 Disk stats (read/write): 00:19:16.992 nvme0n1: ios=3274/0, merge=0/0, ticks=3163/0, in_queue=3163, util=95.74% 
00:19:16.992 nvme0n2: ios=1376/0, merge=0/0, ticks=3512/0, in_queue=3512, util=96.28% 00:19:16.992 nvme0n3: ios=4949/0, merge=0/0, ticks=4052/0, in_queue=4052, util=99.06% 00:19:16.992 nvme0n4: ios=7617/0, merge=0/0, ticks=2633/0, in_queue=2633, util=96.75% 00:19:17.250 03:51:35 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:17.250 03:51:35 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:19:17.508 03:51:36 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:17.508 03:51:36 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:19:17.769 03:51:36 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:17.769 03:51:36 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:19:18.026 03:51:36 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:18.026 03:51:36 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:19:18.284 03:51:36 -- target/fio.sh@69 -- # fio_status=0 00:19:18.284 03:51:36 -- target/fio.sh@70 -- # wait 2398140 00:19:18.284 03:51:36 -- target/fio.sh@70 -- # fio_status=4 00:19:18.284 03:51:36 -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:18.284 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:18.284 03:51:37 -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:19:18.284 03:51:37 -- common/autotest_common.sh@1198 -- # local i=0 00:19:18.284 03:51:37 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:19:18.284 03:51:37 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:18.284 03:51:37 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:19:18.284 03:51:37 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:18.284 03:51:37 -- common/autotest_common.sh@1210 -- # return 0 00:19:18.284 03:51:37 -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:19:18.284 03:51:37 -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:19:18.284 nvmf hotplug test: fio failed as expected 00:19:18.284 03:51:37 -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:18.542 03:51:37 -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:19:18.542 03:51:37 -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:19:18.542 03:51:37 -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:19:18.542 03:51:37 -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:19:18.542 03:51:37 -- target/fio.sh@91 -- # nvmftestfini 00:19:18.542 03:51:37 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:18.542 03:51:37 -- nvmf/common.sh@116 -- # sync 00:19:18.542 03:51:37 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:19:18.542 03:51:37 -- nvmf/common.sh@119 -- # set +e 00:19:18.542 03:51:37 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:18.542 03:51:37 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:19:18.542 rmmod nvme_tcp 00:19:18.542 rmmod nvme_fabrics 00:19:18.542 rmmod nvme_keyring 00:19:18.542 03:51:37 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:18.542 03:51:37 -- 
nvmf/common.sh@123 -- # set -e 00:19:18.542 03:51:37 -- nvmf/common.sh@124 -- # return 0 00:19:18.542 03:51:37 -- nvmf/common.sh@477 -- # '[' -n 2396062 ']' 00:19:18.542 03:51:37 -- nvmf/common.sh@478 -- # killprocess 2396062 00:19:18.542 03:51:37 -- common/autotest_common.sh@926 -- # '[' -z 2396062 ']' 00:19:18.542 03:51:37 -- common/autotest_common.sh@930 -- # kill -0 2396062 00:19:18.542 03:51:37 -- common/autotest_common.sh@931 -- # uname 00:19:18.542 03:51:37 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:18.542 03:51:37 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2396062 00:19:18.542 03:51:37 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:19:18.542 03:51:37 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:19:18.542 03:51:37 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2396062' 00:19:18.542 killing process with pid 2396062 00:19:18.542 03:51:37 -- common/autotest_common.sh@945 -- # kill 2396062 00:19:18.542 03:51:37 -- common/autotest_common.sh@950 -- # wait 2396062 00:19:18.801 03:51:37 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:18.801 03:51:37 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:19:18.801 03:51:37 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:19:18.801 03:51:37 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:18.801 03:51:37 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:19:18.801 03:51:37 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:18.801 03:51:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:18.801 03:51:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:21.340 03:51:39 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:19:21.340 00:19:21.340 real 0m23.721s 00:19:21.340 user 1m20.971s 00:19:21.340 sys 0m7.308s 00:19:21.340 03:51:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:21.340 03:51:39 -- common/autotest_common.sh@10 -- # set +x 00:19:21.340 ************************************ 00:19:21.340 END TEST nvmf_fio_target 00:19:21.340 ************************************ 00:19:21.340 03:51:39 -- nvmf/nvmf.sh@55 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:19:21.340 03:51:39 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:19:21.340 03:51:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:21.340 03:51:39 -- common/autotest_common.sh@10 -- # set +x 00:19:21.340 ************************************ 00:19:21.340 START TEST nvmf_bdevio 00:19:21.340 ************************************ 00:19:21.340 03:51:39 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:19:21.340 * Looking for test storage... 
00:19:21.340 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:21.340 03:51:39 -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:21.340 03:51:39 -- nvmf/common.sh@7 -- # uname -s 00:19:21.340 03:51:39 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:21.340 03:51:39 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:21.340 03:51:39 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:21.340 03:51:39 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:21.340 03:51:39 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:21.340 03:51:39 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:21.340 03:51:39 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:21.340 03:51:39 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:21.340 03:51:39 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:21.340 03:51:39 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:21.340 03:51:39 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:21.340 03:51:39 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:19:21.340 03:51:39 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:21.340 03:51:39 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:21.340 03:51:39 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:21.340 03:51:39 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:21.340 03:51:39 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:21.340 03:51:39 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:21.340 03:51:39 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:21.340 03:51:39 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:21.340 03:51:39 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:21.340 03:51:39 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:21.340 03:51:39 -- paths/export.sh@5 -- # export PATH 00:19:21.340 03:51:39 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:21.340 03:51:39 -- nvmf/common.sh@46 -- # : 0 00:19:21.340 03:51:39 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:21.340 03:51:39 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:21.340 03:51:39 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:21.340 03:51:39 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:21.340 03:51:39 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:21.340 03:51:39 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:19:21.340 03:51:39 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:21.340 03:51:39 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:21.340 03:51:39 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:21.340 03:51:39 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:21.340 03:51:39 -- target/bdevio.sh@14 -- # nvmftestinit 00:19:21.340 03:51:39 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:19:21.340 03:51:39 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:21.340 03:51:39 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:21.340 03:51:39 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:21.340 03:51:39 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:21.340 03:51:39 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:21.340 03:51:39 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:21.340 03:51:39 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:21.340 03:51:39 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:19:21.340 03:51:39 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:19:21.340 03:51:39 -- nvmf/common.sh@284 -- # xtrace_disable 00:19:21.340 03:51:39 -- common/autotest_common.sh@10 -- # set +x 00:19:23.273 03:51:41 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:23.273 03:51:41 -- nvmf/common.sh@290 -- # pci_devs=() 00:19:23.273 03:51:41 -- nvmf/common.sh@290 -- # local -a pci_devs 00:19:23.273 03:51:41 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:19:23.273 03:51:41 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:19:23.273 03:51:41 -- nvmf/common.sh@292 -- # pci_drivers=() 00:19:23.273 03:51:41 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:19:23.273 03:51:41 -- nvmf/common.sh@294 -- # net_devs=() 00:19:23.273 03:51:41 -- nvmf/common.sh@294 -- # local -ga net_devs 00:19:23.273 03:51:41 -- nvmf/common.sh@295 
-- # e810=() 00:19:23.273 03:51:41 -- nvmf/common.sh@295 -- # local -ga e810 00:19:23.273 03:51:41 -- nvmf/common.sh@296 -- # x722=() 00:19:23.273 03:51:41 -- nvmf/common.sh@296 -- # local -ga x722 00:19:23.273 03:51:41 -- nvmf/common.sh@297 -- # mlx=() 00:19:23.273 03:51:41 -- nvmf/common.sh@297 -- # local -ga mlx 00:19:23.273 03:51:41 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:23.273 03:51:41 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:23.273 03:51:41 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:23.273 03:51:41 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:23.273 03:51:41 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:23.273 03:51:41 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:23.273 03:51:41 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:23.273 03:51:41 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:23.273 03:51:41 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:23.273 03:51:41 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:23.273 03:51:41 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:23.273 03:51:41 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:19:23.273 03:51:41 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:19:23.273 03:51:41 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:19:23.273 03:51:41 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:19:23.273 03:51:41 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:19:23.273 03:51:41 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:19:23.273 03:51:41 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:23.273 03:51:41 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:23.273 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:23.273 03:51:41 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:19:23.273 03:51:41 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:19:23.273 03:51:41 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:23.273 03:51:41 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:23.273 03:51:41 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:19:23.273 03:51:41 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:23.273 03:51:41 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:23.273 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:23.273 03:51:41 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:19:23.273 03:51:41 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:19:23.273 03:51:41 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:23.273 03:51:41 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:23.273 03:51:41 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:19:23.273 03:51:41 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:19:23.273 03:51:41 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:19:23.273 03:51:41 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:19:23.273 03:51:41 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:23.273 03:51:41 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:23.273 03:51:41 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:23.273 03:51:41 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:23.273 03:51:41 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:23.273 Found 
net devices under 0000:0a:00.0: cvl_0_0 00:19:23.273 03:51:41 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:23.273 03:51:41 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:23.273 03:51:41 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:23.273 03:51:41 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:23.273 03:51:41 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:23.273 03:51:41 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:23.273 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:23.273 03:51:41 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:23.273 03:51:41 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:19:23.273 03:51:41 -- nvmf/common.sh@402 -- # is_hw=yes 00:19:23.273 03:51:41 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:19:23.273 03:51:41 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:19:23.273 03:51:41 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:19:23.273 03:51:41 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:23.273 03:51:41 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:23.273 03:51:41 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:23.273 03:51:41 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:19:23.273 03:51:41 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:23.273 03:51:41 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:23.273 03:51:41 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:19:23.273 03:51:41 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:23.273 03:51:41 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:23.273 03:51:41 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:19:23.273 03:51:41 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:19:23.273 03:51:41 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:19:23.273 03:51:41 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:23.273 03:51:41 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:23.273 03:51:41 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:23.273 03:51:41 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:19:23.273 03:51:41 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:23.273 03:51:41 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:23.273 03:51:41 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:23.273 03:51:41 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:19:23.273 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:23.273 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.134 ms 00:19:23.273 00:19:23.273 --- 10.0.0.2 ping statistics --- 00:19:23.273 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:23.273 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms 00:19:23.273 03:51:41 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:23.273 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:23.273 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.111 ms 00:19:23.273 00:19:23.273 --- 10.0.0.1 ping statistics --- 00:19:23.273 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:23.273 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 00:19:23.273 03:51:41 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:23.273 03:51:41 -- nvmf/common.sh@410 -- # return 0 00:19:23.273 03:51:41 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:19:23.273 03:51:41 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:23.273 03:51:41 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:19:23.273 03:51:41 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:19:23.274 03:51:41 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:23.274 03:51:41 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:19:23.274 03:51:41 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:19:23.274 03:51:41 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:19:23.274 03:51:41 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:19:23.274 03:51:41 -- common/autotest_common.sh@712 -- # xtrace_disable 00:19:23.274 03:51:41 -- common/autotest_common.sh@10 -- # set +x 00:19:23.274 03:51:41 -- nvmf/common.sh@469 -- # nvmfpid=2400886 00:19:23.274 03:51:41 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:19:23.274 03:51:41 -- nvmf/common.sh@470 -- # waitforlisten 2400886 00:19:23.274 03:51:41 -- common/autotest_common.sh@819 -- # '[' -z 2400886 ']' 00:19:23.274 03:51:41 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:23.274 03:51:41 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:23.274 03:51:41 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:23.274 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:23.274 03:51:41 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:23.274 03:51:41 -- common/autotest_common.sh@10 -- # set +x 00:19:23.274 [2024-07-14 03:51:41.948854] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:19:23.274 [2024-07-14 03:51:41.948944] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:23.274 EAL: No free 2048 kB hugepages reported on node 1 00:19:23.274 [2024-07-14 03:51:42.025969] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:23.274 [2024-07-14 03:51:42.119427] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:23.274 [2024-07-14 03:51:42.119591] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:23.274 [2024-07-14 03:51:42.119611] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:23.274 [2024-07-14 03:51:42.119626] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:23.274 [2024-07-14 03:51:42.119710] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:19:23.274 [2024-07-14 03:51:42.119765] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:19:23.274 [2024-07-14 03:51:42.119818] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:19:23.274 [2024-07-14 03:51:42.119820] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:24.212 03:51:42 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:24.212 03:51:42 -- common/autotest_common.sh@852 -- # return 0 00:19:24.212 03:51:42 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:24.212 03:51:42 -- common/autotest_common.sh@718 -- # xtrace_disable 00:19:24.212 03:51:42 -- common/autotest_common.sh@10 -- # set +x 00:19:24.212 03:51:42 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:24.212 03:51:42 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:24.212 03:51:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:24.212 03:51:42 -- common/autotest_common.sh@10 -- # set +x 00:19:24.212 [2024-07-14 03:51:42.965444] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:24.212 03:51:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:24.212 03:51:42 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:24.212 03:51:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:24.212 03:51:42 -- common/autotest_common.sh@10 -- # set +x 00:19:24.212 Malloc0 00:19:24.212 03:51:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:24.212 03:51:42 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:24.212 03:51:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:24.212 03:51:42 -- common/autotest_common.sh@10 -- # set +x 00:19:24.212 03:51:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:24.212 03:51:43 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:24.212 03:51:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:24.212 03:51:43 -- common/autotest_common.sh@10 -- # set +x 00:19:24.212 03:51:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:24.212 03:51:43 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:24.212 03:51:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:24.212 03:51:43 -- common/autotest_common.sh@10 -- # set +x 00:19:24.212 [2024-07-14 03:51:43.016647] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:24.212 03:51:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:24.212 03:51:43 -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:19:24.212 03:51:43 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:19:24.212 03:51:43 -- nvmf/common.sh@520 -- # config=() 00:19:24.212 03:51:43 -- nvmf/common.sh@520 -- # local subsystem config 00:19:24.212 03:51:43 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:19:24.212 03:51:43 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:19:24.212 { 00:19:24.212 "params": { 00:19:24.212 "name": "Nvme$subsystem", 00:19:24.212 "trtype": "$TEST_TRANSPORT", 00:19:24.212 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:24.212 "adrfam": "ipv4", 00:19:24.212 "trsvcid": 
"$NVMF_PORT", 00:19:24.212 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:24.212 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:24.212 "hdgst": ${hdgst:-false}, 00:19:24.212 "ddgst": ${ddgst:-false} 00:19:24.212 }, 00:19:24.212 "method": "bdev_nvme_attach_controller" 00:19:24.212 } 00:19:24.212 EOF 00:19:24.212 )") 00:19:24.212 03:51:43 -- nvmf/common.sh@542 -- # cat 00:19:24.212 03:51:43 -- nvmf/common.sh@544 -- # jq . 00:19:24.212 03:51:43 -- nvmf/common.sh@545 -- # IFS=, 00:19:24.212 03:51:43 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:19:24.212 "params": { 00:19:24.212 "name": "Nvme1", 00:19:24.212 "trtype": "tcp", 00:19:24.212 "traddr": "10.0.0.2", 00:19:24.212 "adrfam": "ipv4", 00:19:24.212 "trsvcid": "4420", 00:19:24.212 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:24.212 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:24.212 "hdgst": false, 00:19:24.212 "ddgst": false 00:19:24.212 }, 00:19:24.212 "method": "bdev_nvme_attach_controller" 00:19:24.212 }' 00:19:24.212 [2024-07-14 03:51:43.057963] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:19:24.212 [2024-07-14 03:51:43.058039] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2401046 ] 00:19:24.212 EAL: No free 2048 kB hugepages reported on node 1 00:19:24.212 [2024-07-14 03:51:43.120243] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:24.472 [2024-07-14 03:51:43.207500] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:24.472 [2024-07-14 03:51:43.207552] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:24.472 [2024-07-14 03:51:43.207554] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:24.732 [2024-07-14 03:51:43.538105] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:19:24.732 [2024-07-14 03:51:43.538166] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:19:24.732 I/O targets: 00:19:24.732 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:19:24.732 00:19:24.732 00:19:24.732 CUnit - A unit testing framework for C - Version 2.1-3 00:19:24.732 http://cunit.sourceforge.net/ 00:19:24.732 00:19:24.732 00:19:24.732 Suite: bdevio tests on: Nvme1n1 00:19:24.732 Test: blockdev write read block ...passed 00:19:24.732 Test: blockdev write zeroes read block ...passed 00:19:24.732 Test: blockdev write zeroes read no split ...passed 00:19:24.989 Test: blockdev write zeroes read split ...passed 00:19:24.989 Test: blockdev write zeroes read split partial ...passed 00:19:24.989 Test: blockdev reset ...[2024-07-14 03:51:43.756781] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:24.989 [2024-07-14 03:51:43.756900] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdcce00 (9): Bad file descriptor 00:19:24.989 [2024-07-14 03:51:43.769098] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:19:24.989 passed 00:19:24.989 Test: blockdev write read 8 blocks ...passed 00:19:24.989 Test: blockdev write read size > 128k ...passed 00:19:24.989 Test: blockdev write read invalid size ...passed 00:19:24.989 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:24.989 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:24.989 Test: blockdev write read max offset ...passed 00:19:25.249 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:25.249 Test: blockdev writev readv 8 blocks ...passed 00:19:25.249 Test: blockdev writev readv 30 x 1block ...passed 00:19:25.249 Test: blockdev writev readv block ...passed 00:19:25.249 Test: blockdev writev readv size > 128k ...passed 00:19:25.249 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:25.249 Test: blockdev comparev and writev ...[2024-07-14 03:51:44.067650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:25.249 [2024-07-14 03:51:44.067688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:25.249 [2024-07-14 03:51:44.067712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:25.249 [2024-07-14 03:51:44.067729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:25.249 [2024-07-14 03:51:44.068158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:25.249 [2024-07-14 03:51:44.068182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:25.249 [2024-07-14 03:51:44.068204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:25.249 [2024-07-14 03:51:44.068220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:25.249 [2024-07-14 03:51:44.068638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:25.249 [2024-07-14 03:51:44.068661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:25.249 [2024-07-14 03:51:44.068682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:25.249 [2024-07-14 03:51:44.068698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:25.249 [2024-07-14 03:51:44.069124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:25.249 [2024-07-14 03:51:44.069148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:25.249 [2024-07-14 03:51:44.069168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:25.249 [2024-07-14 03:51:44.069185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:25.249 passed 00:19:25.249 Test: blockdev nvme passthru rw ...passed 00:19:25.249 Test: blockdev nvme passthru vendor specific ...[2024-07-14 03:51:44.153270] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:25.249 [2024-07-14 03:51:44.153298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:25.249 [2024-07-14 03:51:44.153503] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:25.249 [2024-07-14 03:51:44.153526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:25.249 [2024-07-14 03:51:44.153729] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:25.249 [2024-07-14 03:51:44.153751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:25.249 [2024-07-14 03:51:44.153954] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:25.249 [2024-07-14 03:51:44.153978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:25.249 passed 00:19:25.249 Test: blockdev nvme admin passthru ...passed 00:19:25.508 Test: blockdev copy ...passed 00:19:25.508 00:19:25.508 Run Summary: Type Total Ran Passed Failed Inactive 00:19:25.508 suites 1 1 n/a 0 0 00:19:25.508 tests 23 23 23 0 0 00:19:25.508 asserts 152 152 152 0 n/a 00:19:25.508 00:19:25.508 Elapsed time = 1.340 seconds 00:19:25.508 03:51:44 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:25.508 03:51:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:25.508 03:51:44 -- common/autotest_common.sh@10 -- # set +x 00:19:25.508 03:51:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:25.508 03:51:44 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:19:25.508 03:51:44 -- target/bdevio.sh@30 -- # nvmftestfini 00:19:25.508 03:51:44 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:25.508 03:51:44 -- nvmf/common.sh@116 -- # sync 00:19:25.508 03:51:44 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:19:25.508 03:51:44 -- nvmf/common.sh@119 -- # set +e 00:19:25.508 03:51:44 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:25.508 03:51:44 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:19:25.508 rmmod nvme_tcp 00:19:25.508 rmmod nvme_fabrics 00:19:25.766 rmmod nvme_keyring 00:19:25.766 03:51:44 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:25.766 03:51:44 -- nvmf/common.sh@123 -- # set -e 00:19:25.766 03:51:44 -- nvmf/common.sh@124 -- # return 0 00:19:25.766 03:51:44 -- nvmf/common.sh@477 -- # '[' -n 2400886 ']' 00:19:25.766 03:51:44 -- nvmf/common.sh@478 -- # killprocess 2400886 00:19:25.766 03:51:44 -- common/autotest_common.sh@926 -- # '[' -z 2400886 ']' 00:19:25.766 03:51:44 -- common/autotest_common.sh@930 -- # kill -0 2400886 00:19:25.766 03:51:44 -- common/autotest_common.sh@931 -- # uname 00:19:25.766 03:51:44 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:25.766 03:51:44 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2400886 00:19:25.766 03:51:44 -- 
common/autotest_common.sh@932 -- # process_name=reactor_3 00:19:25.766 03:51:44 -- common/autotest_common.sh@936 -- # '[' reactor_3 = sudo ']' 00:19:25.766 03:51:44 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2400886' 00:19:25.766 killing process with pid 2400886 00:19:25.766 03:51:44 -- common/autotest_common.sh@945 -- # kill 2400886 00:19:25.766 03:51:44 -- common/autotest_common.sh@950 -- # wait 2400886 00:19:26.027 03:51:44 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:26.027 03:51:44 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:19:26.027 03:51:44 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:19:26.027 03:51:44 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:26.027 03:51:44 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:19:26.027 03:51:44 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:26.027 03:51:44 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:26.027 03:51:44 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:27.932 03:51:46 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:19:27.932 00:19:27.932 real 0m7.021s 00:19:27.932 user 0m13.932s 00:19:27.932 sys 0m2.076s 00:19:27.932 03:51:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:27.932 03:51:46 -- common/autotest_common.sh@10 -- # set +x 00:19:27.932 ************************************ 00:19:27.932 END TEST nvmf_bdevio 00:19:27.932 ************************************ 00:19:27.932 03:51:46 -- nvmf/nvmf.sh@57 -- # '[' tcp = tcp ']' 00:19:27.932 03:51:46 -- nvmf/nvmf.sh@58 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:19:27.932 03:51:46 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:19:27.932 03:51:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:27.932 03:51:46 -- common/autotest_common.sh@10 -- # set +x 00:19:27.932 ************************************ 00:19:27.932 START TEST nvmf_bdevio_no_huge 00:19:27.932 ************************************ 00:19:27.932 03:51:46 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:19:27.932 * Looking for test storage... 
00:19:27.932 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:27.932 03:51:46 -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:27.932 03:51:46 -- nvmf/common.sh@7 -- # uname -s 00:19:27.932 03:51:46 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:27.932 03:51:46 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:27.932 03:51:46 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:27.932 03:51:46 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:27.932 03:51:46 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:27.932 03:51:46 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:27.932 03:51:46 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:27.932 03:51:46 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:27.932 03:51:46 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:27.932 03:51:46 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:27.932 03:51:46 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:27.932 03:51:46 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:19:27.932 03:51:46 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:27.932 03:51:46 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:27.932 03:51:46 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:27.932 03:51:46 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:27.932 03:51:46 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:27.932 03:51:46 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:27.932 03:51:46 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:27.932 03:51:46 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:27.932 03:51:46 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:27.932 03:51:46 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:27.932 03:51:46 -- paths/export.sh@5 -- # export PATH 00:19:27.933 03:51:46 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:27.933 03:51:46 -- nvmf/common.sh@46 -- # : 0 00:19:27.933 03:51:46 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:27.933 03:51:46 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:27.933 03:51:46 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:27.933 03:51:46 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:27.933 03:51:46 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:27.933 03:51:46 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:19:27.933 03:51:46 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:27.933 03:51:46 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:27.933 03:51:46 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:27.933 03:51:46 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:27.933 03:51:46 -- target/bdevio.sh@14 -- # nvmftestinit 00:19:27.933 03:51:46 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:19:27.933 03:51:46 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:27.933 03:51:46 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:27.933 03:51:46 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:27.933 03:51:46 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:27.933 03:51:46 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:27.933 03:51:46 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:27.933 03:51:46 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:27.933 03:51:46 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:19:27.933 03:51:46 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:19:27.933 03:51:46 -- nvmf/common.sh@284 -- # xtrace_disable 00:19:27.933 03:51:46 -- common/autotest_common.sh@10 -- # set +x 00:19:30.465 03:51:48 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:30.465 03:51:48 -- nvmf/common.sh@290 -- # pci_devs=() 00:19:30.465 03:51:48 -- nvmf/common.sh@290 -- # local -a pci_devs 00:19:30.465 03:51:48 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:19:30.465 03:51:48 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:19:30.465 03:51:48 -- nvmf/common.sh@292 -- # pci_drivers=() 00:19:30.465 03:51:48 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:19:30.465 03:51:48 -- nvmf/common.sh@294 -- # net_devs=() 00:19:30.465 03:51:48 -- nvmf/common.sh@294 -- # local -ga net_devs 00:19:30.465 03:51:48 -- nvmf/common.sh@295 
-- # e810=() 00:19:30.465 03:51:48 -- nvmf/common.sh@295 -- # local -ga e810 00:19:30.465 03:51:48 -- nvmf/common.sh@296 -- # x722=() 00:19:30.465 03:51:48 -- nvmf/common.sh@296 -- # local -ga x722 00:19:30.465 03:51:48 -- nvmf/common.sh@297 -- # mlx=() 00:19:30.465 03:51:48 -- nvmf/common.sh@297 -- # local -ga mlx 00:19:30.465 03:51:48 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:30.465 03:51:48 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:30.465 03:51:48 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:30.465 03:51:48 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:30.465 03:51:48 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:30.465 03:51:48 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:30.465 03:51:48 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:30.465 03:51:48 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:30.465 03:51:48 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:30.465 03:51:48 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:30.465 03:51:48 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:30.465 03:51:48 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:19:30.465 03:51:48 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:19:30.465 03:51:48 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:19:30.465 03:51:48 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:19:30.465 03:51:48 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:19:30.465 03:51:48 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:19:30.465 03:51:48 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:30.465 03:51:48 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:30.465 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:30.465 03:51:48 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:19:30.465 03:51:48 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:19:30.465 03:51:48 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:30.465 03:51:48 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:30.465 03:51:48 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:19:30.465 03:51:48 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:30.465 03:51:48 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:30.465 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:30.465 03:51:48 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:19:30.465 03:51:48 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:19:30.465 03:51:48 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:30.465 03:51:48 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:30.465 03:51:48 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:19:30.465 03:51:48 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:19:30.465 03:51:48 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:19:30.465 03:51:48 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:19:30.465 03:51:48 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:30.465 03:51:48 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:30.465 03:51:48 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:30.465 03:51:48 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:30.465 03:51:48 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:30.465 Found 
net devices under 0000:0a:00.0: cvl_0_0 00:19:30.466 03:51:48 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:30.466 03:51:48 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:30.466 03:51:48 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:30.466 03:51:48 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:30.466 03:51:48 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:30.466 03:51:48 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:30.466 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:30.466 03:51:48 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:30.466 03:51:48 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:19:30.466 03:51:48 -- nvmf/common.sh@402 -- # is_hw=yes 00:19:30.466 03:51:48 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:19:30.466 03:51:48 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:19:30.466 03:51:48 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:19:30.466 03:51:48 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:30.466 03:51:48 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:30.466 03:51:48 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:30.466 03:51:48 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:19:30.466 03:51:48 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:30.466 03:51:48 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:30.466 03:51:48 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:19:30.466 03:51:48 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:30.466 03:51:48 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:30.466 03:51:48 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:19:30.466 03:51:48 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:19:30.466 03:51:48 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:19:30.466 03:51:48 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:30.466 03:51:49 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:30.466 03:51:49 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:30.466 03:51:49 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:19:30.466 03:51:49 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:30.466 03:51:49 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:30.466 03:51:49 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:30.466 03:51:49 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:19:30.466 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:30.466 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.196 ms 00:19:30.466 00:19:30.466 --- 10.0.0.2 ping statistics --- 00:19:30.466 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:30.466 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms 00:19:30.466 03:51:49 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:30.466 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:30.466 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.130 ms 00:19:30.466 00:19:30.466 --- 10.0.0.1 ping statistics --- 00:19:30.466 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:30.466 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms 00:19:30.466 03:51:49 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:30.466 03:51:49 -- nvmf/common.sh@410 -- # return 0 00:19:30.466 03:51:49 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:19:30.466 03:51:49 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:30.466 03:51:49 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:19:30.466 03:51:49 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:19:30.466 03:51:49 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:30.466 03:51:49 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:19:30.466 03:51:49 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:19:30.466 03:51:49 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:19:30.466 03:51:49 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:19:30.466 03:51:49 -- common/autotest_common.sh@712 -- # xtrace_disable 00:19:30.466 03:51:49 -- common/autotest_common.sh@10 -- # set +x 00:19:30.466 03:51:49 -- nvmf/common.sh@469 -- # nvmfpid=2403131 00:19:30.466 03:51:49 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:19:30.466 03:51:49 -- nvmf/common.sh@470 -- # waitforlisten 2403131 00:19:30.466 03:51:49 -- common/autotest_common.sh@819 -- # '[' -z 2403131 ']' 00:19:30.466 03:51:49 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:30.466 03:51:49 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:30.466 03:51:49 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:30.466 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:30.466 03:51:49 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:30.466 03:51:49 -- common/autotest_common.sh@10 -- # set +x 00:19:30.466 [2024-07-14 03:51:49.160631] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:19:30.466 [2024-07-14 03:51:49.160718] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:19:30.466 [2024-07-14 03:51:49.227545] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:30.466 [2024-07-14 03:51:49.303063] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:30.466 [2024-07-14 03:51:49.303223] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:30.466 [2024-07-14 03:51:49.303241] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:30.466 [2024-07-14 03:51:49.303259] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
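For reference, the nvmf_tcp_init sequence traced above can be condensed into the following sketch: one port of the dual-port NIC (cvl_0_0, detected earlier) is moved into a private network namespace and used as the target side, while its sibling port (cvl_0_1) stays in the default namespace as the initiator side. Device names and addresses are the ones from this run and would differ on other hardware.
  ip netns add cvl_0_0_ns_spdk                                        # namespace for the target side
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move the first port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address (default namespace)
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address (inside the namespace)
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # allow NVMe/TCP (port 4420) in
  ping -c 1 10.0.0.2                                                  # initiator -> target reachability check
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> initiator reachability check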
00:19:30.466 [2024-07-14 03:51:49.303348] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:19:30.466 [2024-07-14 03:51:49.303400] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:19:30.466 [2024-07-14 03:51:49.303449] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:19:30.466 [2024-07-14 03:51:49.303452] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:31.400 03:51:50 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:31.400 03:51:50 -- common/autotest_common.sh@852 -- # return 0 00:19:31.400 03:51:50 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:31.400 03:51:50 -- common/autotest_common.sh@718 -- # xtrace_disable 00:19:31.400 03:51:50 -- common/autotest_common.sh@10 -- # set +x 00:19:31.400 03:51:50 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:31.400 03:51:50 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:31.400 03:51:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:31.400 03:51:50 -- common/autotest_common.sh@10 -- # set +x 00:19:31.400 [2024-07-14 03:51:50.143918] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:31.400 03:51:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:31.400 03:51:50 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:31.400 03:51:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:31.400 03:51:50 -- common/autotest_common.sh@10 -- # set +x 00:19:31.400 Malloc0 00:19:31.400 03:51:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:31.400 03:51:50 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:31.400 03:51:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:31.400 03:51:50 -- common/autotest_common.sh@10 -- # set +x 00:19:31.400 03:51:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:31.400 03:51:50 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:31.400 03:51:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:31.400 03:51:50 -- common/autotest_common.sh@10 -- # set +x 00:19:31.400 03:51:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:31.400 03:51:50 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:31.400 03:51:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:31.400 03:51:50 -- common/autotest_common.sh@10 -- # set +x 00:19:31.400 [2024-07-14 03:51:50.181747] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:31.400 03:51:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:31.400 03:51:50 -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:19:31.400 03:51:50 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:19:31.400 03:51:50 -- nvmf/common.sh@520 -- # config=() 00:19:31.400 03:51:50 -- nvmf/common.sh@520 -- # local subsystem config 00:19:31.400 03:51:50 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:19:31.400 03:51:50 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:19:31.400 { 00:19:31.400 "params": { 00:19:31.400 "name": "Nvme$subsystem", 00:19:31.400 "trtype": "$TEST_TRANSPORT", 00:19:31.400 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:31.400 "adrfam": "ipv4", 00:19:31.400 
"trsvcid": "$NVMF_PORT", 00:19:31.400 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:31.400 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:31.400 "hdgst": ${hdgst:-false}, 00:19:31.400 "ddgst": ${ddgst:-false} 00:19:31.400 }, 00:19:31.400 "method": "bdev_nvme_attach_controller" 00:19:31.400 } 00:19:31.400 EOF 00:19:31.400 )") 00:19:31.400 03:51:50 -- nvmf/common.sh@542 -- # cat 00:19:31.400 03:51:50 -- nvmf/common.sh@544 -- # jq . 00:19:31.400 03:51:50 -- nvmf/common.sh@545 -- # IFS=, 00:19:31.400 03:51:50 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:19:31.400 "params": { 00:19:31.400 "name": "Nvme1", 00:19:31.400 "trtype": "tcp", 00:19:31.400 "traddr": "10.0.0.2", 00:19:31.400 "adrfam": "ipv4", 00:19:31.400 "trsvcid": "4420", 00:19:31.400 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:31.400 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:31.400 "hdgst": false, 00:19:31.400 "ddgst": false 00:19:31.400 }, 00:19:31.400 "method": "bdev_nvme_attach_controller" 00:19:31.400 }' 00:19:31.400 [2024-07-14 03:51:50.226577] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:19:31.400 [2024-07-14 03:51:50.226656] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid2403290 ] 00:19:31.400 [2024-07-14 03:51:50.288352] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:31.659 [2024-07-14 03:51:50.373528] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:31.659 [2024-07-14 03:51:50.373577] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:31.659 [2024-07-14 03:51:50.373580] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:31.917 [2024-07-14 03:51:50.649311] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:19:31.917 [2024-07-14 03:51:50.649372] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:19:31.917 I/O targets: 00:19:31.917 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:19:31.917 00:19:31.917 00:19:31.917 CUnit - A unit testing framework for C - Version 2.1-3 00:19:31.917 http://cunit.sourceforge.net/ 00:19:31.917 00:19:31.917 00:19:31.917 Suite: bdevio tests on: Nvme1n1 00:19:31.917 Test: blockdev write read block ...passed 00:19:31.917 Test: blockdev write zeroes read block ...passed 00:19:31.917 Test: blockdev write zeroes read no split ...passed 00:19:31.917 Test: blockdev write zeroes read split ...passed 00:19:32.174 Test: blockdev write zeroes read split partial ...passed 00:19:32.174 Test: blockdev reset ...[2024-07-14 03:51:50.864277] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:32.174 [2024-07-14 03:51:50.864382] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2092720 (9): Bad file descriptor 00:19:32.174 [2024-07-14 03:51:50.923509] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:19:32.174 passed 00:19:32.174 Test: blockdev write read 8 blocks ...passed 00:19:32.174 Test: blockdev write read size > 128k ...passed 00:19:32.174 Test: blockdev write read invalid size ...passed 00:19:32.174 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:32.174 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:32.174 Test: blockdev write read max offset ...passed 00:19:32.174 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:32.431 Test: blockdev writev readv 8 blocks ...passed 00:19:32.431 Test: blockdev writev readv 30 x 1block ...passed 00:19:32.431 Test: blockdev writev readv block ...passed 00:19:32.431 Test: blockdev writev readv size > 128k ...passed 00:19:32.431 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:32.431 Test: blockdev comparev and writev ...[2024-07-14 03:51:51.182619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:32.431 [2024-07-14 03:51:51.182655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:32.431 [2024-07-14 03:51:51.182678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:32.431 [2024-07-14 03:51:51.182695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:32.431 [2024-07-14 03:51:51.183106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:32.431 [2024-07-14 03:51:51.183132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:32.431 [2024-07-14 03:51:51.183157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:32.431 [2024-07-14 03:51:51.183175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:32.431 [2024-07-14 03:51:51.183554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:32.431 [2024-07-14 03:51:51.183579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:32.431 [2024-07-14 03:51:51.183600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:32.431 [2024-07-14 03:51:51.183622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:32.431 [2024-07-14 03:51:51.184001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:32.431 [2024-07-14 03:51:51.184025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:32.431 [2024-07-14 03:51:51.184046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:32.431 [2024-07-14 03:51:51.184061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:32.431 passed 00:19:32.431 Test: blockdev nvme passthru rw ...passed 00:19:32.431 Test: blockdev nvme passthru vendor specific ...[2024-07-14 03:51:51.267256] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:32.431 [2024-07-14 03:51:51.267284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:32.431 [2024-07-14 03:51:51.267490] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:32.431 [2024-07-14 03:51:51.267513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:32.431 [2024-07-14 03:51:51.267711] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:32.431 [2024-07-14 03:51:51.267735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:32.431 [2024-07-14 03:51:51.267931] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:32.431 [2024-07-14 03:51:51.267955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:32.431 passed 00:19:32.431 Test: blockdev nvme admin passthru ...passed 00:19:32.431 Test: blockdev copy ...passed 00:19:32.431 00:19:32.431 Run Summary: Type Total Ran Passed Failed Inactive 00:19:32.431 suites 1 1 n/a 0 0 00:19:32.431 tests 23 23 23 0 0 00:19:32.431 asserts 152 152 152 0 n/a 00:19:32.431 00:19:32.431 Elapsed time = 1.354 seconds 00:19:32.996 03:51:51 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:32.996 03:51:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:32.996 03:51:51 -- common/autotest_common.sh@10 -- # set +x 00:19:32.996 03:51:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:32.996 03:51:51 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:19:32.996 03:51:51 -- target/bdevio.sh@30 -- # nvmftestfini 00:19:32.996 03:51:51 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:32.996 03:51:51 -- nvmf/common.sh@116 -- # sync 00:19:32.996 03:51:51 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:19:32.996 03:51:51 -- nvmf/common.sh@119 -- # set +e 00:19:32.996 03:51:51 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:32.996 03:51:51 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:19:32.996 rmmod nvme_tcp 00:19:32.996 rmmod nvme_fabrics 00:19:32.996 rmmod nvme_keyring 00:19:32.996 03:51:51 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:32.996 03:51:51 -- nvmf/common.sh@123 -- # set -e 00:19:32.996 03:51:51 -- nvmf/common.sh@124 -- # return 0 00:19:32.996 03:51:51 -- nvmf/common.sh@477 -- # '[' -n 2403131 ']' 00:19:32.996 03:51:51 -- nvmf/common.sh@478 -- # killprocess 2403131 00:19:32.996 03:51:51 -- common/autotest_common.sh@926 -- # '[' -z 2403131 ']' 00:19:32.996 03:51:51 -- common/autotest_common.sh@930 -- # kill -0 2403131 00:19:32.997 03:51:51 -- common/autotest_common.sh@931 -- # uname 00:19:32.997 03:51:51 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:32.997 03:51:51 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2403131 00:19:32.997 03:51:51 -- 
common/autotest_common.sh@932 -- # process_name=reactor_3 00:19:32.997 03:51:51 -- common/autotest_common.sh@936 -- # '[' reactor_3 = sudo ']' 00:19:32.997 03:51:51 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2403131' 00:19:32.997 killing process with pid 2403131 00:19:32.997 03:51:51 -- common/autotest_common.sh@945 -- # kill 2403131 00:19:32.997 03:51:51 -- common/autotest_common.sh@950 -- # wait 2403131 00:19:33.255 03:51:52 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:33.255 03:51:52 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:19:33.255 03:51:52 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:19:33.255 03:51:52 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:33.255 03:51:52 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:19:33.255 03:51:52 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:33.255 03:51:52 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:33.256 03:51:52 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:35.790 03:51:54 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:19:35.790 00:19:35.790 real 0m7.375s 00:19:35.790 user 0m14.375s 00:19:35.790 sys 0m2.606s 00:19:35.790 03:51:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:35.790 03:51:54 -- common/autotest_common.sh@10 -- # set +x 00:19:35.790 ************************************ 00:19:35.790 END TEST nvmf_bdevio_no_huge 00:19:35.790 ************************************ 00:19:35.790 03:51:54 -- nvmf/nvmf.sh@59 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:19:35.790 03:51:54 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:19:35.790 03:51:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:35.790 03:51:54 -- common/autotest_common.sh@10 -- # set +x 00:19:35.790 ************************************ 00:19:35.790 START TEST nvmf_tls 00:19:35.790 ************************************ 00:19:35.790 03:51:54 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:19:35.790 * Looking for test storage... 
00:19:35.790 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:35.790 03:51:54 -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:35.790 03:51:54 -- nvmf/common.sh@7 -- # uname -s 00:19:35.790 03:51:54 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:35.790 03:51:54 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:35.790 03:51:54 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:35.790 03:51:54 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:35.790 03:51:54 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:35.790 03:51:54 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:35.790 03:51:54 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:35.790 03:51:54 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:35.790 03:51:54 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:35.790 03:51:54 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:35.790 03:51:54 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:35.790 03:51:54 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:19:35.790 03:51:54 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:35.790 03:51:54 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:35.790 03:51:54 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:35.790 03:51:54 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:35.790 03:51:54 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:35.790 03:51:54 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:35.790 03:51:54 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:35.790 03:51:54 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:35.790 03:51:54 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:35.790 03:51:54 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:35.790 03:51:54 -- paths/export.sh@5 -- # export PATH 00:19:35.790 03:51:54 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:35.790 03:51:54 -- nvmf/common.sh@46 -- # : 0 00:19:35.790 03:51:54 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:35.790 03:51:54 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:35.790 03:51:54 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:35.790 03:51:54 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:35.790 03:51:54 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:35.790 03:51:54 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:19:35.790 03:51:54 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:35.790 03:51:54 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:35.790 03:51:54 -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:35.790 03:51:54 -- target/tls.sh@71 -- # nvmftestinit 00:19:35.790 03:51:54 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:19:35.790 03:51:54 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:35.790 03:51:54 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:35.790 03:51:54 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:35.790 03:51:54 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:35.790 03:51:54 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:35.790 03:51:54 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:35.790 03:51:54 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:35.790 03:51:54 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:19:35.790 03:51:54 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:19:35.790 03:51:54 -- nvmf/common.sh@284 -- # xtrace_disable 00:19:35.790 03:51:54 -- common/autotest_common.sh@10 -- # set +x 00:19:37.695 03:51:56 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:37.695 03:51:56 -- nvmf/common.sh@290 -- # pci_devs=() 00:19:37.695 03:51:56 -- nvmf/common.sh@290 -- # local -a pci_devs 00:19:37.695 03:51:56 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:19:37.695 03:51:56 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:19:37.695 03:51:56 -- nvmf/common.sh@292 -- # pci_drivers=() 00:19:37.695 03:51:56 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:19:37.695 03:51:56 -- nvmf/common.sh@294 -- # net_devs=() 00:19:37.695 03:51:56 -- nvmf/common.sh@294 -- # local -ga net_devs 00:19:37.695 03:51:56 -- nvmf/common.sh@295 -- # e810=() 00:19:37.695 
03:51:56 -- nvmf/common.sh@295 -- # local -ga e810 00:19:37.695 03:51:56 -- nvmf/common.sh@296 -- # x722=() 00:19:37.695 03:51:56 -- nvmf/common.sh@296 -- # local -ga x722 00:19:37.695 03:51:56 -- nvmf/common.sh@297 -- # mlx=() 00:19:37.695 03:51:56 -- nvmf/common.sh@297 -- # local -ga mlx 00:19:37.695 03:51:56 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:37.695 03:51:56 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:37.695 03:51:56 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:37.695 03:51:56 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:37.695 03:51:56 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:37.695 03:51:56 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:37.695 03:51:56 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:37.695 03:51:56 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:37.695 03:51:56 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:37.695 03:51:56 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:37.695 03:51:56 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:37.695 03:51:56 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:19:37.695 03:51:56 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:19:37.695 03:51:56 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:19:37.695 03:51:56 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:19:37.695 03:51:56 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:19:37.695 03:51:56 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:19:37.695 03:51:56 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:37.695 03:51:56 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:37.695 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:37.695 03:51:56 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:19:37.695 03:51:56 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:19:37.695 03:51:56 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:37.695 03:51:56 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:37.695 03:51:56 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:19:37.695 03:51:56 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:37.695 03:51:56 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:37.695 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:37.695 03:51:56 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:19:37.695 03:51:56 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:19:37.695 03:51:56 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:37.695 03:51:56 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:37.695 03:51:56 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:19:37.695 03:51:56 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:19:37.695 03:51:56 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:19:37.695 03:51:56 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:19:37.695 03:51:56 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:37.695 03:51:56 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:37.695 03:51:56 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:37.695 03:51:56 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:37.695 03:51:56 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:37.695 Found net devices under 
0000:0a:00.0: cvl_0_0 00:19:37.695 03:51:56 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:37.695 03:51:56 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:37.695 03:51:56 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:37.695 03:51:56 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:37.695 03:51:56 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:37.695 03:51:56 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:37.695 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:37.695 03:51:56 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:37.695 03:51:56 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:19:37.695 03:51:56 -- nvmf/common.sh@402 -- # is_hw=yes 00:19:37.695 03:51:56 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:19:37.695 03:51:56 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:19:37.695 03:51:56 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:19:37.695 03:51:56 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:37.695 03:51:56 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:37.695 03:51:56 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:37.695 03:51:56 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:19:37.695 03:51:56 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:37.695 03:51:56 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:37.695 03:51:56 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:19:37.695 03:51:56 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:37.695 03:51:56 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:37.695 03:51:56 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:19:37.695 03:51:56 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:19:37.695 03:51:56 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:19:37.695 03:51:56 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:37.695 03:51:56 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:37.696 03:51:56 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:37.696 03:51:56 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:19:37.696 03:51:56 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:37.696 03:51:56 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:37.696 03:51:56 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:37.696 03:51:56 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:19:37.696 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:37.696 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.174 ms 00:19:37.696 00:19:37.696 --- 10.0.0.2 ping statistics --- 00:19:37.696 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:37.696 rtt min/avg/max/mdev = 0.174/0.174/0.174/0.000 ms 00:19:37.696 03:51:56 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:37.696 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:37.696 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.150 ms 00:19:37.696 00:19:37.696 --- 10.0.0.1 ping statistics --- 00:19:37.696 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:37.696 rtt min/avg/max/mdev = 0.150/0.150/0.150/0.000 ms 00:19:37.696 03:51:56 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:37.696 03:51:56 -- nvmf/common.sh@410 -- # return 0 00:19:37.696 03:51:56 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:19:37.696 03:51:56 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:37.696 03:51:56 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:19:37.696 03:51:56 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:19:37.696 03:51:56 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:37.696 03:51:56 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:19:37.696 03:51:56 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:19:37.696 03:51:56 -- target/tls.sh@72 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:19:37.696 03:51:56 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:19:37.696 03:51:56 -- common/autotest_common.sh@712 -- # xtrace_disable 00:19:37.696 03:51:56 -- common/autotest_common.sh@10 -- # set +x 00:19:37.696 03:51:56 -- nvmf/common.sh@469 -- # nvmfpid=2405424 00:19:37.696 03:51:56 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:19:37.696 03:51:56 -- nvmf/common.sh@470 -- # waitforlisten 2405424 00:19:37.696 03:51:56 -- common/autotest_common.sh@819 -- # '[' -z 2405424 ']' 00:19:37.696 03:51:56 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:37.696 03:51:56 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:37.696 03:51:56 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:37.696 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:37.696 03:51:56 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:37.696 03:51:56 -- common/autotest_common.sh@10 -- # set +x 00:19:37.696 [2024-07-14 03:51:56.466204] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:19:37.696 [2024-07-14 03:51:56.466285] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:37.696 EAL: No free 2048 kB hugepages reported on node 1 00:19:37.696 [2024-07-14 03:51:56.538475] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:37.696 [2024-07-14 03:51:56.626775] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:37.696 [2024-07-14 03:51:56.626960] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:37.696 [2024-07-14 03:51:56.626983] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:37.696 [2024-07-14 03:51:56.626997] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:37.696 [2024-07-14 03:51:56.627027] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:37.954 03:51:56 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:37.954 03:51:56 -- common/autotest_common.sh@852 -- # return 0 00:19:37.954 03:51:56 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:37.954 03:51:56 -- common/autotest_common.sh@718 -- # xtrace_disable 00:19:37.954 03:51:56 -- common/autotest_common.sh@10 -- # set +x 00:19:37.954 03:51:56 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:37.954 03:51:56 -- target/tls.sh@74 -- # '[' tcp '!=' tcp ']' 00:19:37.954 03:51:56 -- target/tls.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:19:38.212 true 00:19:38.212 03:51:57 -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:38.212 03:51:57 -- target/tls.sh@82 -- # jq -r .tls_version 00:19:38.470 03:51:57 -- target/tls.sh@82 -- # version=0 00:19:38.470 03:51:57 -- target/tls.sh@83 -- # [[ 0 != \0 ]] 00:19:38.470 03:51:57 -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:19:38.727 03:51:57 -- target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:38.727 03:51:57 -- target/tls.sh@90 -- # jq -r .tls_version 00:19:38.990 03:51:57 -- target/tls.sh@90 -- # version=13 00:19:38.990 03:51:57 -- target/tls.sh@91 -- # [[ 13 != \1\3 ]] 00:19:38.990 03:51:57 -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:19:39.248 03:51:58 -- target/tls.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:39.248 03:51:58 -- target/tls.sh@98 -- # jq -r .tls_version 00:19:39.507 03:51:58 -- target/tls.sh@98 -- # version=7 00:19:39.507 03:51:58 -- target/tls.sh@99 -- # [[ 7 != \7 ]] 00:19:39.507 03:51:58 -- target/tls.sh@105 -- # jq -r .enable_ktls 00:19:39.507 03:51:58 -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:39.764 03:51:58 -- target/tls.sh@105 -- # ktls=false 00:19:39.764 03:51:58 -- target/tls.sh@106 -- # [[ false != \f\a\l\s\e ]] 00:19:39.764 03:51:58 -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:19:40.021 03:51:58 -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:40.022 03:51:58 -- target/tls.sh@113 -- # jq -r .enable_ktls 00:19:40.279 03:51:58 -- target/tls.sh@113 -- # ktls=true 00:19:40.279 03:51:58 -- target/tls.sh@114 -- # [[ true != \t\r\u\e ]] 00:19:40.279 03:51:58 -- target/tls.sh@120 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:19:40.279 03:51:59 -- target/tls.sh@121 -- # jq -r .enable_ktls 00:19:40.279 03:51:59 -- target/tls.sh@121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:40.538 03:51:59 -- target/tls.sh@121 -- # ktls=false 00:19:40.538 03:51:59 -- target/tls.sh@122 -- # [[ false != \f\a\l\s\e ]] 00:19:40.538 03:51:59 -- target/tls.sh@127 -- # format_interchange_psk 00112233445566778899aabbccddeeff 
00:19:40.538 03:51:59 -- target/tls.sh@49 -- # local key hash crc 00:19:40.538 03:51:59 -- target/tls.sh@51 -- # key=00112233445566778899aabbccddeeff 00:19:40.538 03:51:59 -- target/tls.sh@51 -- # hash=01 00:19:40.538 03:51:59 -- target/tls.sh@52 -- # echo -n 00112233445566778899aabbccddeeff 00:19:40.538 03:51:59 -- target/tls.sh@52 -- # gzip -1 -c 00:19:40.538 03:51:59 -- target/tls.sh@52 -- # tail -c8 00:19:40.538 03:51:59 -- target/tls.sh@52 -- # head -c 4 00:19:40.538 03:51:59 -- target/tls.sh@52 -- # crc='p$H�' 00:19:40.538 03:51:59 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:19:40.538 03:51:59 -- target/tls.sh@54 -- # echo -n '00112233445566778899aabbccddeeffp$H�' 00:19:40.538 03:51:59 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:19:40.538 03:51:59 -- target/tls.sh@127 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:19:40.538 03:51:59 -- target/tls.sh@128 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 00:19:40.538 03:51:59 -- target/tls.sh@49 -- # local key hash crc 00:19:40.538 03:51:59 -- target/tls.sh@51 -- # key=ffeeddccbbaa99887766554433221100 00:19:40.538 03:51:59 -- target/tls.sh@51 -- # hash=01 00:19:40.538 03:51:59 -- target/tls.sh@52 -- # echo -n ffeeddccbbaa99887766554433221100 00:19:40.538 03:51:59 -- target/tls.sh@52 -- # gzip -1 -c 00:19:40.538 03:51:59 -- target/tls.sh@52 -- # tail -c8 00:19:40.538 03:51:59 -- target/tls.sh@52 -- # head -c 4 00:19:40.538 03:51:59 -- target/tls.sh@52 -- # crc=$'_\006o\330' 00:19:40.538 03:51:59 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:19:40.538 03:51:59 -- target/tls.sh@54 -- # echo -n $'ffeeddccbbaa99887766554433221100_\006o\330' 00:19:40.538 03:51:59 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:19:40.538 03:51:59 -- target/tls.sh@128 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:19:40.538 03:51:59 -- target/tls.sh@130 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:19:40.538 03:51:59 -- target/tls.sh@131 -- # key_2_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt 00:19:40.538 03:51:59 -- target/tls.sh@133 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:19:40.538 03:51:59 -- target/tls.sh@134 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:19:40.538 03:51:59 -- target/tls.sh@136 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:19:40.538 03:51:59 -- target/tls.sh@137 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt 00:19:40.538 03:51:59 -- target/tls.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:19:41.106 03:51:59 -- target/tls.sh@140 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:19:41.364 03:52:00 -- target/tls.sh@142 -- # setup_nvmf_tgt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:19:41.364 03:52:00 -- target/tls.sh@58 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:19:41.364 03:52:00 -- target/tls.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:41.622 [2024-07-14 03:52:00.344103] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 
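The format_interchange_psk trace above derives the NVMe TLS PSK interchange string from a raw hex key: a CRC-32 of the key (taken from the gzip footer) is appended and the result is base64-encoded. A minimal sketch of the same derivation, using the test vector from this run (not a real secret):
  key=00112233445566778899aabbccddeeff
  crc=$(echo -n "$key" | gzip -1 -c | tail -c8 | head -c4)    # gzip trailer = CRC-32 + ISIZE; keep the 4 CRC bytes
  echo "NVMeTLSkey-1:01:$(echo -n "${key}${crc}" | base64):"  # '01' is the hash field used by the test
This prints NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: as in the trace. The CRC bytes are binary, so holding them in a shell variable only works because, for these keys, they contain no NUL or trailing-newline bytes; the traced script makes the same assumption.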
00:19:41.622 03:52:00 -- target/tls.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:41.881 03:52:00 -- target/tls.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:41.881 [2024-07-14 03:52:00.805337] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:41.881 [2024-07-14 03:52:00.805559] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:42.140 03:52:00 -- target/tls.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:42.140 malloc0 00:19:42.140 03:52:01 -- target/tls.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:42.397 03:52:01 -- target/tls.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:19:42.654 03:52:01 -- target/tls.sh@146 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:19:42.654 EAL: No free 2048 kB hugepages reported on node 1 00:19:54.861 Initializing NVMe Controllers 00:19:54.861 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:54.861 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:19:54.861 Initialization complete. Launching workers. 
00:19:54.861 ======================================================== 00:19:54.861 Latency(us) 00:19:54.861 Device Information : IOPS MiB/s Average min max 00:19:54.861 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7737.69 30.23 8273.85 1316.74 9603.59 00:19:54.861 ======================================================== 00:19:54.861 Total : 7737.69 30.23 8273.85 1316.74 9603.59 00:19:54.861 00:19:54.861 03:52:11 -- target/tls.sh@152 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:19:54.861 03:52:11 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:54.861 03:52:11 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:54.861 03:52:11 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:54.861 03:52:11 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt' 00:19:54.861 03:52:11 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:54.861 03:52:11 -- target/tls.sh@28 -- # bdevperf_pid=2407335 00:19:54.861 03:52:11 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:54.861 03:52:11 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:54.861 03:52:11 -- target/tls.sh@31 -- # waitforlisten 2407335 /var/tmp/bdevperf.sock 00:19:54.861 03:52:11 -- common/autotest_common.sh@819 -- # '[' -z 2407335 ']' 00:19:54.861 03:52:11 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:54.861 03:52:11 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:54.861 03:52:11 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:54.861 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:54.861 03:52:11 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:54.861 03:52:11 -- common/autotest_common.sh@10 -- # set +x 00:19:54.861 [2024-07-14 03:52:11.679907] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
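For orientation, the RPC traffic traced around the perf run above boils down to the following target-side setup and initiator-side attach. This is a condensed sketch with the absolute workspace paths shortened, not a replacement for the tls.sh helpers:

    # Target side (default socket /var/tmp/spdk.sock): transport, subsystem, TLS listener, namespace, host + PSK.
    ./scripts/rpc.py nvmf_create_transport -t tcp -o
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
    ./scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    ./scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk ./test/nvmf/target/key1.txt

    # Initiator side: bdevperf in RPC-wait mode, attach over TLS with the same PSK, then drive I/O.
    # (The test waits for the bdevperf RPC socket to appear before issuing the attach.)
    ./build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
    ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk ./test/nvmf/target/key1.txt
    ./examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests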
00:19:54.861 [2024-07-14 03:52:11.679985] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2407335 ] 00:19:54.861 EAL: No free 2048 kB hugepages reported on node 1 00:19:54.861 [2024-07-14 03:52:11.737273] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:54.861 [2024-07-14 03:52:11.818899] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:54.861 03:52:12 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:54.861 03:52:12 -- common/autotest_common.sh@852 -- # return 0 00:19:54.861 03:52:12 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:19:54.861 [2024-07-14 03:52:12.900076] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:54.862 TLSTESTn1 00:19:54.862 03:52:12 -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:54.862 Running I/O for 10 seconds... 00:20:04.843 00:20:04.843 Latency(us) 00:20:04.843 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:04.843 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:04.843 Verification LBA range: start 0x0 length 0x2000 00:20:04.843 TLSTESTn1 : 10.04 1953.73 7.63 0.00 0.00 65410.18 8495.41 70293.43 00:20:04.843 =================================================================================================================== 00:20:04.843 Total : 1953.73 7.63 0.00 0.00 65410.18 8495.41 70293.43 00:20:04.843 0 00:20:04.843 03:52:23 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:04.843 03:52:23 -- target/tls.sh@45 -- # killprocess 2407335 00:20:04.843 03:52:23 -- common/autotest_common.sh@926 -- # '[' -z 2407335 ']' 00:20:04.843 03:52:23 -- common/autotest_common.sh@930 -- # kill -0 2407335 00:20:04.843 03:52:23 -- common/autotest_common.sh@931 -- # uname 00:20:04.843 03:52:23 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:04.843 03:52:23 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2407335 00:20:04.843 03:52:23 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:20:04.843 03:52:23 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:20:04.843 03:52:23 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2407335' 00:20:04.843 killing process with pid 2407335 00:20:04.843 03:52:23 -- common/autotest_common.sh@945 -- # kill 2407335 00:20:04.843 Received shutdown signal, test time was about 10.000000 seconds 00:20:04.843 00:20:04.843 Latency(us) 00:20:04.843 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:04.843 =================================================================================================================== 00:20:04.843 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:04.843 03:52:23 -- common/autotest_common.sh@950 -- # wait 2407335 00:20:04.843 03:52:23 -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt 00:20:04.843 03:52:23 -- common/autotest_common.sh@640 -- # local es=0 00:20:04.843 03:52:23 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt 00:20:04.843 03:52:23 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:20:04.843 03:52:23 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:04.843 03:52:23 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:20:04.843 03:52:23 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:04.843 03:52:23 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt 00:20:04.843 03:52:23 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:04.843 03:52:23 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:04.843 03:52:23 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:04.843 03:52:23 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt' 00:20:04.843 03:52:23 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:04.843 03:52:23 -- target/tls.sh@28 -- # bdevperf_pid=2408702 00:20:04.843 03:52:23 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:04.843 03:52:23 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:04.843 03:52:23 -- target/tls.sh@31 -- # waitforlisten 2408702 /var/tmp/bdevperf.sock 00:20:04.843 03:52:23 -- common/autotest_common.sh@819 -- # '[' -z 2408702 ']' 00:20:04.843 03:52:23 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:04.843 03:52:23 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:04.843 03:52:23 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:04.843 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:04.843 03:52:23 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:04.843 03:52:23 -- common/autotest_common.sh@10 -- # set +x 00:20:04.843 [2024-07-14 03:52:23.477751] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:20:04.843 [2024-07-14 03:52:23.477830] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2408702 ] 00:20:04.843 EAL: No free 2048 kB hugepages reported on node 1 00:20:04.843 [2024-07-14 03:52:23.539543] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:04.843 [2024-07-14 03:52:23.621605] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:05.778 03:52:24 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:05.778 03:52:24 -- common/autotest_common.sh@852 -- # return 0 00:20:05.778 03:52:24 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt 00:20:06.037 [2024-07-14 03:52:24.728274] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:06.037 [2024-07-14 03:52:24.740704] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:06.037 [2024-07-14 03:52:24.741417] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2f7f0 (107): Transport endpoint is not connected 00:20:06.038 [2024-07-14 03:52:24.742406] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2f7f0 (9): Bad file descriptor 00:20:06.038 [2024-07-14 03:52:24.743407] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:06.038 [2024-07-14 03:52:24.743426] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:06.038 [2024-07-14 03:52:24.743455] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
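Here key2.txt was never registered for the host1/cnode1 pair on the target, so the TLS handshake cannot complete, the socket is dropped (the errno 107 / ENOTCONN flushes above), and the attach RPC fails, which is exactly what the NOT wrapper is asserting; the JSON-RPC error it expects follows below. Stripped of the autotest_common.sh plumbing, the pattern is just:

    # Simplified sketch of the negative test: only pass if the attach with the
    # mismatched PSK actually fails (the real NOT/valid_exec_arg helpers do this).
    if ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
           -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk ./test/nvmf/target/key2.txt; then
        echo "attach with mismatched PSK unexpectedly succeeded" >&2
        exit 1
    fi

The later variants with host2 and cnode2 fail one step earlier, in the target's PSK lookup ("Could not find PSK for identity"), because only the host1/cnode1 pair was registered with nvmf_subsystem_add_host.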
00:20:06.038 request: 00:20:06.038 { 00:20:06.038 "name": "TLSTEST", 00:20:06.038 "trtype": "tcp", 00:20:06.038 "traddr": "10.0.0.2", 00:20:06.038 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:06.038 "adrfam": "ipv4", 00:20:06.038 "trsvcid": "4420", 00:20:06.038 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:06.038 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt", 00:20:06.038 "method": "bdev_nvme_attach_controller", 00:20:06.038 "req_id": 1 00:20:06.038 } 00:20:06.038 Got JSON-RPC error response 00:20:06.038 response: 00:20:06.038 { 00:20:06.038 "code": -32602, 00:20:06.038 "message": "Invalid parameters" 00:20:06.038 } 00:20:06.038 03:52:24 -- target/tls.sh@36 -- # killprocess 2408702 00:20:06.038 03:52:24 -- common/autotest_common.sh@926 -- # '[' -z 2408702 ']' 00:20:06.038 03:52:24 -- common/autotest_common.sh@930 -- # kill -0 2408702 00:20:06.038 03:52:24 -- common/autotest_common.sh@931 -- # uname 00:20:06.038 03:52:24 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:06.038 03:52:24 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2408702 00:20:06.038 03:52:24 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:20:06.038 03:52:24 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:20:06.038 03:52:24 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2408702' 00:20:06.038 killing process with pid 2408702 00:20:06.038 03:52:24 -- common/autotest_common.sh@945 -- # kill 2408702 00:20:06.038 Received shutdown signal, test time was about 10.000000 seconds 00:20:06.038 00:20:06.038 Latency(us) 00:20:06.038 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:06.038 =================================================================================================================== 00:20:06.038 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:06.038 03:52:24 -- common/autotest_common.sh@950 -- # wait 2408702 00:20:06.298 03:52:24 -- target/tls.sh@37 -- # return 1 00:20:06.298 03:52:24 -- common/autotest_common.sh@643 -- # es=1 00:20:06.298 03:52:24 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:20:06.298 03:52:24 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:20:06.298 03:52:24 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:20:06.298 03:52:24 -- target/tls.sh@158 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:20:06.298 03:52:24 -- common/autotest_common.sh@640 -- # local es=0 00:20:06.298 03:52:24 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:20:06.298 03:52:24 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:20:06.298 03:52:24 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:06.298 03:52:24 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:20:06.298 03:52:24 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:06.298 03:52:25 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:20:06.298 03:52:25 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:06.298 03:52:25 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:06.298 03:52:25 -- target/tls.sh@23 -- 
# hostnqn=nqn.2016-06.io.spdk:host2 00:20:06.298 03:52:25 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt' 00:20:06.298 03:52:25 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:06.298 03:52:25 -- target/tls.sh@28 -- # bdevperf_pid=2408972 00:20:06.298 03:52:25 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:06.298 03:52:25 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:06.298 03:52:25 -- target/tls.sh@31 -- # waitforlisten 2408972 /var/tmp/bdevperf.sock 00:20:06.298 03:52:25 -- common/autotest_common.sh@819 -- # '[' -z 2408972 ']' 00:20:06.298 03:52:25 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:06.298 03:52:25 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:06.298 03:52:25 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:06.298 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:06.298 03:52:25 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:06.298 03:52:25 -- common/autotest_common.sh@10 -- # set +x 00:20:06.298 [2024-07-14 03:52:25.044158] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:20:06.298 [2024-07-14 03:52:25.044237] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2408972 ] 00:20:06.298 EAL: No free 2048 kB hugepages reported on node 1 00:20:06.298 [2024-07-14 03:52:25.102252] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:06.298 [2024-07-14 03:52:25.182113] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:07.239 03:52:25 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:07.239 03:52:25 -- common/autotest_common.sh@852 -- # return 0 00:20:07.240 03:52:25 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:20:07.583 [2024-07-14 03:52:26.203307] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:07.583 [2024-07-14 03:52:26.214427] tcp.c: 866:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:20:07.583 [2024-07-14 03:52:26.214457] posix.c: 583:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:20:07.583 [2024-07-14 03:52:26.214510] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:07.583 [2024-07-14 03:52:26.215513] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b7f0 (107): Transport endpoint is not connected 00:20:07.583 [2024-07-14 03:52:26.216503] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x87b7f0 (9): Bad file descriptor 00:20:07.583 [2024-07-14 03:52:26.217503] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:07.583 [2024-07-14 03:52:26.217523] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:07.583 [2024-07-14 03:52:26.217552] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:07.583 request: 00:20:07.583 { 00:20:07.583 "name": "TLSTEST", 00:20:07.583 "trtype": "tcp", 00:20:07.583 "traddr": "10.0.0.2", 00:20:07.583 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:07.583 "adrfam": "ipv4", 00:20:07.583 "trsvcid": "4420", 00:20:07.583 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:07.583 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt", 00:20:07.583 "method": "bdev_nvme_attach_controller", 00:20:07.583 "req_id": 1 00:20:07.583 } 00:20:07.583 Got JSON-RPC error response 00:20:07.583 response: 00:20:07.583 { 00:20:07.583 "code": -32602, 00:20:07.583 "message": "Invalid parameters" 00:20:07.583 } 00:20:07.583 03:52:26 -- target/tls.sh@36 -- # killprocess 2408972 00:20:07.583 03:52:26 -- common/autotest_common.sh@926 -- # '[' -z 2408972 ']' 00:20:07.583 03:52:26 -- common/autotest_common.sh@930 -- # kill -0 2408972 00:20:07.583 03:52:26 -- common/autotest_common.sh@931 -- # uname 00:20:07.583 03:52:26 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:07.583 03:52:26 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2408972 00:20:07.583 03:52:26 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:20:07.583 03:52:26 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:20:07.583 03:52:26 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2408972' 00:20:07.583 killing process with pid 2408972 00:20:07.583 03:52:26 -- common/autotest_common.sh@945 -- # kill 2408972 00:20:07.583 Received shutdown signal, test time was about 10.000000 seconds 00:20:07.583 00:20:07.583 Latency(us) 00:20:07.583 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:07.583 =================================================================================================================== 00:20:07.583 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:07.583 03:52:26 -- common/autotest_common.sh@950 -- # wait 2408972 00:20:07.583 03:52:26 -- target/tls.sh@37 -- # return 1 00:20:07.583 03:52:26 -- common/autotest_common.sh@643 -- # es=1 00:20:07.583 03:52:26 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:20:07.583 03:52:26 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:20:07.583 03:52:26 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:20:07.583 03:52:26 -- target/tls.sh@161 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:20:07.583 03:52:26 -- common/autotest_common.sh@640 -- # local es=0 00:20:07.583 03:52:26 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:20:07.583 03:52:26 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:20:07.584 03:52:26 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:07.584 03:52:26 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:20:07.584 03:52:26 -- 
common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:07.584 03:52:26 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:20:07.584 03:52:26 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:07.584 03:52:26 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:20:07.584 03:52:26 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:07.584 03:52:26 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt' 00:20:07.584 03:52:26 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:07.584 03:52:26 -- target/tls.sh@28 -- # bdevperf_pid=2409127 00:20:07.584 03:52:26 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:07.584 03:52:26 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:07.584 03:52:26 -- target/tls.sh@31 -- # waitforlisten 2409127 /var/tmp/bdevperf.sock 00:20:07.584 03:52:26 -- common/autotest_common.sh@819 -- # '[' -z 2409127 ']' 00:20:07.584 03:52:26 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:07.584 03:52:26 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:07.584 03:52:26 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:07.584 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:07.584 03:52:26 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:07.584 03:52:26 -- common/autotest_common.sh@10 -- # set +x 00:20:07.849 [2024-07-14 03:52:26.520433] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:20:07.849 [2024-07-14 03:52:26.520522] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2409127 ] 00:20:07.849 EAL: No free 2048 kB hugepages reported on node 1 00:20:07.849 [2024-07-14 03:52:26.581811] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:07.849 [2024-07-14 03:52:26.670130] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:08.781 03:52:27 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:08.781 03:52:27 -- common/autotest_common.sh@852 -- # return 0 00:20:08.781 03:52:27 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:20:08.781 [2024-07-14 03:52:27.670291] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:08.781 [2024-07-14 03:52:27.678206] tcp.c: 866:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:20:08.781 [2024-07-14 03:52:27.678235] posix.c: 583:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:20:08.781 [2024-07-14 03:52:27.678290] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:08.781 [2024-07-14 03:52:27.679154] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x176d7f0 (107): Transport endpoint is not connected 00:20:08.781 [2024-07-14 03:52:27.680157] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x176d7f0 (9): Bad file descriptor 00:20:08.781 [2024-07-14 03:52:27.681170] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:20:08.781 [2024-07-14 03:52:27.681189] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:08.781 [2024-07-14 03:52:27.681202] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:20:08.781 request: 00:20:08.781 { 00:20:08.781 "name": "TLSTEST", 00:20:08.782 "trtype": "tcp", 00:20:08.782 "traddr": "10.0.0.2", 00:20:08.782 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:08.782 "adrfam": "ipv4", 00:20:08.782 "trsvcid": "4420", 00:20:08.782 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:08.782 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt", 00:20:08.782 "method": "bdev_nvme_attach_controller", 00:20:08.782 "req_id": 1 00:20:08.782 } 00:20:08.782 Got JSON-RPC error response 00:20:08.782 response: 00:20:08.782 { 00:20:08.782 "code": -32602, 00:20:08.782 "message": "Invalid parameters" 00:20:08.782 } 00:20:08.782 03:52:27 -- target/tls.sh@36 -- # killprocess 2409127 00:20:08.782 03:52:27 -- common/autotest_common.sh@926 -- # '[' -z 2409127 ']' 00:20:08.782 03:52:27 -- common/autotest_common.sh@930 -- # kill -0 2409127 00:20:08.782 03:52:27 -- common/autotest_common.sh@931 -- # uname 00:20:08.782 03:52:27 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:08.782 03:52:27 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2409127 00:20:09.038 03:52:27 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:20:09.038 03:52:27 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:20:09.038 03:52:27 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2409127' 00:20:09.038 killing process with pid 2409127 00:20:09.038 03:52:27 -- common/autotest_common.sh@945 -- # kill 2409127 00:20:09.038 Received shutdown signal, test time was about 10.000000 seconds 00:20:09.038 00:20:09.038 Latency(us) 00:20:09.038 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:09.038 =================================================================================================================== 00:20:09.038 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:09.038 03:52:27 -- common/autotest_common.sh@950 -- # wait 2409127 00:20:09.038 03:52:27 -- target/tls.sh@37 -- # return 1 00:20:09.038 03:52:27 -- common/autotest_common.sh@643 -- # es=1 00:20:09.038 03:52:27 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:20:09.038 03:52:27 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:20:09.038 03:52:27 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:20:09.038 03:52:27 -- target/tls.sh@164 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:09.038 03:52:27 -- common/autotest_common.sh@640 -- # local es=0 00:20:09.038 03:52:27 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:09.038 03:52:27 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:20:09.038 03:52:27 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:09.038 03:52:27 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:20:09.038 03:52:27 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:09.038 03:52:27 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:09.038 03:52:27 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:09.038 03:52:27 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:09.038 03:52:27 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:09.038 03:52:27 -- target/tls.sh@23 -- # psk= 00:20:09.038 03:52:27 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:09.038 03:52:27 -- target/tls.sh@28 
-- # bdevperf_pid=2409274 00:20:09.038 03:52:27 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:09.038 03:52:27 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:09.038 03:52:27 -- target/tls.sh@31 -- # waitforlisten 2409274 /var/tmp/bdevperf.sock 00:20:09.038 03:52:27 -- common/autotest_common.sh@819 -- # '[' -z 2409274 ']' 00:20:09.038 03:52:27 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:09.038 03:52:27 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:09.038 03:52:27 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:09.038 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:09.038 03:52:27 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:09.038 03:52:27 -- common/autotest_common.sh@10 -- # set +x 00:20:09.297 [2024-07-14 03:52:27.985090] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:20:09.297 [2024-07-14 03:52:27.985182] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2409274 ] 00:20:09.297 EAL: No free 2048 kB hugepages reported on node 1 00:20:09.297 [2024-07-14 03:52:28.045028] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:09.297 [2024-07-14 03:52:28.128090] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:10.231 03:52:28 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:10.231 03:52:28 -- common/autotest_common.sh@852 -- # return 0 00:20:10.231 03:52:28 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:20:10.231 [2024-07-14 03:52:29.152073] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:10.231 [2024-07-14 03:52:29.154007] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18d0ec0 (9): Bad file descriptor 00:20:10.231 [2024-07-14 03:52:29.155005] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:10.231 [2024-07-14 03:52:29.155027] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:10.231 [2024-07-14 03:52:29.155049] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:20:10.231 request: 00:20:10.231 { 00:20:10.231 "name": "TLSTEST", 00:20:10.231 "trtype": "tcp", 00:20:10.231 "traddr": "10.0.0.2", 00:20:10.231 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:10.231 "adrfam": "ipv4", 00:20:10.231 "trsvcid": "4420", 00:20:10.231 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:10.231 "method": "bdev_nvme_attach_controller", 00:20:10.231 "req_id": 1 00:20:10.231 } 00:20:10.231 Got JSON-RPC error response 00:20:10.231 response: 00:20:10.231 { 00:20:10.231 "code": -32602, 00:20:10.231 "message": "Invalid parameters" 00:20:10.231 } 00:20:10.231 03:52:29 -- target/tls.sh@36 -- # killprocess 2409274 00:20:10.231 03:52:29 -- common/autotest_common.sh@926 -- # '[' -z 2409274 ']' 00:20:10.231 03:52:29 -- common/autotest_common.sh@930 -- # kill -0 2409274 00:20:10.489 03:52:29 -- common/autotest_common.sh@931 -- # uname 00:20:10.489 03:52:29 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:10.489 03:52:29 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2409274 00:20:10.489 03:52:29 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:20:10.489 03:52:29 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:20:10.489 03:52:29 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2409274' 00:20:10.489 killing process with pid 2409274 00:20:10.489 03:52:29 -- common/autotest_common.sh@945 -- # kill 2409274 00:20:10.489 Received shutdown signal, test time was about 10.000000 seconds 00:20:10.489 00:20:10.489 Latency(us) 00:20:10.489 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:10.489 =================================================================================================================== 00:20:10.489 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:10.489 03:52:29 -- common/autotest_common.sh@950 -- # wait 2409274 00:20:10.489 03:52:29 -- target/tls.sh@37 -- # return 1 00:20:10.489 03:52:29 -- common/autotest_common.sh@643 -- # es=1 00:20:10.489 03:52:29 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:20:10.489 03:52:29 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:20:10.489 03:52:29 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:20:10.489 03:52:29 -- target/tls.sh@167 -- # killprocess 2405424 00:20:10.489 03:52:29 -- common/autotest_common.sh@926 -- # '[' -z 2405424 ']' 00:20:10.489 03:52:29 -- common/autotest_common.sh@930 -- # kill -0 2405424 00:20:10.489 03:52:29 -- common/autotest_common.sh@931 -- # uname 00:20:10.489 03:52:29 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:10.489 03:52:29 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2405424 00:20:10.748 03:52:29 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:20:10.748 03:52:29 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:20:10.748 03:52:29 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2405424' 00:20:10.748 killing process with pid 2405424 00:20:10.748 03:52:29 -- common/autotest_common.sh@945 -- # kill 2405424 00:20:10.748 03:52:29 -- common/autotest_common.sh@950 -- # wait 2405424 00:20:10.748 03:52:29 -- target/tls.sh@168 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 02 00:20:10.748 03:52:29 -- target/tls.sh@49 -- # local key hash crc 00:20:10.748 03:52:29 -- target/tls.sh@51 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:20:10.748 03:52:29 -- target/tls.sh@51 -- # hash=02 00:20:10.748 03:52:29 -- target/tls.sh@52 -- # echo 
-n 00112233445566778899aabbccddeeff0011223344556677 00:20:10.748 03:52:29 -- target/tls.sh@52 -- # gzip -1 -c 00:20:10.748 03:52:29 -- target/tls.sh@52 -- # tail -c8 00:20:10.748 03:52:29 -- target/tls.sh@52 -- # head -c 4 00:20:11.008 03:52:29 -- target/tls.sh@52 -- # crc='�e�'\''' 00:20:11.008 03:52:29 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:20:11.008 03:52:29 -- target/tls.sh@54 -- # echo -n '00112233445566778899aabbccddeeff0011223344556677�e�'\''' 00:20:11.008 03:52:29 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:20:11.008 03:52:29 -- target/tls.sh@168 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:20:11.008 03:52:29 -- target/tls.sh@169 -- # key_long_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:11.008 03:52:29 -- target/tls.sh@170 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:20:11.008 03:52:29 -- target/tls.sh@171 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:11.008 03:52:29 -- target/tls.sh@172 -- # nvmfappstart -m 0x2 00:20:11.008 03:52:29 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:20:11.008 03:52:29 -- common/autotest_common.sh@712 -- # xtrace_disable 00:20:11.008 03:52:29 -- common/autotest_common.sh@10 -- # set +x 00:20:11.008 03:52:29 -- nvmf/common.sh@469 -- # nvmfpid=2409568 00:20:11.008 03:52:29 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:11.008 03:52:29 -- nvmf/common.sh@470 -- # waitforlisten 2409568 00:20:11.008 03:52:29 -- common/autotest_common.sh@819 -- # '[' -z 2409568 ']' 00:20:11.008 03:52:29 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:11.008 03:52:29 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:11.008 03:52:29 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:11.008 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:11.008 03:52:29 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:11.008 03:52:29 -- common/autotest_common.sh@10 -- # set +x 00:20:11.008 [2024-07-14 03:52:29.750391] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:20:11.008 [2024-07-14 03:52:29.750484] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:11.008 EAL: No free 2048 kB hugepages reported on node 1 00:20:11.008 [2024-07-14 03:52:29.825212] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:11.008 [2024-07-14 03:52:29.918108] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:11.008 [2024-07-14 03:52:29.918277] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:11.008 [2024-07-14 03:52:29.918294] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:11.008 [2024-07-14 03:52:29.918307] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
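This longer key (hash field 02) is written to key_long.txt and locked down to 0600 before use. The chmod 0666 cases later in this log show both bdev_nvme_attach_controller (error -22, "Could not retrieve PSK from file") and nvmf_subsystem_add_host (-32603, "Incorrect permissions for PSK file") refusing a key file that is readable beyond its owner. A purely illustrative guard, assuming GNU stat and a shortened local path:

    keyfile=./test/nvmf/target/key_long.txt   # path shortened; the log uses the absolute workspace path
    # SPDK appears to reject PSK files with group/other permission bits set,
    # so normalize to 0600 before handing the file to the RPCs.
    if [ "$(stat -c %a "$keyfile")" != "600" ]; then
        chmod 0600 "$keyfile"
    fi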
00:20:11.008 [2024-07-14 03:52:29.918335] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:11.944 03:52:30 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:11.944 03:52:30 -- common/autotest_common.sh@852 -- # return 0 00:20:11.944 03:52:30 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:20:11.944 03:52:30 -- common/autotest_common.sh@718 -- # xtrace_disable 00:20:11.944 03:52:30 -- common/autotest_common.sh@10 -- # set +x 00:20:11.944 03:52:30 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:11.944 03:52:30 -- target/tls.sh@174 -- # setup_nvmf_tgt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:11.944 03:52:30 -- target/tls.sh@58 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:11.944 03:52:30 -- target/tls.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:12.202 [2024-07-14 03:52:30.965280] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:12.202 03:52:30 -- target/tls.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:12.459 03:52:31 -- target/tls.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:12.717 [2024-07-14 03:52:31.522803] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:12.717 [2024-07-14 03:52:31.523045] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:12.717 03:52:31 -- target/tls.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:12.974 malloc0 00:20:12.974 03:52:31 -- target/tls.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:13.232 03:52:32 -- target/tls.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:13.491 03:52:32 -- target/tls.sh@176 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:13.492 03:52:32 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:13.492 03:52:32 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:13.492 03:52:32 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:13.492 03:52:32 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt' 00:20:13.492 03:52:32 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:13.492 03:52:32 -- target/tls.sh@28 -- # bdevperf_pid=2409872 00:20:13.492 03:52:32 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:13.492 03:52:32 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:13.492 03:52:32 -- target/tls.sh@31 -- # waitforlisten 2409872 /var/tmp/bdevperf.sock 00:20:13.492 03:52:32 -- common/autotest_common.sh@819 -- # '[' -z 2409872 
']' 00:20:13.492 03:52:32 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:13.492 03:52:32 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:13.492 03:52:32 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:13.492 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:13.492 03:52:32 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:13.492 03:52:32 -- common/autotest_common.sh@10 -- # set +x 00:20:13.492 [2024-07-14 03:52:32.428710] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:20:13.492 [2024-07-14 03:52:32.428781] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2409872 ] 00:20:13.752 EAL: No free 2048 kB hugepages reported on node 1 00:20:13.752 [2024-07-14 03:52:32.488525] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:13.752 [2024-07-14 03:52:32.573699] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:14.688 03:52:33 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:14.688 03:52:33 -- common/autotest_common.sh@852 -- # return 0 00:20:14.688 03:52:33 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:14.947 [2024-07-14 03:52:33.634434] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:14.947 TLSTESTn1 00:20:14.947 03:52:33 -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:14.947 Running I/O for 10 seconds... 
00:20:27.157 00:20:27.157 Latency(us) 00:20:27.157 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:27.157 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:27.157 Verification LBA range: start 0x0 length 0x2000 00:20:27.157 TLSTESTn1 : 10.04 1943.20 7.59 0.00 0.00 65766.70 4830.25 67574.90 00:20:27.157 =================================================================================================================== 00:20:27.157 Total : 1943.20 7.59 0.00 0.00 65766.70 4830.25 67574.90 00:20:27.157 0 00:20:27.157 03:52:43 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:27.157 03:52:43 -- target/tls.sh@45 -- # killprocess 2409872 00:20:27.157 03:52:43 -- common/autotest_common.sh@926 -- # '[' -z 2409872 ']' 00:20:27.157 03:52:43 -- common/autotest_common.sh@930 -- # kill -0 2409872 00:20:27.157 03:52:43 -- common/autotest_common.sh@931 -- # uname 00:20:27.157 03:52:43 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:27.157 03:52:43 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2409872 00:20:27.157 03:52:43 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:20:27.157 03:52:43 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:20:27.157 03:52:43 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2409872' 00:20:27.157 killing process with pid 2409872 00:20:27.157 03:52:43 -- common/autotest_common.sh@945 -- # kill 2409872 00:20:27.157 Received shutdown signal, test time was about 10.000000 seconds 00:20:27.157 00:20:27.157 Latency(us) 00:20:27.157 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:27.157 =================================================================================================================== 00:20:27.157 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:27.157 03:52:43 -- common/autotest_common.sh@950 -- # wait 2409872 00:20:27.157 03:52:44 -- target/tls.sh@179 -- # chmod 0666 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:27.157 03:52:44 -- target/tls.sh@180 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:27.157 03:52:44 -- common/autotest_common.sh@640 -- # local es=0 00:20:27.157 03:52:44 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:27.157 03:52:44 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:20:27.157 03:52:44 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:27.157 03:52:44 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:20:27.157 03:52:44 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:27.157 03:52:44 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:27.157 03:52:44 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:27.157 03:52:44 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:27.157 03:52:44 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:27.157 03:52:44 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt' 00:20:27.157 03:52:44 -- 
target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:27.157 03:52:44 -- target/tls.sh@28 -- # bdevperf_pid=2411354 00:20:27.157 03:52:44 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:27.157 03:52:44 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:27.157 03:52:44 -- target/tls.sh@31 -- # waitforlisten 2411354 /var/tmp/bdevperf.sock 00:20:27.157 03:52:44 -- common/autotest_common.sh@819 -- # '[' -z 2411354 ']' 00:20:27.157 03:52:44 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:27.157 03:52:44 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:27.157 03:52:44 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:27.157 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:27.157 03:52:44 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:27.157 03:52:44 -- common/autotest_common.sh@10 -- # set +x 00:20:27.157 [2024-07-14 03:52:44.220335] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:20:27.157 [2024-07-14 03:52:44.220412] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2411354 ] 00:20:27.157 EAL: No free 2048 kB hugepages reported on node 1 00:20:27.157 [2024-07-14 03:52:44.277754] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:27.157 [2024-07-14 03:52:44.357820] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:27.157 03:52:45 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:27.157 03:52:45 -- common/autotest_common.sh@852 -- # return 0 00:20:27.157 03:52:45 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:27.157 [2024-07-14 03:52:45.369876] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:27.157 [2024-07-14 03:52:45.369938] bdev_nvme_rpc.c: 336:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:20:27.157 request: 00:20:27.157 { 00:20:27.157 "name": "TLSTEST", 00:20:27.157 "trtype": "tcp", 00:20:27.157 "traddr": "10.0.0.2", 00:20:27.157 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:27.157 "adrfam": "ipv4", 00:20:27.157 "trsvcid": "4420", 00:20:27.157 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:27.157 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt", 00:20:27.157 "method": "bdev_nvme_attach_controller", 00:20:27.157 "req_id": 1 00:20:27.157 } 00:20:27.157 Got JSON-RPC error response 00:20:27.157 response: 00:20:27.157 { 00:20:27.157 "code": -22, 00:20:27.157 "message": "Could not retrieve PSK from file: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt" 00:20:27.157 } 00:20:27.157 03:52:45 -- target/tls.sh@36 -- # killprocess 2411354 00:20:27.157 03:52:45 -- common/autotest_common.sh@926 -- # '[' -z 2411354 ']' 00:20:27.157 03:52:45 -- 
common/autotest_common.sh@930 -- # kill -0 2411354 00:20:27.157 03:52:45 -- common/autotest_common.sh@931 -- # uname 00:20:27.157 03:52:45 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:27.157 03:52:45 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2411354 00:20:27.157 03:52:45 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:20:27.157 03:52:45 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:20:27.157 03:52:45 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2411354' 00:20:27.157 killing process with pid 2411354 00:20:27.157 03:52:45 -- common/autotest_common.sh@945 -- # kill 2411354 00:20:27.157 Received shutdown signal, test time was about 10.000000 seconds 00:20:27.157 00:20:27.157 Latency(us) 00:20:27.157 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:27.157 =================================================================================================================== 00:20:27.157 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:27.157 03:52:45 -- common/autotest_common.sh@950 -- # wait 2411354 00:20:27.157 03:52:45 -- target/tls.sh@37 -- # return 1 00:20:27.157 03:52:45 -- common/autotest_common.sh@643 -- # es=1 00:20:27.157 03:52:45 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:20:27.157 03:52:45 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:20:27.157 03:52:45 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:20:27.157 03:52:45 -- target/tls.sh@183 -- # killprocess 2409568 00:20:27.157 03:52:45 -- common/autotest_common.sh@926 -- # '[' -z 2409568 ']' 00:20:27.157 03:52:45 -- common/autotest_common.sh@930 -- # kill -0 2409568 00:20:27.157 03:52:45 -- common/autotest_common.sh@931 -- # uname 00:20:27.158 03:52:45 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:27.158 03:52:45 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2409568 00:20:27.158 03:52:45 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:20:27.158 03:52:45 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:20:27.158 03:52:45 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2409568' 00:20:27.158 killing process with pid 2409568 00:20:27.158 03:52:45 -- common/autotest_common.sh@945 -- # kill 2409568 00:20:27.158 03:52:45 -- common/autotest_common.sh@950 -- # wait 2409568 00:20:27.158 03:52:45 -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:20:27.158 03:52:45 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:20:27.158 03:52:45 -- common/autotest_common.sh@712 -- # xtrace_disable 00:20:27.158 03:52:45 -- common/autotest_common.sh@10 -- # set +x 00:20:27.158 03:52:45 -- nvmf/common.sh@469 -- # nvmfpid=2411516 00:20:27.158 03:52:45 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:27.158 03:52:45 -- nvmf/common.sh@470 -- # waitforlisten 2411516 00:20:27.158 03:52:45 -- common/autotest_common.sh@819 -- # '[' -z 2411516 ']' 00:20:27.158 03:52:45 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:27.158 03:52:45 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:27.158 03:52:45 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:27.158 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:20:27.158 03:52:45 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:27.158 03:52:45 -- common/autotest_common.sh@10 -- # set +x 00:20:27.158 [2024-07-14 03:52:45.958101] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:20:27.158 [2024-07-14 03:52:45.958191] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:27.158 EAL: No free 2048 kB hugepages reported on node 1 00:20:27.158 [2024-07-14 03:52:46.026604] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:27.415 [2024-07-14 03:52:46.118659] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:27.415 [2024-07-14 03:52:46.118813] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:27.415 [2024-07-14 03:52:46.118834] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:27.415 [2024-07-14 03:52:46.118859] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:27.415 [2024-07-14 03:52:46.118899] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:27.983 03:52:46 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:27.983 03:52:46 -- common/autotest_common.sh@852 -- # return 0 00:20:27.983 03:52:46 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:20:27.983 03:52:46 -- common/autotest_common.sh@718 -- # xtrace_disable 00:20:27.983 03:52:46 -- common/autotest_common.sh@10 -- # set +x 00:20:27.983 03:52:46 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:27.983 03:52:46 -- target/tls.sh@186 -- # NOT setup_nvmf_tgt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:27.983 03:52:46 -- common/autotest_common.sh@640 -- # local es=0 00:20:27.983 03:52:46 -- common/autotest_common.sh@642 -- # valid_exec_arg setup_nvmf_tgt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:27.983 03:52:46 -- common/autotest_common.sh@628 -- # local arg=setup_nvmf_tgt 00:20:27.983 03:52:46 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:27.983 03:52:46 -- common/autotest_common.sh@632 -- # type -t setup_nvmf_tgt 00:20:27.983 03:52:46 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:27.983 03:52:46 -- common/autotest_common.sh@643 -- # setup_nvmf_tgt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:27.983 03:52:46 -- target/tls.sh@58 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:27.983 03:52:46 -- target/tls.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:28.240 [2024-07-14 03:52:47.122677] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:28.240 03:52:47 -- target/tls.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:28.498 03:52:47 -- target/tls.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:28.756 [2024-07-14 03:52:47.591965] tcp.c: 
912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:28.756 [2024-07-14 03:52:47.592208] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:28.756 03:52:47 -- target/tls.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:29.014 malloc0 00:20:29.014 03:52:47 -- target/tls.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:29.271 03:52:48 -- target/tls.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:29.532 [2024-07-14 03:52:48.341255] tcp.c:3549:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:20:29.532 [2024-07-14 03:52:48.341296] tcp.c:3618:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:20:29.532 [2024-07-14 03:52:48.341322] subsystem.c: 880:spdk_nvmf_subsystem_add_host: *ERROR*: Unable to add host to TCP transport 00:20:29.532 request: 00:20:29.532 { 00:20:29.532 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:29.532 "host": "nqn.2016-06.io.spdk:host1", 00:20:29.532 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt", 00:20:29.532 "method": "nvmf_subsystem_add_host", 00:20:29.532 "req_id": 1 00:20:29.532 } 00:20:29.532 Got JSON-RPC error response 00:20:29.532 response: 00:20:29.532 { 00:20:29.532 "code": -32603, 00:20:29.532 "message": "Internal error" 00:20:29.532 } 00:20:29.532 03:52:48 -- common/autotest_common.sh@643 -- # es=1 00:20:29.532 03:52:48 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:20:29.532 03:52:48 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:20:29.532 03:52:48 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:20:29.532 03:52:48 -- target/tls.sh@189 -- # killprocess 2411516 00:20:29.532 03:52:48 -- common/autotest_common.sh@926 -- # '[' -z 2411516 ']' 00:20:29.532 03:52:48 -- common/autotest_common.sh@930 -- # kill -0 2411516 00:20:29.532 03:52:48 -- common/autotest_common.sh@931 -- # uname 00:20:29.532 03:52:48 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:29.532 03:52:48 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2411516 00:20:29.532 03:52:48 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:20:29.532 03:52:48 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:20:29.532 03:52:48 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2411516' 00:20:29.532 killing process with pid 2411516 00:20:29.532 03:52:48 -- common/autotest_common.sh@945 -- # kill 2411516 00:20:29.532 03:52:48 -- common/autotest_common.sh@950 -- # wait 2411516 00:20:29.820 03:52:48 -- target/tls.sh@190 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:29.820 03:52:48 -- target/tls.sh@193 -- # nvmfappstart -m 0x2 00:20:29.820 03:52:48 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:20:29.820 03:52:48 -- common/autotest_common.sh@712 -- # xtrace_disable 00:20:29.820 03:52:48 -- common/autotest_common.sh@10 -- # set +x 00:20:29.820 03:52:48 -- nvmf/common.sh@469 -- # nvmfpid=2411949 00:20:29.820 03:52:48 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 
-m 0x2 00:20:29.820 03:52:48 -- nvmf/common.sh@470 -- # waitforlisten 2411949 00:20:29.820 03:52:48 -- common/autotest_common.sh@819 -- # '[' -z 2411949 ']' 00:20:29.820 03:52:48 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:29.820 03:52:48 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:29.820 03:52:48 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:29.820 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:29.820 03:52:48 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:29.821 03:52:48 -- common/autotest_common.sh@10 -- # set +x 00:20:29.821 [2024-07-14 03:52:48.686478] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:20:29.821 [2024-07-14 03:52:48.686559] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:29.821 EAL: No free 2048 kB hugepages reported on node 1 00:20:30.079 [2024-07-14 03:52:48.752414] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:30.079 [2024-07-14 03:52:48.838971] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:30.080 [2024-07-14 03:52:48.839126] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:30.080 [2024-07-14 03:52:48.839150] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:30.080 [2024-07-14 03:52:48.839163] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:30.080 [2024-07-14 03:52:48.839190] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:31.014 03:52:49 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:31.014 03:52:49 -- common/autotest_common.sh@852 -- # return 0 00:20:31.014 03:52:49 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:20:31.015 03:52:49 -- common/autotest_common.sh@718 -- # xtrace_disable 00:20:31.015 03:52:49 -- common/autotest_common.sh@10 -- # set +x 00:20:31.015 03:52:49 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:31.015 03:52:49 -- target/tls.sh@194 -- # setup_nvmf_tgt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:31.015 03:52:49 -- target/tls.sh@58 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:31.015 03:52:49 -- target/tls.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:31.015 [2024-07-14 03:52:49.902173] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:31.015 03:52:49 -- target/tls.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:31.273 03:52:50 -- target/tls.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:31.533 [2024-07-14 03:52:50.427580] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:31.533 [2024-07-14 03:52:50.427799] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:31.533 03:52:50 -- target/tls.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:31.792 malloc0 00:20:31.792 03:52:50 -- target/tls.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:32.050 03:52:50 -- target/tls.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:32.307 03:52:51 -- target/tls.sh@197 -- # bdevperf_pid=2412249 00:20:32.307 03:52:51 -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:32.307 03:52:51 -- target/tls.sh@199 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:32.307 03:52:51 -- target/tls.sh@200 -- # waitforlisten 2412249 /var/tmp/bdevperf.sock 00:20:32.307 03:52:51 -- common/autotest_common.sh@819 -- # '[' -z 2412249 ']' 00:20:32.307 03:52:51 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:32.308 03:52:51 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:32.308 03:52:51 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:32.308 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
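Note: with the key now restricted to 0600 (target/tls.sh@190), the setup_nvmf_tgt flow repeated at tls.sh@194 succeeds. Condensed from the RPC calls traced just above, the target-side TLS setup amounts to the following sequence (rpc.py and key paths shortened; NQNs and address copied from the log — a summary sketch, not the script itself):

    RPC=scripts/rpc.py
    KEY=test/nvmf/target/key_long.txt
    $RPC nvmf_create_transport -t tcp -o                                             # TCP transport init
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k   # -k marks the listener as TLS
    $RPC bdev_malloc_create 32 4096 -b malloc0                                       # backing namespace
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    $RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk "$KEY"   # host admitted with this PSK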
00:20:32.308 03:52:51 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:32.308 03:52:51 -- common/autotest_common.sh@10 -- # set +x 00:20:32.308 [2024-07-14 03:52:51.202061] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:20:32.308 [2024-07-14 03:52:51.202131] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2412249 ] 00:20:32.308 EAL: No free 2048 kB hugepages reported on node 1 00:20:32.567 [2024-07-14 03:52:51.259770] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:32.567 [2024-07-14 03:52:51.349440] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:33.501 03:52:52 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:33.501 03:52:52 -- common/autotest_common.sh@852 -- # return 0 00:20:33.501 03:52:52 -- target/tls.sh@201 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:33.501 [2024-07-14 03:52:52.432591] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:33.759 TLSTESTn1 00:20:33.759 03:52:52 -- target/tls.sh@205 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:20:34.018 03:52:52 -- target/tls.sh@205 -- # tgtconf='{ 00:20:34.018 "subsystems": [ 00:20:34.018 { 00:20:34.018 "subsystem": "iobuf", 00:20:34.018 "config": [ 00:20:34.018 { 00:20:34.018 "method": "iobuf_set_options", 00:20:34.018 "params": { 00:20:34.018 "small_pool_count": 8192, 00:20:34.018 "large_pool_count": 1024, 00:20:34.018 "small_bufsize": 8192, 00:20:34.018 "large_bufsize": 135168 00:20:34.018 } 00:20:34.018 } 00:20:34.018 ] 00:20:34.018 }, 00:20:34.018 { 00:20:34.018 "subsystem": "sock", 00:20:34.018 "config": [ 00:20:34.018 { 00:20:34.018 "method": "sock_impl_set_options", 00:20:34.018 "params": { 00:20:34.018 "impl_name": "posix", 00:20:34.018 "recv_buf_size": 2097152, 00:20:34.018 "send_buf_size": 2097152, 00:20:34.018 "enable_recv_pipe": true, 00:20:34.018 "enable_quickack": false, 00:20:34.018 "enable_placement_id": 0, 00:20:34.018 "enable_zerocopy_send_server": true, 00:20:34.018 "enable_zerocopy_send_client": false, 00:20:34.018 "zerocopy_threshold": 0, 00:20:34.018 "tls_version": 0, 00:20:34.018 "enable_ktls": false 00:20:34.018 } 00:20:34.018 }, 00:20:34.018 { 00:20:34.018 "method": "sock_impl_set_options", 00:20:34.018 "params": { 00:20:34.018 "impl_name": "ssl", 00:20:34.018 "recv_buf_size": 4096, 00:20:34.018 "send_buf_size": 4096, 00:20:34.018 "enable_recv_pipe": true, 00:20:34.018 "enable_quickack": false, 00:20:34.018 "enable_placement_id": 0, 00:20:34.018 "enable_zerocopy_send_server": true, 00:20:34.018 "enable_zerocopy_send_client": false, 00:20:34.018 "zerocopy_threshold": 0, 00:20:34.018 "tls_version": 0, 00:20:34.018 "enable_ktls": false 00:20:34.018 } 00:20:34.018 } 00:20:34.018 ] 00:20:34.018 }, 00:20:34.018 { 00:20:34.018 "subsystem": "vmd", 00:20:34.018 "config": [] 00:20:34.018 }, 00:20:34.018 { 00:20:34.018 "subsystem": "accel", 00:20:34.018 "config": [ 00:20:34.018 { 00:20:34.018 "method": "accel_set_options", 00:20:34.018 "params": { 00:20:34.018 "small_cache_size": 128, 
00:20:34.018 "large_cache_size": 16, 00:20:34.018 "task_count": 2048, 00:20:34.018 "sequence_count": 2048, 00:20:34.018 "buf_count": 2048 00:20:34.018 } 00:20:34.018 } 00:20:34.018 ] 00:20:34.018 }, 00:20:34.018 { 00:20:34.018 "subsystem": "bdev", 00:20:34.018 "config": [ 00:20:34.018 { 00:20:34.018 "method": "bdev_set_options", 00:20:34.018 "params": { 00:20:34.018 "bdev_io_pool_size": 65535, 00:20:34.018 "bdev_io_cache_size": 256, 00:20:34.018 "bdev_auto_examine": true, 00:20:34.018 "iobuf_small_cache_size": 128, 00:20:34.018 "iobuf_large_cache_size": 16 00:20:34.018 } 00:20:34.018 }, 00:20:34.018 { 00:20:34.018 "method": "bdev_raid_set_options", 00:20:34.018 "params": { 00:20:34.018 "process_window_size_kb": 1024 00:20:34.018 } 00:20:34.018 }, 00:20:34.018 { 00:20:34.018 "method": "bdev_iscsi_set_options", 00:20:34.018 "params": { 00:20:34.018 "timeout_sec": 30 00:20:34.018 } 00:20:34.018 }, 00:20:34.018 { 00:20:34.018 "method": "bdev_nvme_set_options", 00:20:34.018 "params": { 00:20:34.018 "action_on_timeout": "none", 00:20:34.018 "timeout_us": 0, 00:20:34.018 "timeout_admin_us": 0, 00:20:34.018 "keep_alive_timeout_ms": 10000, 00:20:34.018 "transport_retry_count": 4, 00:20:34.018 "arbitration_burst": 0, 00:20:34.018 "low_priority_weight": 0, 00:20:34.018 "medium_priority_weight": 0, 00:20:34.018 "high_priority_weight": 0, 00:20:34.018 "nvme_adminq_poll_period_us": 10000, 00:20:34.018 "nvme_ioq_poll_period_us": 0, 00:20:34.018 "io_queue_requests": 0, 00:20:34.018 "delay_cmd_submit": true, 00:20:34.018 "bdev_retry_count": 3, 00:20:34.018 "transport_ack_timeout": 0, 00:20:34.018 "ctrlr_loss_timeout_sec": 0, 00:20:34.018 "reconnect_delay_sec": 0, 00:20:34.018 "fast_io_fail_timeout_sec": 0, 00:20:34.018 "generate_uuids": false, 00:20:34.018 "transport_tos": 0, 00:20:34.018 "io_path_stat": false, 00:20:34.018 "allow_accel_sequence": false 00:20:34.018 } 00:20:34.018 }, 00:20:34.018 { 00:20:34.018 "method": "bdev_nvme_set_hotplug", 00:20:34.018 "params": { 00:20:34.018 "period_us": 100000, 00:20:34.018 "enable": false 00:20:34.018 } 00:20:34.018 }, 00:20:34.018 { 00:20:34.018 "method": "bdev_malloc_create", 00:20:34.018 "params": { 00:20:34.018 "name": "malloc0", 00:20:34.018 "num_blocks": 8192, 00:20:34.018 "block_size": 4096, 00:20:34.018 "physical_block_size": 4096, 00:20:34.018 "uuid": "a1f7e5b2-8ba8-4b9e-9bb6-1077f3859fc0", 00:20:34.018 "optimal_io_boundary": 0 00:20:34.018 } 00:20:34.018 }, 00:20:34.018 { 00:20:34.018 "method": "bdev_wait_for_examine" 00:20:34.018 } 00:20:34.018 ] 00:20:34.018 }, 00:20:34.018 { 00:20:34.018 "subsystem": "nbd", 00:20:34.018 "config": [] 00:20:34.018 }, 00:20:34.018 { 00:20:34.018 "subsystem": "scheduler", 00:20:34.018 "config": [ 00:20:34.018 { 00:20:34.018 "method": "framework_set_scheduler", 00:20:34.018 "params": { 00:20:34.018 "name": "static" 00:20:34.018 } 00:20:34.018 } 00:20:34.018 ] 00:20:34.018 }, 00:20:34.018 { 00:20:34.018 "subsystem": "nvmf", 00:20:34.018 "config": [ 00:20:34.018 { 00:20:34.018 "method": "nvmf_set_config", 00:20:34.018 "params": { 00:20:34.018 "discovery_filter": "match_any", 00:20:34.018 "admin_cmd_passthru": { 00:20:34.018 "identify_ctrlr": false 00:20:34.018 } 00:20:34.018 } 00:20:34.018 }, 00:20:34.018 { 00:20:34.018 "method": "nvmf_set_max_subsystems", 00:20:34.018 "params": { 00:20:34.018 "max_subsystems": 1024 00:20:34.018 } 00:20:34.018 }, 00:20:34.018 { 00:20:34.018 "method": "nvmf_set_crdt", 00:20:34.018 "params": { 00:20:34.018 "crdt1": 0, 00:20:34.018 "crdt2": 0, 00:20:34.018 "crdt3": 0 00:20:34.018 } 
00:20:34.018 }, 00:20:34.018 { 00:20:34.018 "method": "nvmf_create_transport", 00:20:34.018 "params": { 00:20:34.018 "trtype": "TCP", 00:20:34.018 "max_queue_depth": 128, 00:20:34.018 "max_io_qpairs_per_ctrlr": 127, 00:20:34.018 "in_capsule_data_size": 4096, 00:20:34.018 "max_io_size": 131072, 00:20:34.018 "io_unit_size": 131072, 00:20:34.018 "max_aq_depth": 128, 00:20:34.018 "num_shared_buffers": 511, 00:20:34.018 "buf_cache_size": 4294967295, 00:20:34.018 "dif_insert_or_strip": false, 00:20:34.018 "zcopy": false, 00:20:34.018 "c2h_success": false, 00:20:34.018 "sock_priority": 0, 00:20:34.019 "abort_timeout_sec": 1 00:20:34.019 } 00:20:34.019 }, 00:20:34.019 { 00:20:34.019 "method": "nvmf_create_subsystem", 00:20:34.019 "params": { 00:20:34.019 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:34.019 "allow_any_host": false, 00:20:34.019 "serial_number": "SPDK00000000000001", 00:20:34.019 "model_number": "SPDK bdev Controller", 00:20:34.019 "max_namespaces": 10, 00:20:34.019 "min_cntlid": 1, 00:20:34.019 "max_cntlid": 65519, 00:20:34.019 "ana_reporting": false 00:20:34.019 } 00:20:34.019 }, 00:20:34.019 { 00:20:34.019 "method": "nvmf_subsystem_add_host", 00:20:34.019 "params": { 00:20:34.019 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:34.019 "host": "nqn.2016-06.io.spdk:host1", 00:20:34.019 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt" 00:20:34.019 } 00:20:34.019 }, 00:20:34.019 { 00:20:34.019 "method": "nvmf_subsystem_add_ns", 00:20:34.019 "params": { 00:20:34.019 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:34.019 "namespace": { 00:20:34.019 "nsid": 1, 00:20:34.019 "bdev_name": "malloc0", 00:20:34.019 "nguid": "A1F7E5B28BA84B9E9BB61077F3859FC0", 00:20:34.019 "uuid": "a1f7e5b2-8ba8-4b9e-9bb6-1077f3859fc0" 00:20:34.019 } 00:20:34.019 } 00:20:34.019 }, 00:20:34.019 { 00:20:34.019 "method": "nvmf_subsystem_add_listener", 00:20:34.019 "params": { 00:20:34.019 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:34.019 "listen_address": { 00:20:34.019 "trtype": "TCP", 00:20:34.019 "adrfam": "IPv4", 00:20:34.019 "traddr": "10.0.0.2", 00:20:34.019 "trsvcid": "4420" 00:20:34.019 }, 00:20:34.019 "secure_channel": true 00:20:34.019 } 00:20:34.019 } 00:20:34.019 ] 00:20:34.019 } 00:20:34.019 ] 00:20:34.019 }' 00:20:34.019 03:52:52 -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:20:34.279 03:52:53 -- target/tls.sh@206 -- # bdevperfconf='{ 00:20:34.279 "subsystems": [ 00:20:34.279 { 00:20:34.279 "subsystem": "iobuf", 00:20:34.279 "config": [ 00:20:34.279 { 00:20:34.279 "method": "iobuf_set_options", 00:20:34.279 "params": { 00:20:34.279 "small_pool_count": 8192, 00:20:34.279 "large_pool_count": 1024, 00:20:34.279 "small_bufsize": 8192, 00:20:34.279 "large_bufsize": 135168 00:20:34.279 } 00:20:34.279 } 00:20:34.279 ] 00:20:34.279 }, 00:20:34.279 { 00:20:34.279 "subsystem": "sock", 00:20:34.279 "config": [ 00:20:34.279 { 00:20:34.279 "method": "sock_impl_set_options", 00:20:34.279 "params": { 00:20:34.279 "impl_name": "posix", 00:20:34.279 "recv_buf_size": 2097152, 00:20:34.279 "send_buf_size": 2097152, 00:20:34.279 "enable_recv_pipe": true, 00:20:34.279 "enable_quickack": false, 00:20:34.279 "enable_placement_id": 0, 00:20:34.279 "enable_zerocopy_send_server": true, 00:20:34.279 "enable_zerocopy_send_client": false, 00:20:34.279 "zerocopy_threshold": 0, 00:20:34.279 "tls_version": 0, 00:20:34.279 "enable_ktls": false 00:20:34.279 } 00:20:34.279 }, 00:20:34.279 { 00:20:34.279 "method": 
"sock_impl_set_options", 00:20:34.279 "params": { 00:20:34.279 "impl_name": "ssl", 00:20:34.279 "recv_buf_size": 4096, 00:20:34.279 "send_buf_size": 4096, 00:20:34.279 "enable_recv_pipe": true, 00:20:34.279 "enable_quickack": false, 00:20:34.279 "enable_placement_id": 0, 00:20:34.279 "enable_zerocopy_send_server": true, 00:20:34.279 "enable_zerocopy_send_client": false, 00:20:34.279 "zerocopy_threshold": 0, 00:20:34.279 "tls_version": 0, 00:20:34.279 "enable_ktls": false 00:20:34.279 } 00:20:34.279 } 00:20:34.279 ] 00:20:34.279 }, 00:20:34.279 { 00:20:34.279 "subsystem": "vmd", 00:20:34.279 "config": [] 00:20:34.279 }, 00:20:34.279 { 00:20:34.279 "subsystem": "accel", 00:20:34.279 "config": [ 00:20:34.279 { 00:20:34.279 "method": "accel_set_options", 00:20:34.279 "params": { 00:20:34.279 "small_cache_size": 128, 00:20:34.279 "large_cache_size": 16, 00:20:34.279 "task_count": 2048, 00:20:34.279 "sequence_count": 2048, 00:20:34.279 "buf_count": 2048 00:20:34.279 } 00:20:34.279 } 00:20:34.279 ] 00:20:34.279 }, 00:20:34.279 { 00:20:34.279 "subsystem": "bdev", 00:20:34.279 "config": [ 00:20:34.279 { 00:20:34.279 "method": "bdev_set_options", 00:20:34.279 "params": { 00:20:34.279 "bdev_io_pool_size": 65535, 00:20:34.279 "bdev_io_cache_size": 256, 00:20:34.279 "bdev_auto_examine": true, 00:20:34.279 "iobuf_small_cache_size": 128, 00:20:34.279 "iobuf_large_cache_size": 16 00:20:34.279 } 00:20:34.279 }, 00:20:34.279 { 00:20:34.279 "method": "bdev_raid_set_options", 00:20:34.279 "params": { 00:20:34.279 "process_window_size_kb": 1024 00:20:34.279 } 00:20:34.279 }, 00:20:34.279 { 00:20:34.279 "method": "bdev_iscsi_set_options", 00:20:34.279 "params": { 00:20:34.279 "timeout_sec": 30 00:20:34.279 } 00:20:34.279 }, 00:20:34.279 { 00:20:34.279 "method": "bdev_nvme_set_options", 00:20:34.279 "params": { 00:20:34.279 "action_on_timeout": "none", 00:20:34.279 "timeout_us": 0, 00:20:34.279 "timeout_admin_us": 0, 00:20:34.279 "keep_alive_timeout_ms": 10000, 00:20:34.279 "transport_retry_count": 4, 00:20:34.279 "arbitration_burst": 0, 00:20:34.279 "low_priority_weight": 0, 00:20:34.279 "medium_priority_weight": 0, 00:20:34.279 "high_priority_weight": 0, 00:20:34.279 "nvme_adminq_poll_period_us": 10000, 00:20:34.279 "nvme_ioq_poll_period_us": 0, 00:20:34.279 "io_queue_requests": 512, 00:20:34.279 "delay_cmd_submit": true, 00:20:34.279 "bdev_retry_count": 3, 00:20:34.279 "transport_ack_timeout": 0, 00:20:34.279 "ctrlr_loss_timeout_sec": 0, 00:20:34.279 "reconnect_delay_sec": 0, 00:20:34.279 "fast_io_fail_timeout_sec": 0, 00:20:34.279 "generate_uuids": false, 00:20:34.279 "transport_tos": 0, 00:20:34.279 "io_path_stat": false, 00:20:34.279 "allow_accel_sequence": false 00:20:34.279 } 00:20:34.279 }, 00:20:34.279 { 00:20:34.279 "method": "bdev_nvme_attach_controller", 00:20:34.279 "params": { 00:20:34.279 "name": "TLSTEST", 00:20:34.279 "trtype": "TCP", 00:20:34.279 "adrfam": "IPv4", 00:20:34.279 "traddr": "10.0.0.2", 00:20:34.279 "trsvcid": "4420", 00:20:34.279 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:34.279 "prchk_reftag": false, 00:20:34.279 "prchk_guard": false, 00:20:34.279 "ctrlr_loss_timeout_sec": 0, 00:20:34.279 "reconnect_delay_sec": 0, 00:20:34.279 "fast_io_fail_timeout_sec": 0, 00:20:34.279 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt", 00:20:34.279 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:34.279 "hdgst": false, 00:20:34.279 "ddgst": false 00:20:34.279 } 00:20:34.279 }, 00:20:34.279 { 00:20:34.279 "method": "bdev_nvme_set_hotplug", 00:20:34.279 
"params": { 00:20:34.279 "period_us": 100000, 00:20:34.279 "enable": false 00:20:34.279 } 00:20:34.279 }, 00:20:34.279 { 00:20:34.279 "method": "bdev_wait_for_examine" 00:20:34.279 } 00:20:34.279 ] 00:20:34.279 }, 00:20:34.279 { 00:20:34.279 "subsystem": "nbd", 00:20:34.279 "config": [] 00:20:34.279 } 00:20:34.279 ] 00:20:34.279 }' 00:20:34.279 03:52:53 -- target/tls.sh@208 -- # killprocess 2412249 00:20:34.279 03:52:53 -- common/autotest_common.sh@926 -- # '[' -z 2412249 ']' 00:20:34.279 03:52:53 -- common/autotest_common.sh@930 -- # kill -0 2412249 00:20:34.279 03:52:53 -- common/autotest_common.sh@931 -- # uname 00:20:34.279 03:52:53 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:34.279 03:52:53 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2412249 00:20:34.279 03:52:53 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:20:34.279 03:52:53 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:20:34.279 03:52:53 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2412249' 00:20:34.279 killing process with pid 2412249 00:20:34.279 03:52:53 -- common/autotest_common.sh@945 -- # kill 2412249 00:20:34.279 Received shutdown signal, test time was about 10.000000 seconds 00:20:34.279 00:20:34.279 Latency(us) 00:20:34.279 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:34.279 =================================================================================================================== 00:20:34.280 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:34.280 03:52:53 -- common/autotest_common.sh@950 -- # wait 2412249 00:20:34.540 03:52:53 -- target/tls.sh@209 -- # killprocess 2411949 00:20:34.540 03:52:53 -- common/autotest_common.sh@926 -- # '[' -z 2411949 ']' 00:20:34.540 03:52:53 -- common/autotest_common.sh@930 -- # kill -0 2411949 00:20:34.540 03:52:53 -- common/autotest_common.sh@931 -- # uname 00:20:34.540 03:52:53 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:34.540 03:52:53 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2411949 00:20:34.540 03:52:53 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:20:34.540 03:52:53 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:20:34.540 03:52:53 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2411949' 00:20:34.540 killing process with pid 2411949 00:20:34.540 03:52:53 -- common/autotest_common.sh@945 -- # kill 2411949 00:20:34.540 03:52:53 -- common/autotest_common.sh@950 -- # wait 2411949 00:20:34.801 03:52:53 -- target/tls.sh@212 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:20:34.801 03:52:53 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:20:34.801 03:52:53 -- target/tls.sh@212 -- # echo '{ 00:20:34.801 "subsystems": [ 00:20:34.801 { 00:20:34.801 "subsystem": "iobuf", 00:20:34.801 "config": [ 00:20:34.801 { 00:20:34.801 "method": "iobuf_set_options", 00:20:34.801 "params": { 00:20:34.801 "small_pool_count": 8192, 00:20:34.801 "large_pool_count": 1024, 00:20:34.801 "small_bufsize": 8192, 00:20:34.801 "large_bufsize": 135168 00:20:34.801 } 00:20:34.801 } 00:20:34.801 ] 00:20:34.801 }, 00:20:34.801 { 00:20:34.801 "subsystem": "sock", 00:20:34.801 "config": [ 00:20:34.801 { 00:20:34.801 "method": "sock_impl_set_options", 00:20:34.801 "params": { 00:20:34.801 "impl_name": "posix", 00:20:34.801 "recv_buf_size": 2097152, 00:20:34.801 "send_buf_size": 2097152, 00:20:34.801 "enable_recv_pipe": true, 00:20:34.801 "enable_quickack": false, 
00:20:34.801 "enable_placement_id": 0, 00:20:34.801 "enable_zerocopy_send_server": true, 00:20:34.801 "enable_zerocopy_send_client": false, 00:20:34.801 "zerocopy_threshold": 0, 00:20:34.801 "tls_version": 0, 00:20:34.801 "enable_ktls": false 00:20:34.801 } 00:20:34.801 }, 00:20:34.801 { 00:20:34.801 "method": "sock_impl_set_options", 00:20:34.801 "params": { 00:20:34.801 "impl_name": "ssl", 00:20:34.801 "recv_buf_size": 4096, 00:20:34.801 "send_buf_size": 4096, 00:20:34.801 "enable_recv_pipe": true, 00:20:34.801 "enable_quickack": false, 00:20:34.801 "enable_placement_id": 0, 00:20:34.801 "enable_zerocopy_send_server": true, 00:20:34.801 "enable_zerocopy_send_client": false, 00:20:34.801 "zerocopy_threshold": 0, 00:20:34.801 "tls_version": 0, 00:20:34.801 "enable_ktls": false 00:20:34.801 } 00:20:34.801 } 00:20:34.801 ] 00:20:34.801 }, 00:20:34.801 { 00:20:34.801 "subsystem": "vmd", 00:20:34.801 "config": [] 00:20:34.801 }, 00:20:34.801 { 00:20:34.801 "subsystem": "accel", 00:20:34.801 "config": [ 00:20:34.801 { 00:20:34.801 "method": "accel_set_options", 00:20:34.801 "params": { 00:20:34.801 "small_cache_size": 128, 00:20:34.801 "large_cache_size": 16, 00:20:34.801 "task_count": 2048, 00:20:34.801 "sequence_count": 2048, 00:20:34.801 "buf_count": 2048 00:20:34.801 } 00:20:34.801 } 00:20:34.801 ] 00:20:34.801 }, 00:20:34.801 { 00:20:34.801 "subsystem": "bdev", 00:20:34.801 "config": [ 00:20:34.801 { 00:20:34.801 "method": "bdev_set_options", 00:20:34.801 "params": { 00:20:34.801 "bdev_io_pool_size": 65535, 00:20:34.801 "bdev_io_cache_size": 256, 00:20:34.801 "bdev_auto_examine": true, 00:20:34.801 "iobuf_small_cache_size": 128, 00:20:34.801 "iobuf_large_cache_size": 16 00:20:34.801 } 00:20:34.801 }, 00:20:34.801 { 00:20:34.801 "method": "bdev_raid_set_options", 00:20:34.801 "params": { 00:20:34.801 "process_window_size_kb": 1024 00:20:34.801 } 00:20:34.801 }, 00:20:34.801 { 00:20:34.801 "method": "bdev_iscsi_set_options", 00:20:34.801 "params": { 00:20:34.801 "timeout_sec": 30 00:20:34.801 } 00:20:34.801 }, 00:20:34.801 { 00:20:34.801 "method": "bdev_nvme_set_options", 00:20:34.801 "params": { 00:20:34.801 "action_on_timeout": "none", 00:20:34.801 "timeout_us": 0, 00:20:34.801 "timeout_admin_us": 0, 00:20:34.801 "keep_alive_timeout_ms": 10000, 00:20:34.801 "transport_retry_count": 4, 00:20:34.801 "arbitration_burst": 0, 00:20:34.801 "low_priority_weight": 0, 00:20:34.801 "medium_priority_weight": 0, 00:20:34.801 "high_priority_weight": 0, 00:20:34.801 "nvme_adminq_poll_period_us": 10000, 00:20:34.801 "nvme_ioq_poll_period_us": 0, 00:20:34.801 "io_queue_requests": 0, 00:20:34.801 "delay_cmd_submit": true, 00:20:34.801 "bdev_retry_count": 3, 00:20:34.801 "transport_ack_timeout": 0, 00:20:34.801 "ctrlr_loss_timeout_sec": 0, 00:20:34.801 "reconnect_delay_sec": 0, 00:20:34.801 "fast_io_fail_timeout_sec": 0, 00:20:34.801 "generate_uuids": false, 00:20:34.801 "transport_tos": 0, 00:20:34.801 "io_path_stat": false, 00:20:34.801 "allow_accel_sequence": false 00:20:34.801 } 00:20:34.801 }, 00:20:34.801 { 00:20:34.801 "method": "bdev_nvme_set_hotplug", 00:20:34.801 "params": { 00:20:34.801 "period_us": 100000, 00:20:34.801 "enable": false 00:20:34.801 } 00:20:34.801 }, 00:20:34.801 { 00:20:34.801 "method": "bdev_malloc_create", 00:20:34.801 "params": { 00:20:34.801 "name": "malloc0", 00:20:34.801 "num_blocks": 8192, 00:20:34.801 "block_size": 4096, 00:20:34.801 "physical_block_size": 4096, 00:20:34.801 "uuid": "a1f7e5b2-8ba8-4b9e-9bb6-1077f3859fc0", 00:20:34.801 "optimal_io_boundary": 0 00:20:34.801 
} 00:20:34.801 }, 00:20:34.801 { 00:20:34.801 "method": "bdev_wait_for_examine" 00:20:34.801 } 00:20:34.801 ] 00:20:34.801 }, 00:20:34.801 { 00:20:34.801 "subsystem": "nbd", 00:20:34.801 "config": [] 00:20:34.801 }, 00:20:34.801 { 00:20:34.801 "subsystem": "scheduler", 00:20:34.801 "config": [ 00:20:34.801 { 00:20:34.801 "method": "framework_set_scheduler", 00:20:34.801 "params": { 00:20:34.801 "name": "static" 00:20:34.801 } 00:20:34.801 } 00:20:34.801 ] 00:20:34.801 }, 00:20:34.801 { 00:20:34.801 "subsystem": "nvmf", 00:20:34.801 "config": [ 00:20:34.801 { 00:20:34.801 "method": "nvmf_set_config", 00:20:34.801 "params": { 00:20:34.801 "discovery_filter": "match_any", 00:20:34.801 "admin_cmd_passthru": { 00:20:34.801 "identify_ctrlr": false 00:20:34.801 } 00:20:34.801 } 00:20:34.801 }, 00:20:34.801 { 00:20:34.801 "method": "nvmf_set_max_subsystems", 00:20:34.801 "params": { 00:20:34.801 "max_subsystems": 1024 00:20:34.801 } 00:20:34.801 }, 00:20:34.801 { 00:20:34.801 "method": "nvmf_set_crdt", 00:20:34.801 "params": { 00:20:34.801 "crdt1": 0, 00:20:34.801 "crdt2": 0, 00:20:34.801 "crdt3": 0 00:20:34.801 } 00:20:34.801 }, 00:20:34.801 { 00:20:34.801 "method": "nvmf_create_transport", 00:20:34.801 "params": { 00:20:34.801 "trtype": "TCP", 00:20:34.801 "max_queue_depth": 128, 00:20:34.801 "max_io_qpairs_per_ctrlr": 127, 00:20:34.801 "in_capsule_data_size": 4096, 00:20:34.801 "max_io_size": 131072, 00:20:34.801 "io_unit_size": 131072, 00:20:34.801 "max_aq_depth": 128, 00:20:34.801 "num_shared_buffers": 511, 00:20:34.801 "buf_cache_size": 4294967295, 00:20:34.801 "dif_insert_or_strip": false, 00:20:34.801 "zcopy": false, 00:20:34.801 "c2h_success": false, 00:20:34.801 "sock_priority": 0, 00:20:34.801 "abort_timeout_sec": 1 00:20:34.801 } 00:20:34.801 }, 00:20:34.801 { 00:20:34.801 "method": "nvmf_create_subsystem", 00:20:34.801 "params": { 00:20:34.801 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:34.801 "allow_any_host": false, 00:20:34.801 "serial_number": "SPDK00000000000001", 00:20:34.801 "model_number": "SPDK bdev Controller", 00:20:34.801 "max_namespaces": 10, 00:20:34.801 "min_cntlid": 1, 00:20:34.801 "max_cntlid": 65519, 00:20:34.801 "ana_reporting": false 00:20:34.801 } 00:20:34.801 }, 00:20:34.801 { 00:20:34.801 "method": "nvmf_subsystem_add_host", 00:20:34.801 "params": { 00:20:34.801 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:34.801 "host": "nqn.2016-06.io.spdk:host1", 00:20:34.801 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt" 00:20:34.802 } 00:20:34.802 }, 00:20:34.802 { 00:20:34.802 "method": "nvmf_subsystem_add_ns", 00:20:34.802 "params": { 00:20:34.802 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:34.802 "namespace": { 00:20:34.802 "nsid": 1, 00:20:34.802 "bdev_name": "malloc0", 00:20:34.802 "nguid": "A1F7E5B28BA84B9E9BB61077F3859FC0", 00:20:34.802 "uuid": "a1f7e5b2-8ba8-4b9e-9bb6-1077f3859fc0" 00:20:34.802 } 00:20:34.802 } 00:20:34.802 }, 00:20:34.802 { 00:20:34.802 "method": "nvmf_subsystem_add_listener", 00:20:34.802 "params": { 00:20:34.802 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:34.802 "listen_address": { 00:20:34.802 "trtype": "TCP", 00:20:34.802 "adrfam": "IPv4", 00:20:34.802 "traddr": "10.0.0.2", 00:20:34.802 "trsvcid": "4420" 00:20:34.802 }, 00:20:34.802 "secure_channel": true 00:20:34.802 } 00:20:34.802 } 00:20:34.802 ] 00:20:34.802 } 00:20:34.802 ] 00:20:34.802 }' 00:20:34.802 03:52:53 -- common/autotest_common.sh@712 -- # xtrace_disable 00:20:34.802 03:52:53 -- common/autotest_common.sh@10 -- # set +x 00:20:34.802 03:52:53 -- 
nvmf/common.sh@469 -- # nvmfpid=2412548 00:20:34.802 03:52:53 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:20:34.802 03:52:53 -- nvmf/common.sh@470 -- # waitforlisten 2412548 00:20:34.802 03:52:53 -- common/autotest_common.sh@819 -- # '[' -z 2412548 ']' 00:20:34.802 03:52:53 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:34.802 03:52:53 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:34.802 03:52:53 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:34.802 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:34.802 03:52:53 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:34.802 03:52:53 -- common/autotest_common.sh@10 -- # set +x 00:20:34.802 [2024-07-14 03:52:53.697024] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:20:34.802 [2024-07-14 03:52:53.697119] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:34.802 EAL: No free 2048 kB hugepages reported on node 1 00:20:35.060 [2024-07-14 03:52:53.762081] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:35.060 [2024-07-14 03:52:53.848193] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:35.060 [2024-07-14 03:52:53.848361] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:35.060 [2024-07-14 03:52:53.848378] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:35.060 [2024-07-14 03:52:53.848390] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
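Note on "-c /dev/fd/62": for tls.sh@212 the nvmf target is restarted with its JSON configuration fed through an anonymous file descriptor rather than a file on disk, the echo '{...}' above being the producer side. Presumably this is ordinary bash process substitution; a sketch of the same pattern (binary path shortened, assuming the saved config is reused verbatim):

    CONF=$(scripts/rpc.py save_config)                    # capture the running target's config as JSON (as tls.sh@205 does)
    build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c <(echo "$CONF") &
    # <(...) expands to a /dev/fd/NN path, which is why the trace shows "-c /dev/fd/62"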
00:20:35.060 [2024-07-14 03:52:53.848423] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:35.318 [2024-07-14 03:52:54.071200] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:35.318 [2024-07-14 03:52:54.103235] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:35.318 [2024-07-14 03:52:54.103471] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:35.886 03:52:54 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:35.886 03:52:54 -- common/autotest_common.sh@852 -- # return 0 00:20:35.886 03:52:54 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:20:35.887 03:52:54 -- common/autotest_common.sh@718 -- # xtrace_disable 00:20:35.887 03:52:54 -- common/autotest_common.sh@10 -- # set +x 00:20:35.887 03:52:54 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:35.887 03:52:54 -- target/tls.sh@216 -- # bdevperf_pid=2412701 00:20:35.887 03:52:54 -- target/tls.sh@217 -- # waitforlisten 2412701 /var/tmp/bdevperf.sock 00:20:35.887 03:52:54 -- common/autotest_common.sh@819 -- # '[' -z 2412701 ']' 00:20:35.887 03:52:54 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:35.887 03:52:54 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:35.887 03:52:54 -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:20:35.887 03:52:54 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:35.887 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:20:35.887 03:52:54 -- target/tls.sh@213 -- # echo '{ 00:20:35.887 "subsystems": [ 00:20:35.887 { 00:20:35.887 "subsystem": "iobuf", 00:20:35.887 "config": [ 00:20:35.887 { 00:20:35.887 "method": "iobuf_set_options", 00:20:35.887 "params": { 00:20:35.887 "small_pool_count": 8192, 00:20:35.887 "large_pool_count": 1024, 00:20:35.887 "small_bufsize": 8192, 00:20:35.887 "large_bufsize": 135168 00:20:35.887 } 00:20:35.887 } 00:20:35.887 ] 00:20:35.887 }, 00:20:35.887 { 00:20:35.887 "subsystem": "sock", 00:20:35.887 "config": [ 00:20:35.887 { 00:20:35.887 "method": "sock_impl_set_options", 00:20:35.887 "params": { 00:20:35.887 "impl_name": "posix", 00:20:35.887 "recv_buf_size": 2097152, 00:20:35.887 "send_buf_size": 2097152, 00:20:35.887 "enable_recv_pipe": true, 00:20:35.887 "enable_quickack": false, 00:20:35.887 "enable_placement_id": 0, 00:20:35.887 "enable_zerocopy_send_server": true, 00:20:35.887 "enable_zerocopy_send_client": false, 00:20:35.887 "zerocopy_threshold": 0, 00:20:35.887 "tls_version": 0, 00:20:35.887 "enable_ktls": false 00:20:35.887 } 00:20:35.887 }, 00:20:35.887 { 00:20:35.887 "method": "sock_impl_set_options", 00:20:35.887 "params": { 00:20:35.887 "impl_name": "ssl", 00:20:35.887 "recv_buf_size": 4096, 00:20:35.887 "send_buf_size": 4096, 00:20:35.887 "enable_recv_pipe": true, 00:20:35.887 "enable_quickack": false, 00:20:35.887 "enable_placement_id": 0, 00:20:35.887 "enable_zerocopy_send_server": true, 00:20:35.887 "enable_zerocopy_send_client": false, 00:20:35.887 "zerocopy_threshold": 0, 00:20:35.887 "tls_version": 0, 00:20:35.887 "enable_ktls": false 00:20:35.887 } 00:20:35.887 } 00:20:35.887 ] 00:20:35.887 }, 00:20:35.887 { 00:20:35.887 "subsystem": "vmd", 00:20:35.887 "config": [] 00:20:35.887 }, 00:20:35.887 { 00:20:35.887 "subsystem": "accel", 00:20:35.887 "config": [ 00:20:35.887 { 00:20:35.887 "method": "accel_set_options", 00:20:35.887 "params": { 00:20:35.887 "small_cache_size": 128, 00:20:35.887 "large_cache_size": 16, 00:20:35.887 "task_count": 2048, 00:20:35.887 "sequence_count": 2048, 00:20:35.887 "buf_count": 2048 00:20:35.887 } 00:20:35.887 } 00:20:35.887 ] 00:20:35.887 }, 00:20:35.887 { 00:20:35.887 "subsystem": "bdev", 00:20:35.887 "config": [ 00:20:35.887 { 00:20:35.887 "method": "bdev_set_options", 00:20:35.887 "params": { 00:20:35.887 "bdev_io_pool_size": 65535, 00:20:35.887 "bdev_io_cache_size": 256, 00:20:35.887 "bdev_auto_examine": true, 00:20:35.887 "iobuf_small_cache_size": 128, 00:20:35.887 "iobuf_large_cache_size": 16 00:20:35.887 } 00:20:35.887 }, 00:20:35.887 { 00:20:35.887 "method": "bdev_raid_set_options", 00:20:35.887 "params": { 00:20:35.887 "process_window_size_kb": 1024 00:20:35.887 } 00:20:35.887 }, 00:20:35.887 { 00:20:35.887 "method": "bdev_iscsi_set_options", 00:20:35.887 "params": { 00:20:35.887 "timeout_sec": 30 00:20:35.887 } 00:20:35.887 }, 00:20:35.887 { 00:20:35.887 "method": "bdev_nvme_set_options", 00:20:35.887 "params": { 00:20:35.887 "action_on_timeout": "none", 00:20:35.887 "timeout_us": 0, 00:20:35.887 "timeout_admin_us": 0, 00:20:35.887 "keep_alive_timeout_ms": 10000, 00:20:35.887 "transport_retry_count": 4, 00:20:35.887 "arbitration_burst": 0, 00:20:35.887 "low_priority_weight": 0, 00:20:35.887 "medium_priority_weight": 0, 00:20:35.887 "high_priority_weight": 0, 00:20:35.887 "nvme_adminq_poll_period_us": 10000, 00:20:35.887 "nvme_ioq_poll_period_us": 0, 00:20:35.887 "io_queue_requests": 512, 00:20:35.887 "delay_cmd_submit": true, 00:20:35.887 "bdev_retry_count": 3, 00:20:35.887 "transport_ack_timeout": 0, 00:20:35.887 
"ctrlr_loss_timeout_sec": 0, 00:20:35.887 "reconnect_delay_sec": 0, 00:20:35.887 "fast_io_fail_timeout_sec": 0, 00:20:35.887 "generate_uuids": false, 00:20:35.887 "transport_tos": 0, 00:20:35.887 "io_path_stat": false, 00:20:35.887 "allow_accel_sequence": false 00:20:35.887 } 00:20:35.887 }, 00:20:35.887 { 00:20:35.887 "method": "bdev_nvme_attach_controller", 00:20:35.887 "params": { 00:20:35.887 "name": "TLSTEST", 00:20:35.887 "trtype": "TCP", 00:20:35.887 "adrfam": "IPv4", 00:20:35.887 "traddr": "10.0.0.2", 00:20:35.887 "trsvcid": "4420", 00:20:35.887 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:35.887 "prchk_reftag": false, 00:20:35.887 "prchk_guard": false, 00:20:35.887 "ctrlr_loss_timeout_sec": 0, 00:20:35.887 "reconnect_delay_sec": 0, 00:20:35.887 "fast_io_fail_timeout_sec": 0, 00:20:35.887 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt", 00:20:35.887 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:35.887 "hdgst": false, 00:20:35.887 "ddgst": false 00:20:35.887 } 00:20:35.887 }, 00:20:35.887 { 00:20:35.887 "method": "bdev_nvme_set_hotplug", 00:20:35.887 "params": { 00:20:35.887 "period_us": 100000, 00:20:35.887 "enable": false 00:20:35.887 } 00:20:35.887 }, 00:20:35.887 { 00:20:35.887 "method": "bdev_wait_for_examine" 00:20:35.887 } 00:20:35.887 ] 00:20:35.887 }, 00:20:35.887 { 00:20:35.887 "subsystem": "nbd", 00:20:35.887 "config": [] 00:20:35.887 } 00:20:35.887 ] 00:20:35.887 }' 00:20:35.887 03:52:54 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:35.887 03:52:54 -- common/autotest_common.sh@10 -- # set +x 00:20:35.887 [2024-07-14 03:52:54.700839] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:20:35.887 [2024-07-14 03:52:54.700957] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2412701 ] 00:20:35.887 EAL: No free 2048 kB hugepages reported on node 1 00:20:35.887 [2024-07-14 03:52:54.760701] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:36.146 [2024-07-14 03:52:54.843493] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:36.146 [2024-07-14 03:52:55.002004] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:36.714 03:52:55 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:36.714 03:52:55 -- common/autotest_common.sh@852 -- # return 0 00:20:36.714 03:52:55 -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:36.973 Running I/O for 10 seconds... 
00:20:46.957 00:20:46.957 Latency(us) 00:20:46.957 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:46.957 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:46.957 Verification LBA range: start 0x0 length 0x2000 00:20:46.957 TLSTESTn1 : 10.04 1910.33 7.46 0.00 0.00 66903.36 9126.49 69905.07 00:20:46.957 =================================================================================================================== 00:20:46.957 Total : 1910.33 7.46 0.00 0.00 66903.36 9126.49 69905.07 00:20:46.957 0 00:20:46.957 03:53:05 -- target/tls.sh@222 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:46.957 03:53:05 -- target/tls.sh@223 -- # killprocess 2412701 00:20:46.957 03:53:05 -- common/autotest_common.sh@926 -- # '[' -z 2412701 ']' 00:20:46.957 03:53:05 -- common/autotest_common.sh@930 -- # kill -0 2412701 00:20:46.957 03:53:05 -- common/autotest_common.sh@931 -- # uname 00:20:46.957 03:53:05 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:46.957 03:53:05 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2412701 00:20:46.957 03:53:05 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:20:46.957 03:53:05 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:20:46.957 03:53:05 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2412701' 00:20:46.957 killing process with pid 2412701 00:20:46.957 03:53:05 -- common/autotest_common.sh@945 -- # kill 2412701 00:20:46.957 Received shutdown signal, test time was about 10.000000 seconds 00:20:46.957 00:20:46.957 Latency(us) 00:20:46.957 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:46.957 =================================================================================================================== 00:20:46.957 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:46.957 03:53:05 -- common/autotest_common.sh@950 -- # wait 2412701 00:20:47.315 03:53:06 -- target/tls.sh@224 -- # killprocess 2412548 00:20:47.315 03:53:06 -- common/autotest_common.sh@926 -- # '[' -z 2412548 ']' 00:20:47.315 03:53:06 -- common/autotest_common.sh@930 -- # kill -0 2412548 00:20:47.315 03:53:06 -- common/autotest_common.sh@931 -- # uname 00:20:47.315 03:53:06 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:47.315 03:53:06 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2412548 00:20:47.315 03:53:06 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:20:47.315 03:53:06 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:20:47.315 03:53:06 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2412548' 00:20:47.315 killing process with pid 2412548 00:20:47.315 03:53:06 -- common/autotest_common.sh@945 -- # kill 2412548 00:20:47.315 03:53:06 -- common/autotest_common.sh@950 -- # wait 2412548 00:20:47.573 03:53:06 -- target/tls.sh@226 -- # trap - SIGINT SIGTERM EXIT 00:20:47.573 03:53:06 -- target/tls.sh@227 -- # cleanup 00:20:47.573 03:53:06 -- target/tls.sh@15 -- # process_shm --id 0 00:20:47.573 03:53:06 -- common/autotest_common.sh@796 -- # type=--id 00:20:47.573 03:53:06 -- common/autotest_common.sh@797 -- # id=0 00:20:47.573 03:53:06 -- common/autotest_common.sh@798 -- # '[' --id = --pid ']' 00:20:47.573 03:53:06 -- common/autotest_common.sh@802 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:20:47.573 03:53:06 -- common/autotest_common.sh@802 -- # shm_files=nvmf_trace.0 00:20:47.573 03:53:06 -- common/autotest_common.sh@804 -- # [[ 
-z nvmf_trace.0 ]] 00:20:47.573 03:53:06 -- common/autotest_common.sh@808 -- # for n in $shm_files 00:20:47.573 03:53:06 -- common/autotest_common.sh@809 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:20:47.573 nvmf_trace.0 00:20:47.573 03:53:06 -- common/autotest_common.sh@811 -- # return 0 00:20:47.573 03:53:06 -- target/tls.sh@16 -- # killprocess 2412701 00:20:47.573 03:53:06 -- common/autotest_common.sh@926 -- # '[' -z 2412701 ']' 00:20:47.573 03:53:06 -- common/autotest_common.sh@930 -- # kill -0 2412701 00:20:47.573 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (2412701) - No such process 00:20:47.573 03:53:06 -- common/autotest_common.sh@953 -- # echo 'Process with pid 2412701 is not found' 00:20:47.573 Process with pid 2412701 is not found 00:20:47.573 03:53:06 -- target/tls.sh@17 -- # nvmftestfini 00:20:47.573 03:53:06 -- nvmf/common.sh@476 -- # nvmfcleanup 00:20:47.573 03:53:06 -- nvmf/common.sh@116 -- # sync 00:20:47.573 03:53:06 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:20:47.573 03:53:06 -- nvmf/common.sh@119 -- # set +e 00:20:47.573 03:53:06 -- nvmf/common.sh@120 -- # for i in {1..20} 00:20:47.573 03:53:06 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:20:47.573 rmmod nvme_tcp 00:20:47.573 rmmod nvme_fabrics 00:20:47.573 rmmod nvme_keyring 00:20:47.573 03:53:06 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:20:47.573 03:53:06 -- nvmf/common.sh@123 -- # set -e 00:20:47.573 03:53:06 -- nvmf/common.sh@124 -- # return 0 00:20:47.573 03:53:06 -- nvmf/common.sh@477 -- # '[' -n 2412548 ']' 00:20:47.573 03:53:06 -- nvmf/common.sh@478 -- # killprocess 2412548 00:20:47.573 03:53:06 -- common/autotest_common.sh@926 -- # '[' -z 2412548 ']' 00:20:47.573 03:53:06 -- common/autotest_common.sh@930 -- # kill -0 2412548 00:20:47.573 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (2412548) - No such process 00:20:47.573 03:53:06 -- common/autotest_common.sh@953 -- # echo 'Process with pid 2412548 is not found' 00:20:47.573 Process with pid 2412548 is not found 00:20:47.573 03:53:06 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:20:47.573 03:53:06 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:20:47.573 03:53:06 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:20:47.573 03:53:06 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:47.573 03:53:06 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:20:47.573 03:53:06 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:47.573 03:53:06 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:47.573 03:53:06 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:50.112 03:53:08 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:20:50.112 03:53:08 -- target/tls.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:50.112 00:20:50.112 real 1m14.321s 00:20:50.112 user 1m57.135s 00:20:50.112 sys 0m26.430s 00:20:50.112 03:53:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:50.112 03:53:08 -- common/autotest_common.sh@10 -- # set +x 00:20:50.112 ************************************ 00:20:50.112 END TEST nvmf_tls 00:20:50.112 ************************************ 
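Note on the cleanup traced above: both killprocess calls report "No such process" because bdevperf and the target have already exited, so cleanup reduces to archiving the trace buffer and tearing down the transport. Condensed (output path shortened to $output_dir; commands as logged):

    tar -C /dev/shm/ -cvzf "$output_dir/nvmf_trace.0_shm.tar.gz" nvmf_trace.0     # preserve nvmf_trace.0 for offline analysis
    modprobe -v -r nvme-tcp                                                        # the rmmod nvme_tcp/nvme_fabrics/nvme_keyring lines above are its verbose output
    modprobe -v -r nvme-fabrics
    rm -f test/nvmf/target/key1.txt test/nvmf/target/key2.txt test/nvmf/target/key_long.txt   # drop the generated PSK files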
00:20:50.112 03:53:08 -- nvmf/nvmf.sh@60 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:20:50.112 03:53:08 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:20:50.112 03:53:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:20:50.112 03:53:08 -- common/autotest_common.sh@10 -- # set +x 00:20:50.112 ************************************ 00:20:50.112 START TEST nvmf_fips 00:20:50.112 ************************************ 00:20:50.112 03:53:08 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:20:50.112 * Looking for test storage... 00:20:50.112 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:20:50.112 03:53:08 -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:50.112 03:53:08 -- nvmf/common.sh@7 -- # uname -s 00:20:50.112 03:53:08 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:50.112 03:53:08 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:50.112 03:53:08 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:50.112 03:53:08 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:50.112 03:53:08 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:50.112 03:53:08 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:50.112 03:53:08 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:50.112 03:53:08 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:50.112 03:53:08 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:50.112 03:53:08 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:50.112 03:53:08 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:50.112 03:53:08 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:20:50.112 03:53:08 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:50.112 03:53:08 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:50.112 03:53:08 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:50.112 03:53:08 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:50.112 03:53:08 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:50.112 03:53:08 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:50.112 03:53:08 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:50.112 03:53:08 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:50.112 03:53:08 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:50.112 03:53:08 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:50.112 03:53:08 -- paths/export.sh@5 -- # export PATH 00:20:50.112 03:53:08 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:50.112 03:53:08 -- nvmf/common.sh@46 -- # : 0 00:20:50.112 03:53:08 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:20:50.112 03:53:08 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:20:50.112 03:53:08 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:20:50.112 03:53:08 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:50.112 03:53:08 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:50.112 03:53:08 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:20:50.112 03:53:08 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:20:50.112 03:53:08 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:20:50.112 03:53:08 -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:50.113 03:53:08 -- fips/fips.sh@89 -- # check_openssl_version 00:20:50.113 03:53:08 -- fips/fips.sh@83 -- # local target=3.0.0 00:20:50.113 03:53:08 -- fips/fips.sh@85 -- # openssl version 00:20:50.113 03:53:08 -- fips/fips.sh@85 -- # awk '{print $2}' 00:20:50.113 03:53:08 -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:20:50.113 03:53:08 -- scripts/common.sh@375 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:20:50.113 03:53:08 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:20:50.113 03:53:08 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:20:50.113 03:53:08 -- scripts/common.sh@335 -- # IFS=.-: 00:20:50.113 03:53:08 -- scripts/common.sh@335 -- # read -ra ver1 00:20:50.113 03:53:08 -- scripts/common.sh@336 -- # IFS=.-: 00:20:50.113 03:53:08 -- scripts/common.sh@336 -- # read -ra ver2 00:20:50.113 03:53:08 -- scripts/common.sh@337 -- # local 'op=>=' 00:20:50.113 03:53:08 -- scripts/common.sh@339 -- # ver1_l=3 00:20:50.113 03:53:08 -- scripts/common.sh@340 -- # ver2_l=3 00:20:50.113 03:53:08 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 
00:20:50.113 03:53:08 -- scripts/common.sh@343 -- # case "$op" in 00:20:50.113 03:53:08 -- scripts/common.sh@347 -- # : 1 00:20:50.113 03:53:08 -- scripts/common.sh@363 -- # (( v = 0 )) 00:20:50.113 03:53:08 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:50.113 03:53:08 -- scripts/common.sh@364 -- # decimal 3 00:20:50.113 03:53:08 -- scripts/common.sh@352 -- # local d=3 00:20:50.113 03:53:08 -- scripts/common.sh@353 -- # [[ 3 =~ ^[0-9]+$ ]] 00:20:50.113 03:53:08 -- scripts/common.sh@354 -- # echo 3 00:20:50.113 03:53:08 -- scripts/common.sh@364 -- # ver1[v]=3 00:20:50.113 03:53:08 -- scripts/common.sh@365 -- # decimal 3 00:20:50.113 03:53:08 -- scripts/common.sh@352 -- # local d=3 00:20:50.113 03:53:08 -- scripts/common.sh@353 -- # [[ 3 =~ ^[0-9]+$ ]] 00:20:50.113 03:53:08 -- scripts/common.sh@354 -- # echo 3 00:20:50.113 03:53:08 -- scripts/common.sh@365 -- # ver2[v]=3 00:20:50.113 03:53:08 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:20:50.113 03:53:08 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:20:50.113 03:53:08 -- scripts/common.sh@363 -- # (( v++ )) 00:20:50.113 03:53:08 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:50.113 03:53:08 -- scripts/common.sh@364 -- # decimal 0 00:20:50.113 03:53:08 -- scripts/common.sh@352 -- # local d=0 00:20:50.113 03:53:08 -- scripts/common.sh@353 -- # [[ 0 =~ ^[0-9]+$ ]] 00:20:50.113 03:53:08 -- scripts/common.sh@354 -- # echo 0 00:20:50.113 03:53:08 -- scripts/common.sh@364 -- # ver1[v]=0 00:20:50.113 03:53:08 -- scripts/common.sh@365 -- # decimal 0 00:20:50.113 03:53:08 -- scripts/common.sh@352 -- # local d=0 00:20:50.113 03:53:08 -- scripts/common.sh@353 -- # [[ 0 =~ ^[0-9]+$ ]] 00:20:50.113 03:53:08 -- scripts/common.sh@354 -- # echo 0 00:20:50.113 03:53:08 -- scripts/common.sh@365 -- # ver2[v]=0 00:20:50.113 03:53:08 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:20:50.113 03:53:08 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:20:50.113 03:53:08 -- scripts/common.sh@363 -- # (( v++ )) 00:20:50.113 03:53:08 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:50.113 03:53:08 -- scripts/common.sh@364 -- # decimal 9 00:20:50.113 03:53:08 -- scripts/common.sh@352 -- # local d=9 00:20:50.113 03:53:08 -- scripts/common.sh@353 -- # [[ 9 =~ ^[0-9]+$ ]] 00:20:50.113 03:53:08 -- scripts/common.sh@354 -- # echo 9 00:20:50.113 03:53:08 -- scripts/common.sh@364 -- # ver1[v]=9 00:20:50.113 03:53:08 -- scripts/common.sh@365 -- # decimal 0 00:20:50.113 03:53:08 -- scripts/common.sh@352 -- # local d=0 00:20:50.113 03:53:08 -- scripts/common.sh@353 -- # [[ 0 =~ ^[0-9]+$ ]] 00:20:50.113 03:53:08 -- scripts/common.sh@354 -- # echo 0 00:20:50.113 03:53:08 -- scripts/common.sh@365 -- # ver2[v]=0 00:20:50.113 03:53:08 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:20:50.113 03:53:08 -- scripts/common.sh@366 -- # return 0 00:20:50.113 03:53:08 -- fips/fips.sh@95 -- # openssl info -modulesdir 00:20:50.113 03:53:08 -- fips/fips.sh@95 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:20:50.113 03:53:08 -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:20:50.113 03:53:08 -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:20:50.113 03:53:08 -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:20:50.113 03:53:08 -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:20:50.113 03:53:08 -- fips/fips.sh@104 -- # callback=build_openssl_config 00:20:50.113 03:53:08 -- fips/fips.sh@105 -- # export OPENSSL_FORCE_FIPS_MODE=build_openssl_config 00:20:50.113 03:53:08 -- fips/fips.sh@105 -- # OPENSSL_FORCE_FIPS_MODE=build_openssl_config 00:20:50.113 03:53:08 -- fips/fips.sh@114 -- # build_openssl_config 00:20:50.113 03:53:08 -- fips/fips.sh@37 -- # cat 00:20:50.113 03:53:08 -- fips/fips.sh@57 -- # [[ ! -t 0 ]] 00:20:50.113 03:53:08 -- fips/fips.sh@58 -- # cat - 00:20:50.113 03:53:08 -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:20:50.113 03:53:08 -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:20:50.113 03:53:08 -- fips/fips.sh@117 -- # mapfile -t providers 00:20:50.113 03:53:08 -- fips/fips.sh@117 -- # OPENSSL_CONF=spdk_fips.conf 00:20:50.113 03:53:08 -- fips/fips.sh@117 -- # openssl list -providers 00:20:50.113 03:53:08 -- fips/fips.sh@117 -- # grep name 00:20:50.113 03:53:08 -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:20:50.113 03:53:08 -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:20:50.113 03:53:08 -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:20:50.113 03:53:08 -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:20:50.113 03:53:08 -- fips/fips.sh@128 -- # : 00:20:50.113 03:53:08 -- common/autotest_common.sh@640 -- # local es=0 00:20:50.113 03:53:08 -- common/autotest_common.sh@642 -- # valid_exec_arg openssl md5 /dev/fd/62 00:20:50.113 03:53:08 -- common/autotest_common.sh@628 -- # local arg=openssl 00:20:50.113 03:53:08 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:50.113 03:53:08 -- common/autotest_common.sh@632 -- # type -t openssl 00:20:50.113 03:53:08 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:50.113 03:53:08 -- common/autotest_common.sh@634 -- # type -P openssl 00:20:50.113 03:53:08 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:50.113 03:53:08 -- common/autotest_common.sh@634 -- # arg=/usr/bin/openssl 00:20:50.113 03:53:08 -- common/autotest_common.sh@634 -- # [[ -x /usr/bin/openssl ]] 00:20:50.113 03:53:08 -- common/autotest_common.sh@643 -- # openssl md5 /dev/fd/62 00:20:50.113 Error setting digest 00:20:50.113 001224333C7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:20:50.113 001224333C7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:20:50.113 03:53:08 -- common/autotest_common.sh@643 -- # es=1 00:20:50.113 03:53:08 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:20:50.113 03:53:08 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:20:50.113 03:53:08 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 
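The fips.sh prologue above makes three checks before any NVMe/TCP traffic is sent: the OpenSSL version must be at least 3.0.0, the provider list must contain both a base and a fips provider, and a non-approved digest (MD5) must actually be rejected once OPENSSL_CONF points at the generated spdk_fips.conf. A minimal stand-alone sketch of that sanity check, assuming an OpenSSL 3.x host with a FIPS provider module installed; it is not the test's exact code.

  # Sketch of the FIPS sanity check above (assumes OPENSSL_CONF already
  # points at a FIPS-enforcing config such as the generated spdk_fips.conf).
  openssl version
  openssl list -providers | grep -i 'name:'       # expect a base and a fips provider
  if openssl md5 /dev/null 2>/dev/null; then
      echo 'MD5 accepted - FIPS restrictions are NOT active'
  else
      echo 'MD5 rejected - FIPS restrictions are active'   # the outcome seen above
  fi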
00:20:50.113 03:53:08 -- fips/fips.sh@131 -- # nvmftestinit 00:20:50.113 03:53:08 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:20:50.113 03:53:08 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:50.113 03:53:08 -- nvmf/common.sh@436 -- # prepare_net_devs 00:20:50.113 03:53:08 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:20:50.113 03:53:08 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:20:50.113 03:53:08 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:50.113 03:53:08 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:50.113 03:53:08 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:50.113 03:53:08 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:20:50.113 03:53:08 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:20:50.113 03:53:08 -- nvmf/common.sh@284 -- # xtrace_disable 00:20:50.113 03:53:08 -- common/autotest_common.sh@10 -- # set +x 00:20:52.059 03:53:10 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:20:52.059 03:53:10 -- nvmf/common.sh@290 -- # pci_devs=() 00:20:52.059 03:53:10 -- nvmf/common.sh@290 -- # local -a pci_devs 00:20:52.059 03:53:10 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:20:52.059 03:53:10 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:20:52.059 03:53:10 -- nvmf/common.sh@292 -- # pci_drivers=() 00:20:52.059 03:53:10 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:20:52.059 03:53:10 -- nvmf/common.sh@294 -- # net_devs=() 00:20:52.059 03:53:10 -- nvmf/common.sh@294 -- # local -ga net_devs 00:20:52.060 03:53:10 -- nvmf/common.sh@295 -- # e810=() 00:20:52.060 03:53:10 -- nvmf/common.sh@295 -- # local -ga e810 00:20:52.060 03:53:10 -- nvmf/common.sh@296 -- # x722=() 00:20:52.060 03:53:10 -- nvmf/common.sh@296 -- # local -ga x722 00:20:52.060 03:53:10 -- nvmf/common.sh@297 -- # mlx=() 00:20:52.060 03:53:10 -- nvmf/common.sh@297 -- # local -ga mlx 00:20:52.060 03:53:10 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:52.060 03:53:10 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:52.060 03:53:10 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:52.060 03:53:10 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:52.060 03:53:10 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:52.060 03:53:10 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:52.060 03:53:10 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:52.060 03:53:10 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:52.060 03:53:10 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:52.060 03:53:10 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:52.060 03:53:10 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:52.060 03:53:10 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:20:52.060 03:53:10 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:20:52.060 03:53:10 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:20:52.060 03:53:10 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:20:52.060 03:53:10 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:20:52.060 03:53:10 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:20:52.060 03:53:10 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:20:52.060 03:53:10 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:20:52.060 Found 0000:0a:00.0 
(0x8086 - 0x159b) 00:20:52.060 03:53:10 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:20:52.060 03:53:10 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:20:52.060 03:53:10 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:52.060 03:53:10 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:52.060 03:53:10 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:20:52.060 03:53:10 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:20:52.060 03:53:10 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:20:52.060 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:20:52.060 03:53:10 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:20:52.060 03:53:10 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:20:52.060 03:53:10 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:52.060 03:53:10 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:52.060 03:53:10 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:20:52.060 03:53:10 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:20:52.060 03:53:10 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:20:52.060 03:53:10 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:20:52.060 03:53:10 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:20:52.060 03:53:10 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:52.060 03:53:10 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:20:52.060 03:53:10 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:52.060 03:53:10 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:20:52.060 Found net devices under 0000:0a:00.0: cvl_0_0 00:20:52.060 03:53:10 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:20:52.060 03:53:10 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:20:52.060 03:53:10 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:52.060 03:53:10 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:20:52.060 03:53:10 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:52.060 03:53:10 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:20:52.060 Found net devices under 0000:0a:00.1: cvl_0_1 00:20:52.060 03:53:10 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:20:52.060 03:53:10 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:20:52.060 03:53:10 -- nvmf/common.sh@402 -- # is_hw=yes 00:20:52.060 03:53:10 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:20:52.060 03:53:10 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:20:52.060 03:53:10 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:20:52.060 03:53:10 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:52.060 03:53:10 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:52.060 03:53:10 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:52.060 03:53:10 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:20:52.060 03:53:10 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:52.060 03:53:10 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:52.060 03:53:10 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:20:52.060 03:53:10 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:52.060 03:53:10 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:52.060 03:53:10 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:20:52.060 03:53:10 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:20:52.060 03:53:10 -- nvmf/common.sh@247 -- # ip netns 
add cvl_0_0_ns_spdk 00:20:52.060 03:53:10 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:52.060 03:53:10 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:52.060 03:53:10 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:52.060 03:53:10 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:20:52.060 03:53:10 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:52.060 03:53:10 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:52.060 03:53:10 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:52.060 03:53:10 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:20:52.060 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:52.060 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.186 ms 00:20:52.060 00:20:52.060 --- 10.0.0.2 ping statistics --- 00:20:52.060 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:52.060 rtt min/avg/max/mdev = 0.186/0.186/0.186/0.000 ms 00:20:52.060 03:53:10 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:52.060 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:52.060 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.080 ms 00:20:52.060 00:20:52.060 --- 10.0.0.1 ping statistics --- 00:20:52.060 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:52.060 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:20:52.060 03:53:10 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:52.060 03:53:10 -- nvmf/common.sh@410 -- # return 0 00:20:52.060 03:53:10 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:20:52.060 03:53:10 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:52.060 03:53:10 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:20:52.060 03:53:10 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:20:52.060 03:53:10 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:52.060 03:53:10 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:20:52.060 03:53:10 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:20:52.060 03:53:10 -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:20:52.060 03:53:10 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:20:52.060 03:53:10 -- common/autotest_common.sh@712 -- # xtrace_disable 00:20:52.060 03:53:10 -- common/autotest_common.sh@10 -- # set +x 00:20:52.060 03:53:10 -- nvmf/common.sh@469 -- # nvmfpid=2416038 00:20:52.060 03:53:10 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:52.060 03:53:10 -- nvmf/common.sh@470 -- # waitforlisten 2416038 00:20:52.060 03:53:10 -- common/autotest_common.sh@819 -- # '[' -z 2416038 ']' 00:20:52.060 03:53:10 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:52.060 03:53:10 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:52.060 03:53:10 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:52.060 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:52.060 03:53:10 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:52.060 03:53:10 -- common/autotest_common.sh@10 -- # set +x 00:20:52.060 [2024-07-14 03:53:10.993167] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
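Because both ends of the connection live on the same host, the harness above splits the two ports of the NIC: cvl_0_0 is moved into a private namespace and addressed as the target (10.0.0.2), while its sibling cvl_0_1 stays in the default namespace as the initiator (10.0.0.1); the two pings confirm reachability in both directions before the target is started inside the namespace. A condensed sketch of that topology, using the interface names from the trace:

  # Condensed form of the namespace setup traced above (nvmf_tcp_init).
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target-side port
  ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator address
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP port
  ping -c 1 10.0.0.2                                             # initiator -> target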
00:20:52.060 [2024-07-14 03:53:10.993263] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:52.320 EAL: No free 2048 kB hugepages reported on node 1 00:20:52.320 [2024-07-14 03:53:11.066028] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:52.320 [2024-07-14 03:53:11.154755] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:52.320 [2024-07-14 03:53:11.154929] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:52.320 [2024-07-14 03:53:11.154950] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:52.320 [2024-07-14 03:53:11.154964] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:52.320 [2024-07-14 03:53:11.154994] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:53.257 03:53:11 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:53.257 03:53:11 -- common/autotest_common.sh@852 -- # return 0 00:20:53.257 03:53:11 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:20:53.257 03:53:11 -- common/autotest_common.sh@718 -- # xtrace_disable 00:20:53.257 03:53:11 -- common/autotest_common.sh@10 -- # set +x 00:20:53.257 03:53:11 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:53.257 03:53:11 -- fips/fips.sh@134 -- # trap cleanup EXIT 00:20:53.257 03:53:11 -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:20:53.257 03:53:11 -- fips/fips.sh@138 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:20:53.257 03:53:11 -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:20:53.257 03:53:11 -- fips/fips.sh@140 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:20:53.257 03:53:11 -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:20:53.257 03:53:11 -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:20:53.257 03:53:11 -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:53.257 [2024-07-14 03:53:12.151720] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:53.257 [2024-07-14 03:53:12.167690] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:53.257 [2024-07-14 03:53:12.167946] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:53.518 malloc0 00:20:53.518 03:53:12 -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:53.518 03:53:12 -- fips/fips.sh@148 -- # bdevperf_pid=2416320 00:20:53.518 03:53:12 -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:53.518 03:53:12 -- fips/fips.sh@149 -- # waitforlisten 2416320 /var/tmp/bdevperf.sock 00:20:53.518 03:53:12 -- common/autotest_common.sh@819 -- # '[' -z 2416320 ']' 00:20:53.518 03:53:12 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:53.518 03:53:12 -- 
common/autotest_common.sh@824 -- # local max_retries=100 00:20:53.518 03:53:12 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:53.518 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:53.518 03:53:12 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:53.518 03:53:12 -- common/autotest_common.sh@10 -- # set +x 00:20:53.518 [2024-07-14 03:53:12.285003] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:20:53.518 [2024-07-14 03:53:12.285091] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2416320 ] 00:20:53.518 EAL: No free 2048 kB hugepages reported on node 1 00:20:53.518 [2024-07-14 03:53:12.346770] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:53.518 [2024-07-14 03:53:12.434424] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:54.468 03:53:13 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:54.468 03:53:13 -- common/autotest_common.sh@852 -- # return 0 00:20:54.468 03:53:13 -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:20:54.727 [2024-07-14 03:53:13.467293] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:54.727 TLSTESTn1 00:20:54.727 03:53:13 -- fips/fips.sh@155 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:54.985 Running I/O for 10 seconds... 
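On the client side above, bdevperf is started with its own RPC socket and a TLS-protected controller is attached to it with the same PSK file that setup_nvmf_tgt_conf fed to the target, so both ends derive the identical TLS key. A minimal sketch of that sequence, with the flags and NQNs taken from the trace (repository paths shortened here):

  # Sketch of the TLS attach traced above; paths are abbreviated, the PSK
  # file is the key.txt written by the fips test itself.
  ./build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
      -q 128 -o 4096 -w verify -t 10 &
  ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
      --psk ./test/nvmf/fips/key.txt
  ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests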
00:21:04.975 00:21:04.975 Latency(us) 00:21:04.975 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:04.975 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:04.975 Verification LBA range: start 0x0 length 0x2000 00:21:04.975 TLSTESTn1 : 10.03 1916.60 7.49 0.00 0.00 66680.92 5097.24 72235.24 00:21:04.975 =================================================================================================================== 00:21:04.975 Total : 1916.60 7.49 0.00 0.00 66680.92 5097.24 72235.24 00:21:04.975 0 00:21:04.975 03:53:23 -- fips/fips.sh@1 -- # cleanup 00:21:04.975 03:53:23 -- fips/fips.sh@15 -- # process_shm --id 0 00:21:04.975 03:53:23 -- common/autotest_common.sh@796 -- # type=--id 00:21:04.975 03:53:23 -- common/autotest_common.sh@797 -- # id=0 00:21:04.975 03:53:23 -- common/autotest_common.sh@798 -- # '[' --id = --pid ']' 00:21:04.975 03:53:23 -- common/autotest_common.sh@802 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:21:04.975 03:53:23 -- common/autotest_common.sh@802 -- # shm_files=nvmf_trace.0 00:21:04.975 03:53:23 -- common/autotest_common.sh@804 -- # [[ -z nvmf_trace.0 ]] 00:21:04.975 03:53:23 -- common/autotest_common.sh@808 -- # for n in $shm_files 00:21:04.975 03:53:23 -- common/autotest_common.sh@809 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:21:04.975 nvmf_trace.0 00:21:04.975 03:53:23 -- common/autotest_common.sh@811 -- # return 0 00:21:04.975 03:53:23 -- fips/fips.sh@16 -- # killprocess 2416320 00:21:04.975 03:53:23 -- common/autotest_common.sh@926 -- # '[' -z 2416320 ']' 00:21:04.975 03:53:23 -- common/autotest_common.sh@930 -- # kill -0 2416320 00:21:04.975 03:53:23 -- common/autotest_common.sh@931 -- # uname 00:21:04.975 03:53:23 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:04.975 03:53:23 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2416320 00:21:04.975 03:53:23 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:21:04.975 03:53:23 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:21:04.976 03:53:23 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2416320' 00:21:04.976 killing process with pid 2416320 00:21:04.976 03:53:23 -- common/autotest_common.sh@945 -- # kill 2416320 00:21:04.976 Received shutdown signal, test time was about 10.000000 seconds 00:21:04.976 00:21:04.976 Latency(us) 00:21:04.976 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:04.976 =================================================================================================================== 00:21:04.976 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:04.976 03:53:23 -- common/autotest_common.sh@950 -- # wait 2416320 00:21:05.234 03:53:24 -- fips/fips.sh@17 -- # nvmftestfini 00:21:05.234 03:53:24 -- nvmf/common.sh@476 -- # nvmfcleanup 00:21:05.234 03:53:24 -- nvmf/common.sh@116 -- # sync 00:21:05.234 03:53:24 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:21:05.234 03:53:24 -- nvmf/common.sh@119 -- # set +e 00:21:05.234 03:53:24 -- nvmf/common.sh@120 -- # for i in {1..20} 00:21:05.234 03:53:24 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:21:05.234 rmmod nvme_tcp 00:21:05.234 rmmod nvme_fabrics 00:21:05.234 rmmod nvme_keyring 00:21:05.234 03:53:24 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:21:05.234 03:53:24 -- nvmf/common.sh@123 -- # set -e 00:21:05.234 03:53:24 -- nvmf/common.sh@124 -- # return 0 
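A quick consistency check on the run reported above (queue depth 128, 4096-byte verify I/O over the TLS connection): 1916.60 IOPS x 4096 B ≈ 7.85 MB/s ≈ 7.49 MiB/s, which matches the MiB/s column, and by Little's law 128 outstanding commands / 1916.60 IOPS ≈ 66.8 ms per command, in line with the reported 66680.92 us average latency.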
00:21:05.234 03:53:24 -- nvmf/common.sh@477 -- # '[' -n 2416038 ']' 00:21:05.234 03:53:24 -- nvmf/common.sh@478 -- # killprocess 2416038 00:21:05.234 03:53:24 -- common/autotest_common.sh@926 -- # '[' -z 2416038 ']' 00:21:05.234 03:53:24 -- common/autotest_common.sh@930 -- # kill -0 2416038 00:21:05.234 03:53:24 -- common/autotest_common.sh@931 -- # uname 00:21:05.234 03:53:24 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:05.234 03:53:24 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2416038 00:21:05.234 03:53:24 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:21:05.234 03:53:24 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:21:05.234 03:53:24 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2416038' 00:21:05.234 killing process with pid 2416038 00:21:05.234 03:53:24 -- common/autotest_common.sh@945 -- # kill 2416038 00:21:05.234 03:53:24 -- common/autotest_common.sh@950 -- # wait 2416038 00:21:05.493 03:53:24 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:21:05.493 03:53:24 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:21:05.493 03:53:24 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:21:05.493 03:53:24 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:05.493 03:53:24 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:21:05.493 03:53:24 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:05.493 03:53:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:05.493 03:53:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:08.032 03:53:26 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:21:08.032 03:53:26 -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:08.032 00:21:08.032 real 0m17.899s 00:21:08.032 user 0m22.192s 00:21:08.032 sys 0m7.054s 00:21:08.032 03:53:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:08.032 03:53:26 -- common/autotest_common.sh@10 -- # set +x 00:21:08.032 ************************************ 00:21:08.032 END TEST nvmf_fips 00:21:08.032 ************************************ 00:21:08.032 03:53:26 -- nvmf/nvmf.sh@63 -- # '[' 1 -eq 1 ']' 00:21:08.032 03:53:26 -- nvmf/nvmf.sh@64 -- # run_test nvmf_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:21:08.032 03:53:26 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:21:08.032 03:53:26 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:21:08.032 03:53:26 -- common/autotest_common.sh@10 -- # set +x 00:21:08.032 ************************************ 00:21:08.032 START TEST nvmf_fuzz 00:21:08.032 ************************************ 00:21:08.032 03:53:26 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:21:08.032 * Looking for test storage... 
00:21:08.032 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:08.032 03:53:26 -- target/fabrics_fuzz.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:08.032 03:53:26 -- nvmf/common.sh@7 -- # uname -s 00:21:08.032 03:53:26 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:08.032 03:53:26 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:08.032 03:53:26 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:08.032 03:53:26 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:08.032 03:53:26 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:08.032 03:53:26 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:08.032 03:53:26 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:08.032 03:53:26 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:08.032 03:53:26 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:08.032 03:53:26 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:08.032 03:53:26 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:08.032 03:53:26 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:21:08.032 03:53:26 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:08.032 03:53:26 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:08.032 03:53:26 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:08.032 03:53:26 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:08.032 03:53:26 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:08.032 03:53:26 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:08.032 03:53:26 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:08.032 03:53:26 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:08.032 03:53:26 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:08.032 03:53:26 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:08.032 03:53:26 -- paths/export.sh@5 -- # export PATH 00:21:08.032 03:53:26 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:08.032 03:53:26 -- nvmf/common.sh@46 -- # : 0 00:21:08.032 03:53:26 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:21:08.032 03:53:26 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:21:08.032 03:53:26 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:21:08.032 03:53:26 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:08.032 03:53:26 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:08.032 03:53:26 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:21:08.032 03:53:26 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:21:08.032 03:53:26 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:21:08.032 03:53:26 -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:21:08.032 03:53:26 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:21:08.032 03:53:26 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:08.032 03:53:26 -- nvmf/common.sh@436 -- # prepare_net_devs 00:21:08.032 03:53:26 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:21:08.032 03:53:26 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:21:08.032 03:53:26 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:08.032 03:53:26 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:08.032 03:53:26 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:08.032 03:53:26 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:21:08.032 03:53:26 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:21:08.032 03:53:26 -- nvmf/common.sh@284 -- # xtrace_disable 00:21:08.032 03:53:26 -- common/autotest_common.sh@10 -- # set +x 00:21:09.937 03:53:28 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:21:09.937 03:53:28 -- nvmf/common.sh@290 -- # pci_devs=() 00:21:09.937 03:53:28 -- nvmf/common.sh@290 -- # local -a pci_devs 00:21:09.937 03:53:28 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:21:09.937 03:53:28 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:21:09.937 03:53:28 -- nvmf/common.sh@292 -- # pci_drivers=() 00:21:09.937 03:53:28 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:21:09.937 03:53:28 -- nvmf/common.sh@294 -- # net_devs=() 00:21:09.937 03:53:28 -- nvmf/common.sh@294 -- # local -ga net_devs 00:21:09.937 03:53:28 -- nvmf/common.sh@295 -- # e810=() 00:21:09.937 03:53:28 -- nvmf/common.sh@295 -- # local -ga e810 00:21:09.937 03:53:28 -- nvmf/common.sh@296 -- # x722=() 
00:21:09.937 03:53:28 -- nvmf/common.sh@296 -- # local -ga x722 00:21:09.937 03:53:28 -- nvmf/common.sh@297 -- # mlx=() 00:21:09.937 03:53:28 -- nvmf/common.sh@297 -- # local -ga mlx 00:21:09.937 03:53:28 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:09.937 03:53:28 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:09.938 03:53:28 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:09.938 03:53:28 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:09.938 03:53:28 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:09.938 03:53:28 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:09.938 03:53:28 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:09.938 03:53:28 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:09.938 03:53:28 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:09.938 03:53:28 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:09.938 03:53:28 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:09.938 03:53:28 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:21:09.938 03:53:28 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:21:09.938 03:53:28 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:21:09.938 03:53:28 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:21:09.938 03:53:28 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:21:09.938 03:53:28 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:21:09.938 03:53:28 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:21:09.938 03:53:28 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:21:09.938 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:21:09.938 03:53:28 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:21:09.938 03:53:28 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:21:09.938 03:53:28 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:09.938 03:53:28 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:09.938 03:53:28 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:21:09.938 03:53:28 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:21:09.938 03:53:28 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:21:09.938 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:21:09.938 03:53:28 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:21:09.938 03:53:28 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:21:09.938 03:53:28 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:09.938 03:53:28 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:09.938 03:53:28 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:21:09.938 03:53:28 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:21:09.938 03:53:28 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:21:09.938 03:53:28 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:21:09.938 03:53:28 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:21:09.938 03:53:28 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:09.938 03:53:28 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:21:09.938 03:53:28 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:09.938 03:53:28 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:21:09.938 Found net devices under 0000:0a:00.0: cvl_0_0 00:21:09.938 03:53:28 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 
00:21:09.938 03:53:28 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:21:09.938 03:53:28 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:09.938 03:53:28 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:21:09.938 03:53:28 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:09.938 03:53:28 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:21:09.938 Found net devices under 0000:0a:00.1: cvl_0_1 00:21:09.938 03:53:28 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:21:09.938 03:53:28 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:21:09.938 03:53:28 -- nvmf/common.sh@402 -- # is_hw=yes 00:21:09.938 03:53:28 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:21:09.938 03:53:28 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:21:09.938 03:53:28 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:21:09.938 03:53:28 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:09.938 03:53:28 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:09.938 03:53:28 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:09.938 03:53:28 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:21:09.938 03:53:28 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:09.938 03:53:28 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:09.938 03:53:28 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:21:09.938 03:53:28 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:09.938 03:53:28 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:09.938 03:53:28 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:21:09.938 03:53:28 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:21:09.938 03:53:28 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:21:09.938 03:53:28 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:09.938 03:53:28 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:09.938 03:53:28 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:09.938 03:53:28 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:21:09.938 03:53:28 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:09.938 03:53:28 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:09.938 03:53:28 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:09.938 03:53:28 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:21:09.938 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:09.938 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.199 ms 00:21:09.938 00:21:09.938 --- 10.0.0.2 ping statistics --- 00:21:09.938 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:09.938 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:21:09.938 03:53:28 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:09.938 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:09.938 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.102 ms 00:21:09.938 00:21:09.938 --- 10.0.0.1 ping statistics --- 00:21:09.938 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:09.938 rtt min/avg/max/mdev = 0.102/0.102/0.102/0.000 ms 00:21:09.938 03:53:28 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:09.938 03:53:28 -- nvmf/common.sh@410 -- # return 0 00:21:09.938 03:53:28 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:21:09.938 03:53:28 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:09.938 03:53:28 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:21:09.938 03:53:28 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:21:09.938 03:53:28 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:09.938 03:53:28 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:21:09.938 03:53:28 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:21:09.938 03:53:28 -- target/fabrics_fuzz.sh@14 -- # nvmfpid=2419626 00:21:09.938 03:53:28 -- target/fabrics_fuzz.sh@13 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:21:09.938 03:53:28 -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:21:09.938 03:53:28 -- target/fabrics_fuzz.sh@18 -- # waitforlisten 2419626 00:21:09.938 03:53:28 -- common/autotest_common.sh@819 -- # '[' -z 2419626 ']' 00:21:09.938 03:53:28 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:09.938 03:53:28 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:09.938 03:53:28 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:09.938 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
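The fuzz stage that follows brings up a minimal target (tcp transport with 8192-byte I/O units, one 64 MiB malloc namespace under nqn.2016-06.io.spdk:cnode1, listener on 10.0.0.2:4420) and then drives it with nvme_fuzz twice: a 30-second randomized run with a fixed seed so results are reproducible, and a replay of the known-bad commands in example.json. A condensed sketch of those two invocations, using the flags and paths visible in the trace below (repository path shortened):

  # Condensed form of the two fuzz runs traced below.
  TRID='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420'
  ./test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz \
      -t 30 -S 123456 -F "$TRID" -N -a                           # randomized admin + I/O commands
  ./test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz \
      -F "$TRID" -j ./test/app/fuzz/nvme_fuzz/example.json -a    # JSON corpus replay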
00:21:09.938 03:53:28 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:09.938 03:53:28 -- common/autotest_common.sh@10 -- # set +x 00:21:10.876 03:53:29 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:10.876 03:53:29 -- common/autotest_common.sh@852 -- # return 0 00:21:10.876 03:53:29 -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:10.876 03:53:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:10.876 03:53:29 -- common/autotest_common.sh@10 -- # set +x 00:21:10.876 03:53:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:10.876 03:53:29 -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:21:10.876 03:53:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:10.876 03:53:29 -- common/autotest_common.sh@10 -- # set +x 00:21:10.876 Malloc0 00:21:10.876 03:53:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:10.876 03:53:29 -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:10.876 03:53:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:10.876 03:53:29 -- common/autotest_common.sh@10 -- # set +x 00:21:10.876 03:53:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:10.876 03:53:29 -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:10.876 03:53:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:10.876 03:53:29 -- common/autotest_common.sh@10 -- # set +x 00:21:10.876 03:53:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:10.876 03:53:29 -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:10.876 03:53:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:10.876 03:53:29 -- common/autotest_common.sh@10 -- # set +x 00:21:10.876 03:53:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:10.876 03:53:29 -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:21:10.876 03:53:29 -- target/fabrics_fuzz.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:21:42.983 Fuzzing completed. Shutting down the fuzz application 00:21:42.983 00:21:42.983 Dumping successful admin opcodes: 00:21:42.983 8, 9, 10, 24, 00:21:42.983 Dumping successful io opcodes: 00:21:42.983 0, 9, 00:21:42.983 NS: 0x200003aeff00 I/O qp, Total commands completed: 445844, total successful commands: 2587, random_seed: 928857920 00:21:42.983 NS: 0x200003aeff00 admin qp, Total commands completed: 55776, total successful commands: 444, random_seed: 487365120 00:21:42.983 03:54:00 -- target/fabrics_fuzz.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:21:42.983 Fuzzing completed. 
Shutting down the fuzz application 00:21:42.983 00:21:42.983 Dumping successful admin opcodes: 00:21:42.983 24, 00:21:42.983 Dumping successful io opcodes: 00:21:42.983 00:21:42.983 NS: 0x200003aeff00 I/O qp, Total commands completed: 0, total successful commands: 0, random_seed: 3845564818 00:21:42.983 NS: 0x200003aeff00 admin qp, Total commands completed: 16, total successful commands: 4, random_seed: 3845679466 00:21:42.983 03:54:01 -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:42.983 03:54:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:42.983 03:54:01 -- common/autotest_common.sh@10 -- # set +x 00:21:42.983 03:54:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:42.983 03:54:01 -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:21:42.983 03:54:01 -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:21:42.983 03:54:01 -- nvmf/common.sh@476 -- # nvmfcleanup 00:21:42.983 03:54:01 -- nvmf/common.sh@116 -- # sync 00:21:42.983 03:54:01 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:21:42.983 03:54:01 -- nvmf/common.sh@119 -- # set +e 00:21:42.983 03:54:01 -- nvmf/common.sh@120 -- # for i in {1..20} 00:21:42.983 03:54:01 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:21:42.983 rmmod nvme_tcp 00:21:42.983 rmmod nvme_fabrics 00:21:42.983 rmmod nvme_keyring 00:21:42.983 03:54:01 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:21:42.983 03:54:01 -- nvmf/common.sh@123 -- # set -e 00:21:42.983 03:54:01 -- nvmf/common.sh@124 -- # return 0 00:21:42.983 03:54:01 -- nvmf/common.sh@477 -- # '[' -n 2419626 ']' 00:21:42.983 03:54:01 -- nvmf/common.sh@478 -- # killprocess 2419626 00:21:42.983 03:54:01 -- common/autotest_common.sh@926 -- # '[' -z 2419626 ']' 00:21:42.983 03:54:01 -- common/autotest_common.sh@930 -- # kill -0 2419626 00:21:42.983 03:54:01 -- common/autotest_common.sh@931 -- # uname 00:21:42.983 03:54:01 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:42.983 03:54:01 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2419626 00:21:42.983 03:54:01 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:21:42.983 03:54:01 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:21:42.983 03:54:01 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2419626' 00:21:42.983 killing process with pid 2419626 00:21:42.983 03:54:01 -- common/autotest_common.sh@945 -- # kill 2419626 00:21:42.983 03:54:01 -- common/autotest_common.sh@950 -- # wait 2419626 00:21:42.983 03:54:01 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:21:42.983 03:54:01 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:21:42.983 03:54:01 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:21:42.983 03:54:01 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:42.983 03:54:01 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:21:42.983 03:54:01 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:42.983 03:54:01 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:42.983 03:54:01 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:44.888 03:54:03 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:21:44.888 03:54:03 -- target/fabrics_fuzz.sh@39 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs1.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs2.txt 00:21:45.146 00:21:45.146 real 0m37.359s 00:21:45.146 user 0m51.080s 00:21:45.146 sys 
0m15.568s 00:21:45.146 03:54:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:45.146 03:54:03 -- common/autotest_common.sh@10 -- # set +x 00:21:45.146 ************************************ 00:21:45.146 END TEST nvmf_fuzz 00:21:45.146 ************************************ 00:21:45.146 03:54:03 -- nvmf/nvmf.sh@65 -- # run_test nvmf_multiconnection /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:21:45.146 03:54:03 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:21:45.146 03:54:03 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:21:45.146 03:54:03 -- common/autotest_common.sh@10 -- # set +x 00:21:45.146 ************************************ 00:21:45.146 START TEST nvmf_multiconnection 00:21:45.146 ************************************ 00:21:45.146 03:54:03 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:21:45.146 * Looking for test storage... 00:21:45.146 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:45.146 03:54:03 -- target/multiconnection.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:45.146 03:54:03 -- nvmf/common.sh@7 -- # uname -s 00:21:45.146 03:54:03 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:45.146 03:54:03 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:45.146 03:54:03 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:45.146 03:54:03 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:45.146 03:54:03 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:45.146 03:54:03 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:45.146 03:54:03 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:45.146 03:54:03 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:45.146 03:54:03 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:45.146 03:54:03 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:45.146 03:54:03 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:45.146 03:54:03 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:21:45.146 03:54:03 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:45.146 03:54:03 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:45.146 03:54:03 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:45.146 03:54:03 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:45.146 03:54:03 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:45.146 03:54:03 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:45.146 03:54:03 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:45.146 03:54:03 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:45.146 03:54:03 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:45.146 03:54:03 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:45.146 03:54:03 -- paths/export.sh@5 -- # export PATH 00:21:45.146 03:54:03 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:45.146 03:54:03 -- nvmf/common.sh@46 -- # : 0 00:21:45.146 03:54:03 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:21:45.146 03:54:03 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:21:45.146 03:54:03 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:21:45.146 03:54:03 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:45.146 03:54:03 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:45.146 03:54:03 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:21:45.146 03:54:03 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:21:45.146 03:54:03 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:21:45.146 03:54:03 -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:45.146 03:54:03 -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:45.146 03:54:03 -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:21:45.147 03:54:03 -- target/multiconnection.sh@16 -- # nvmftestinit 00:21:45.147 03:54:03 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:21:45.147 03:54:03 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:45.147 03:54:03 -- nvmf/common.sh@436 -- # prepare_net_devs 00:21:45.147 03:54:03 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:21:45.147 03:54:03 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:21:45.147 03:54:03 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:45.147 03:54:03 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:45.147 03:54:03 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:45.147 03:54:03 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:21:45.147 03:54:03 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:21:45.147 03:54:03 -- nvmf/common.sh@284 -- # xtrace_disable 00:21:45.147 03:54:03 -- common/autotest_common.sh@10 -- 
# set +x 00:21:47.054 03:54:05 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:21:47.054 03:54:05 -- nvmf/common.sh@290 -- # pci_devs=() 00:21:47.054 03:54:05 -- nvmf/common.sh@290 -- # local -a pci_devs 00:21:47.054 03:54:05 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:21:47.054 03:54:05 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:21:47.054 03:54:05 -- nvmf/common.sh@292 -- # pci_drivers=() 00:21:47.054 03:54:05 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:21:47.054 03:54:05 -- nvmf/common.sh@294 -- # net_devs=() 00:21:47.054 03:54:05 -- nvmf/common.sh@294 -- # local -ga net_devs 00:21:47.054 03:54:05 -- nvmf/common.sh@295 -- # e810=() 00:21:47.054 03:54:05 -- nvmf/common.sh@295 -- # local -ga e810 00:21:47.054 03:54:05 -- nvmf/common.sh@296 -- # x722=() 00:21:47.054 03:54:05 -- nvmf/common.sh@296 -- # local -ga x722 00:21:47.054 03:54:05 -- nvmf/common.sh@297 -- # mlx=() 00:21:47.054 03:54:05 -- nvmf/common.sh@297 -- # local -ga mlx 00:21:47.054 03:54:05 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:47.054 03:54:05 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:47.054 03:54:05 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:47.054 03:54:05 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:47.054 03:54:05 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:47.054 03:54:05 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:47.054 03:54:05 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:47.054 03:54:05 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:47.054 03:54:05 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:47.054 03:54:05 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:47.054 03:54:05 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:47.054 03:54:05 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:21:47.054 03:54:05 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:21:47.054 03:54:05 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:21:47.054 03:54:05 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:21:47.054 03:54:05 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:21:47.054 03:54:05 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:21:47.054 03:54:05 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:21:47.054 03:54:05 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:21:47.054 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:21:47.054 03:54:05 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:21:47.054 03:54:05 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:21:47.054 03:54:05 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:47.054 03:54:05 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:47.054 03:54:05 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:21:47.054 03:54:05 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:21:47.054 03:54:05 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:21:47.054 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:21:47.054 03:54:05 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:21:47.054 03:54:05 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:21:47.054 03:54:05 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:47.054 03:54:05 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:47.054 03:54:05 -- 
nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:21:47.054 03:54:05 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:21:47.054 03:54:05 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:21:47.054 03:54:05 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:21:47.054 03:54:05 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:21:47.054 03:54:05 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:47.054 03:54:05 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:21:47.054 03:54:05 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:47.054 03:54:05 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:21:47.054 Found net devices under 0000:0a:00.0: cvl_0_0 00:21:47.054 03:54:05 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:21:47.054 03:54:05 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:21:47.054 03:54:05 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:47.054 03:54:05 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:21:47.054 03:54:05 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:47.054 03:54:05 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:21:47.054 Found net devices under 0000:0a:00.1: cvl_0_1 00:21:47.054 03:54:05 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:21:47.054 03:54:05 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:21:47.054 03:54:05 -- nvmf/common.sh@402 -- # is_hw=yes 00:21:47.054 03:54:05 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:21:47.054 03:54:05 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:21:47.054 03:54:05 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:21:47.054 03:54:05 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:47.054 03:54:05 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:47.054 03:54:05 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:47.054 03:54:05 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:21:47.054 03:54:05 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:47.054 03:54:05 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:47.054 03:54:05 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:21:47.054 03:54:05 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:47.054 03:54:05 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:47.054 03:54:05 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:21:47.054 03:54:05 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:21:47.054 03:54:05 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:21:47.054 03:54:05 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:47.054 03:54:05 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:47.054 03:54:05 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:47.054 03:54:05 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:21:47.054 03:54:05 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:47.054 03:54:05 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:47.054 03:54:05 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:47.054 03:54:05 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:21:47.054 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:47.054 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.181 ms 00:21:47.054 00:21:47.054 --- 10.0.0.2 ping statistics --- 00:21:47.054 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:47.054 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:21:47.054 03:54:05 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:47.055 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:47.055 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.097 ms 00:21:47.055 00:21:47.055 --- 10.0.0.1 ping statistics --- 00:21:47.055 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:47.055 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:21:47.055 03:54:05 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:47.055 03:54:05 -- nvmf/common.sh@410 -- # return 0 00:21:47.055 03:54:05 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:21:47.055 03:54:05 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:47.055 03:54:05 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:21:47.055 03:54:05 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:21:47.055 03:54:05 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:47.055 03:54:05 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:21:47.055 03:54:05 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:21:47.055 03:54:05 -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:21:47.055 03:54:05 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:21:47.055 03:54:05 -- common/autotest_common.sh@712 -- # xtrace_disable 00:21:47.055 03:54:05 -- common/autotest_common.sh@10 -- # set +x 00:21:47.055 03:54:05 -- nvmf/common.sh@469 -- # nvmfpid=2425615 00:21:47.055 03:54:05 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:47.055 03:54:05 -- nvmf/common.sh@470 -- # waitforlisten 2425615 00:21:47.055 03:54:05 -- common/autotest_common.sh@819 -- # '[' -z 2425615 ']' 00:21:47.055 03:54:05 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:47.055 03:54:05 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:47.055 03:54:05 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:47.055 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:47.055 03:54:05 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:47.055 03:54:05 -- common/autotest_common.sh@10 -- # set +x 00:21:47.315 [2024-07-14 03:54:06.028801] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:21:47.315 [2024-07-14 03:54:06.028907] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:47.315 EAL: No free 2048 kB hugepages reported on node 1 00:21:47.315 [2024-07-14 03:54:06.099808] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:47.315 [2024-07-14 03:54:06.190042] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:21:47.315 [2024-07-14 03:54:06.190228] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
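The block above is the nvmf_tcp_init step from nvmf/common.sh: one port of the detected E810 pair (cvl_0_0) is moved into a private network namespace for the target while its peer (cvl_0_1) stays in the root namespace as the initiator side, the two ends get 10.0.0.2 and 10.0.0.1, TCP port 4420 is opened, and a ping in each direction verifies the link before the target is started. A minimal sketch of the equivalent manual sequence, using the namespace and interface names from this run:

  ip netns add cvl_0_0_ns_spdk                                    # private namespace for the target
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                       # move the target-facing port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                             # initiator address, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT    # let NVMe/TCP traffic in
  ping -c 1 10.0.0.2                                              # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                # target -> initiator

nvmf_tgt is then launched inside the same namespace with "-m 0xF", which is why four reactor cores are reported in the startup messages around this point.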
00:21:47.315 [2024-07-14 03:54:06.190248] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:47.315 [2024-07-14 03:54:06.190267] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:47.315 [2024-07-14 03:54:06.190339] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:47.315 [2024-07-14 03:54:06.190409] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:47.315 [2024-07-14 03:54:06.190500] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:47.315 [2024-07-14 03:54:06.190502] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:48.250 03:54:06 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:48.250 03:54:06 -- common/autotest_common.sh@852 -- # return 0 00:21:48.250 03:54:06 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:21:48.250 03:54:06 -- common/autotest_common.sh@718 -- # xtrace_disable 00:21:48.250 03:54:06 -- common/autotest_common.sh@10 -- # set +x 00:21:48.250 03:54:06 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:48.250 03:54:06 -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:48.250 03:54:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:48.250 03:54:06 -- common/autotest_common.sh@10 -- # set +x 00:21:48.250 [2024-07-14 03:54:06.994411] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:48.250 03:54:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:48.250 03:54:07 -- target/multiconnection.sh@21 -- # seq 1 11 00:21:48.250 03:54:07 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:48.250 03:54:07 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:21:48.250 03:54:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:48.250 03:54:07 -- common/autotest_common.sh@10 -- # set +x 00:21:48.250 Malloc1 00:21:48.250 03:54:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:48.250 03:54:07 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:21:48.250 03:54:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:48.250 03:54:07 -- common/autotest_common.sh@10 -- # set +x 00:21:48.250 03:54:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:48.250 03:54:07 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:48.250 03:54:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:48.250 03:54:07 -- common/autotest_common.sh@10 -- # set +x 00:21:48.250 03:54:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:48.250 03:54:07 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:48.250 03:54:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:48.250 03:54:07 -- common/autotest_common.sh@10 -- # set +x 00:21:48.250 [2024-07-14 03:54:07.049402] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:48.250 03:54:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:48.250 03:54:07 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:48.250 03:54:07 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:21:48.250 03:54:07 -- 
common/autotest_common.sh@551 -- # xtrace_disable 00:21:48.250 03:54:07 -- common/autotest_common.sh@10 -- # set +x 00:21:48.250 Malloc2 00:21:48.250 03:54:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:48.250 03:54:07 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:21:48.250 03:54:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:48.250 03:54:07 -- common/autotest_common.sh@10 -- # set +x 00:21:48.250 03:54:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:48.250 03:54:07 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:21:48.250 03:54:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:48.250 03:54:07 -- common/autotest_common.sh@10 -- # set +x 00:21:48.250 03:54:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:48.250 03:54:07 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:21:48.250 03:54:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:48.250 03:54:07 -- common/autotest_common.sh@10 -- # set +x 00:21:48.250 03:54:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:48.250 03:54:07 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:48.250 03:54:07 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:21:48.250 03:54:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:48.250 03:54:07 -- common/autotest_common.sh@10 -- # set +x 00:21:48.250 Malloc3 00:21:48.250 03:54:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:48.250 03:54:07 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:21:48.250 03:54:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:48.250 03:54:07 -- common/autotest_common.sh@10 -- # set +x 00:21:48.250 03:54:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:48.250 03:54:07 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:21:48.250 03:54:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:48.250 03:54:07 -- common/autotest_common.sh@10 -- # set +x 00:21:48.250 03:54:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:48.250 03:54:07 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:21:48.250 03:54:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:48.250 03:54:07 -- common/autotest_common.sh@10 -- # set +x 00:21:48.250 03:54:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:48.250 03:54:07 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:48.250 03:54:07 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:21:48.250 03:54:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:48.250 03:54:07 -- common/autotest_common.sh@10 -- # set +x 00:21:48.250 Malloc4 00:21:48.250 03:54:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:48.250 03:54:07 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:21:48.250 03:54:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:48.250 03:54:07 -- common/autotest_common.sh@10 -- # set +x 00:21:48.250 03:54:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:48.250 03:54:07 -- target/multiconnection.sh@24 -- # 
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:21:48.250 03:54:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:48.250 03:54:07 -- common/autotest_common.sh@10 -- # set +x 00:21:48.250 03:54:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:48.250 03:54:07 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:21:48.250 03:54:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:48.250 03:54:07 -- common/autotest_common.sh@10 -- # set +x 00:21:48.509 03:54:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:48.509 03:54:07 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:48.509 03:54:07 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:21:48.509 03:54:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:48.509 03:54:07 -- common/autotest_common.sh@10 -- # set +x 00:21:48.509 Malloc5 00:21:48.509 03:54:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:48.509 03:54:07 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:21:48.509 03:54:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:48.509 03:54:07 -- common/autotest_common.sh@10 -- # set +x 00:21:48.509 03:54:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:48.509 03:54:07 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:21:48.509 03:54:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:48.509 03:54:07 -- common/autotest_common.sh@10 -- # set +x 00:21:48.509 03:54:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:48.509 03:54:07 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:21:48.509 03:54:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:48.509 03:54:07 -- common/autotest_common.sh@10 -- # set +x 00:21:48.509 03:54:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:48.509 03:54:07 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:48.509 03:54:07 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:21:48.509 03:54:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:48.509 03:54:07 -- common/autotest_common.sh@10 -- # set +x 00:21:48.509 Malloc6 00:21:48.509 03:54:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:48.509 03:54:07 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:21:48.509 03:54:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:48.509 03:54:07 -- common/autotest_common.sh@10 -- # set +x 00:21:48.509 03:54:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:48.509 03:54:07 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:21:48.509 03:54:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:48.509 03:54:07 -- common/autotest_common.sh@10 -- # set +x 00:21:48.509 03:54:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:48.509 03:54:07 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:21:48.509 03:54:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:48.509 03:54:07 -- common/autotest_common.sh@10 -- # set +x 00:21:48.509 03:54:07 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:48.509 03:54:07 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:48.509 03:54:07 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:21:48.509 03:54:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:48.509 03:54:07 -- common/autotest_common.sh@10 -- # set +x 00:21:48.509 Malloc7 00:21:48.509 03:54:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:48.509 03:54:07 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:21:48.509 03:54:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:48.509 03:54:07 -- common/autotest_common.sh@10 -- # set +x 00:21:48.509 03:54:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:48.509 03:54:07 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:21:48.509 03:54:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:48.509 03:54:07 -- common/autotest_common.sh@10 -- # set +x 00:21:48.509 03:54:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:48.509 03:54:07 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 00:21:48.509 03:54:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:48.509 03:54:07 -- common/autotest_common.sh@10 -- # set +x 00:21:48.509 03:54:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:48.509 03:54:07 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:48.509 03:54:07 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:21:48.509 03:54:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:48.509 03:54:07 -- common/autotest_common.sh@10 -- # set +x 00:21:48.509 Malloc8 00:21:48.509 03:54:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:48.509 03:54:07 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:21:48.509 03:54:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:48.509 03:54:07 -- common/autotest_common.sh@10 -- # set +x 00:21:48.509 03:54:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:48.509 03:54:07 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:21:48.509 03:54:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:48.509 03:54:07 -- common/autotest_common.sh@10 -- # set +x 00:21:48.509 03:54:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:48.509 03:54:07 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:21:48.509 03:54:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:48.509 03:54:07 -- common/autotest_common.sh@10 -- # set +x 00:21:48.509 03:54:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:48.509 03:54:07 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:48.509 03:54:07 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:21:48.509 03:54:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:48.509 03:54:07 -- common/autotest_common.sh@10 -- # set +x 00:21:48.509 Malloc9 00:21:48.509 03:54:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:48.509 03:54:07 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 
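The sequence above (and continuing below for subsystems 9 through 11) is the "seq 1 11" setup loop in target/multiconnection.sh: for every subsystem it creates a 64 MiB malloc bdev with 512-byte blocks, creates the subsystem, attaches the bdev as a namespace, and adds a TCP listener on 10.0.0.2:4420; the transport itself is created once, before the loop. One iteration, using the same rpc_cmd calls and arguments that appear in this log:

  rpc_cmd nvmf_create_transport -t tcp -o -u 8192                        # done once, before the loop
  rpc_cmd bdev_malloc_create 64 512 -b Malloc1                           # 64 MiB ramdisk, 512 B blocks
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1   # allow any host, serial SPDK1
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1       # expose the bdev as a namespace
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

Subsystems 2 through 11 repeat the same five-line pattern with Malloc2..Malloc11, cnode2..cnode11 and serials SPDK2..SPDK11, all sharing the single 4420 portal.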
00:21:48.509 03:54:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:48.510 03:54:07 -- common/autotest_common.sh@10 -- # set +x 00:21:48.510 03:54:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:48.510 03:54:07 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:21:48.510 03:54:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:48.510 03:54:07 -- common/autotest_common.sh@10 -- # set +x 00:21:48.510 03:54:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:48.510 03:54:07 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:21:48.510 03:54:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:48.510 03:54:07 -- common/autotest_common.sh@10 -- # set +x 00:21:48.768 03:54:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:48.768 03:54:07 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:48.768 03:54:07 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:21:48.768 03:54:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:48.768 03:54:07 -- common/autotest_common.sh@10 -- # set +x 00:21:48.768 Malloc10 00:21:48.768 03:54:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:48.768 03:54:07 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:21:48.768 03:54:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:48.768 03:54:07 -- common/autotest_common.sh@10 -- # set +x 00:21:48.768 03:54:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:48.768 03:54:07 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:21:48.768 03:54:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:48.768 03:54:07 -- common/autotest_common.sh@10 -- # set +x 00:21:48.768 03:54:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:48.768 03:54:07 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:21:48.768 03:54:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:48.768 03:54:07 -- common/autotest_common.sh@10 -- # set +x 00:21:48.768 03:54:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:48.768 03:54:07 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:48.768 03:54:07 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:21:48.768 03:54:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:48.768 03:54:07 -- common/autotest_common.sh@10 -- # set +x 00:21:48.768 Malloc11 00:21:48.768 03:54:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:48.768 03:54:07 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:21:48.768 03:54:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:48.768 03:54:07 -- common/autotest_common.sh@10 -- # set +x 00:21:48.768 03:54:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:48.768 03:54:07 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:21:48.768 03:54:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:48.768 03:54:07 -- common/autotest_common.sh@10 -- # set +x 00:21:48.768 03:54:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:48.768 03:54:07 -- 
target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 00:21:48.768 03:54:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:48.768 03:54:07 -- common/autotest_common.sh@10 -- # set +x 00:21:48.768 03:54:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:48.768 03:54:07 -- target/multiconnection.sh@28 -- # seq 1 11 00:21:48.768 03:54:07 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:48.768 03:54:07 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:21:49.336 03:54:08 -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:21:49.336 03:54:08 -- common/autotest_common.sh@1177 -- # local i=0 00:21:49.336 03:54:08 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:21:49.336 03:54:08 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:21:49.336 03:54:08 -- common/autotest_common.sh@1184 -- # sleep 2 00:21:51.240 03:54:10 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:21:51.240 03:54:10 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:21:51.240 03:54:10 -- common/autotest_common.sh@1186 -- # grep -c SPDK1 00:21:51.240 03:54:10 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:21:51.240 03:54:10 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:21:51.240 03:54:10 -- common/autotest_common.sh@1187 -- # return 0 00:21:51.240 03:54:10 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:51.240 03:54:10 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:21:52.180 03:54:10 -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:21:52.180 03:54:10 -- common/autotest_common.sh@1177 -- # local i=0 00:21:52.180 03:54:10 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:21:52.180 03:54:10 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:21:52.180 03:54:10 -- common/autotest_common.sh@1184 -- # sleep 2 00:21:54.085 03:54:12 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:21:54.085 03:54:12 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:21:54.085 03:54:12 -- common/autotest_common.sh@1186 -- # grep -c SPDK2 00:21:54.085 03:54:12 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:21:54.085 03:54:12 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:21:54.085 03:54:12 -- common/autotest_common.sh@1187 -- # return 0 00:21:54.085 03:54:12 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:54.085 03:54:12 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:21:55.019 03:54:13 -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:21:55.019 03:54:13 -- common/autotest_common.sh@1177 -- # local i=0 00:21:55.019 03:54:13 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:21:55.019 03:54:13 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:21:55.019 03:54:13 -- 
common/autotest_common.sh@1184 -- # sleep 2 00:21:56.955 03:54:15 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:21:56.955 03:54:15 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:21:56.955 03:54:15 -- common/autotest_common.sh@1186 -- # grep -c SPDK3 00:21:56.955 03:54:15 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:21:56.955 03:54:15 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:21:56.955 03:54:15 -- common/autotest_common.sh@1187 -- # return 0 00:21:56.955 03:54:15 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:56.955 03:54:15 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:21:57.521 03:54:16 -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:21:57.521 03:54:16 -- common/autotest_common.sh@1177 -- # local i=0 00:21:57.522 03:54:16 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:21:57.522 03:54:16 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:21:57.522 03:54:16 -- common/autotest_common.sh@1184 -- # sleep 2 00:22:00.058 03:54:18 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:22:00.058 03:54:18 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:22:00.058 03:54:18 -- common/autotest_common.sh@1186 -- # grep -c SPDK4 00:22:00.058 03:54:18 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:22:00.058 03:54:18 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:22:00.058 03:54:18 -- common/autotest_common.sh@1187 -- # return 0 00:22:00.058 03:54:18 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:00.058 03:54:18 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:22:00.318 03:54:19 -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:22:00.318 03:54:19 -- common/autotest_common.sh@1177 -- # local i=0 00:22:00.318 03:54:19 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:22:00.318 03:54:19 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:22:00.318 03:54:19 -- common/autotest_common.sh@1184 -- # sleep 2 00:22:02.222 03:54:21 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:22:02.222 03:54:21 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:22:02.222 03:54:21 -- common/autotest_common.sh@1186 -- # grep -c SPDK5 00:22:02.222 03:54:21 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:22:02.222 03:54:21 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:22:02.222 03:54:21 -- common/autotest_common.sh@1187 -- # return 0 00:22:02.222 03:54:21 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:02.222 03:54:21 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:22:03.159 03:54:21 -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:22:03.159 03:54:21 -- common/autotest_common.sh@1177 -- # local i=0 00:22:03.159 03:54:21 -- common/autotest_common.sh@1178 -- # local 
nvme_device_counter=1 nvme_devices=0 00:22:03.159 03:54:21 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:22:03.159 03:54:21 -- common/autotest_common.sh@1184 -- # sleep 2 00:22:05.065 03:54:23 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:22:05.065 03:54:23 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:22:05.065 03:54:23 -- common/autotest_common.sh@1186 -- # grep -c SPDK6 00:22:05.065 03:54:23 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:22:05.065 03:54:23 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:22:05.065 03:54:23 -- common/autotest_common.sh@1187 -- # return 0 00:22:05.065 03:54:23 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:05.065 03:54:23 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:22:05.999 03:54:24 -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:22:05.999 03:54:24 -- common/autotest_common.sh@1177 -- # local i=0 00:22:05.999 03:54:24 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:22:05.999 03:54:24 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:22:05.999 03:54:24 -- common/autotest_common.sh@1184 -- # sleep 2 00:22:07.906 03:54:26 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:22:07.906 03:54:26 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:22:07.906 03:54:26 -- common/autotest_common.sh@1186 -- # grep -c SPDK7 00:22:07.906 03:54:26 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:22:07.906 03:54:26 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:22:07.906 03:54:26 -- common/autotest_common.sh@1187 -- # return 0 00:22:07.906 03:54:26 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:07.906 03:54:26 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:22:08.843 03:54:27 -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:22:08.844 03:54:27 -- common/autotest_common.sh@1177 -- # local i=0 00:22:08.844 03:54:27 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:22:08.844 03:54:27 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:22:08.844 03:54:27 -- common/autotest_common.sh@1184 -- # sleep 2 00:22:10.789 03:54:29 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:22:10.789 03:54:29 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:22:10.789 03:54:29 -- common/autotest_common.sh@1186 -- # grep -c SPDK8 00:22:10.789 03:54:29 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:22:10.789 03:54:29 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:22:10.789 03:54:29 -- common/autotest_common.sh@1187 -- # return 0 00:22:10.789 03:54:29 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:10.789 03:54:29 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:22:11.725 03:54:30 -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:22:11.725 
03:54:30 -- common/autotest_common.sh@1177 -- # local i=0 00:22:11.725 03:54:30 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:22:11.725 03:54:30 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:22:11.725 03:54:30 -- common/autotest_common.sh@1184 -- # sleep 2 00:22:13.631 03:54:32 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:22:13.631 03:54:32 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:22:13.631 03:54:32 -- common/autotest_common.sh@1186 -- # grep -c SPDK9 00:22:13.631 03:54:32 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:22:13.631 03:54:32 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:22:13.631 03:54:32 -- common/autotest_common.sh@1187 -- # return 0 00:22:13.631 03:54:32 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:13.631 03:54:32 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:22:14.565 03:54:33 -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:22:14.565 03:54:33 -- common/autotest_common.sh@1177 -- # local i=0 00:22:14.565 03:54:33 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:22:14.565 03:54:33 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:22:14.565 03:54:33 -- common/autotest_common.sh@1184 -- # sleep 2 00:22:16.516 03:54:35 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:22:16.516 03:54:35 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:22:16.516 03:54:35 -- common/autotest_common.sh@1186 -- # grep -c SPDK10 00:22:16.516 03:54:35 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:22:16.516 03:54:35 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:22:16.516 03:54:35 -- common/autotest_common.sh@1187 -- # return 0 00:22:16.516 03:54:35 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:16.516 03:54:35 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:22:17.464 03:54:36 -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:22:17.464 03:54:36 -- common/autotest_common.sh@1177 -- # local i=0 00:22:17.464 03:54:36 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:22:17.464 03:54:36 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:22:17.464 03:54:36 -- common/autotest_common.sh@1184 -- # sleep 2 00:22:19.997 03:54:38 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:22:19.997 03:54:38 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:22:19.997 03:54:38 -- common/autotest_common.sh@1186 -- # grep -c SPDK11 00:22:19.997 03:54:38 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:22:19.997 03:54:38 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:22:19.997 03:54:38 -- common/autotest_common.sh@1187 -- # return 0 00:22:19.997 03:54:38 -- target/multiconnection.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:22:19.997 [global] 00:22:19.997 thread=1 00:22:19.997 invalidate=1 00:22:19.997 rw=read 00:22:19.997 time_based=1 00:22:19.997 
runtime=10 00:22:19.997 ioengine=libaio 00:22:19.997 direct=1 00:22:19.997 bs=262144 00:22:19.997 iodepth=64 00:22:19.997 norandommap=1 00:22:19.997 numjobs=1 00:22:19.997 00:22:19.997 [job0] 00:22:19.997 filename=/dev/nvme0n1 00:22:19.997 [job1] 00:22:19.997 filename=/dev/nvme10n1 00:22:19.997 [job2] 00:22:19.997 filename=/dev/nvme1n1 00:22:19.997 [job3] 00:22:19.997 filename=/dev/nvme2n1 00:22:19.997 [job4] 00:22:19.997 filename=/dev/nvme3n1 00:22:19.997 [job5] 00:22:19.997 filename=/dev/nvme4n1 00:22:19.997 [job6] 00:22:19.997 filename=/dev/nvme5n1 00:22:19.997 [job7] 00:22:19.997 filename=/dev/nvme6n1 00:22:19.997 [job8] 00:22:19.997 filename=/dev/nvme7n1 00:22:19.997 [job9] 00:22:19.997 filename=/dev/nvme8n1 00:22:19.997 [job10] 00:22:19.997 filename=/dev/nvme9n1 00:22:19.997 Could not set queue depth (nvme0n1) 00:22:19.997 Could not set queue depth (nvme10n1) 00:22:19.997 Could not set queue depth (nvme1n1) 00:22:19.997 Could not set queue depth (nvme2n1) 00:22:19.997 Could not set queue depth (nvme3n1) 00:22:19.997 Could not set queue depth (nvme4n1) 00:22:19.997 Could not set queue depth (nvme5n1) 00:22:19.997 Could not set queue depth (nvme6n1) 00:22:19.997 Could not set queue depth (nvme7n1) 00:22:19.997 Could not set queue depth (nvme8n1) 00:22:19.997 Could not set queue depth (nvme9n1) 00:22:19.997 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:19.997 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:19.997 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:19.997 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:19.997 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:19.997 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:19.997 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:19.998 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:19.998 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:19.998 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:19.998 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:19.998 fio-3.35 00:22:19.998 Starting 11 threads 00:22:32.209 00:22:32.209 job0: (groupid=0, jobs=1): err= 0: pid=2430646: Sun Jul 14 03:54:49 2024 00:22:32.209 read: IOPS=613, BW=153MiB/s (161MB/s)(1542MiB/10055msec) 00:22:32.209 slat (usec): min=9, max=126885, avg=984.85, stdev=4692.34 00:22:32.209 clat (usec): min=1385, max=533006, avg=103294.76, stdev=71125.49 00:22:32.209 lat (usec): min=1442, max=533020, avg=104279.61, stdev=71592.18 00:22:32.209 clat percentiles (msec): 00:22:32.209 | 1.00th=[ 7], 5.00th=[ 12], 10.00th=[ 20], 20.00th=[ 44], 00:22:32.209 | 30.00th=[ 70], 40.00th=[ 87], 50.00th=[ 99], 60.00th=[ 110], 00:22:32.209 | 70.00th=[ 123], 80.00th=[ 144], 90.00th=[ 174], 95.00th=[ 236], 00:22:32.209 | 99.00th=[ 376], 99.50th=[ 418], 99.90th=[ 535], 99.95th=[ 535], 00:22:32.209 | 99.99th=[ 535] 00:22:32.209 bw ( KiB/s): min=99328, max=229888, 
per=10.09%, avg=156241.10, stdev=39905.50, samples=20 00:22:32.209 iops : min= 388, max= 898, avg=610.30, stdev=155.86, samples=20 00:22:32.209 lat (msec) : 2=0.02%, 4=0.23%, 10=3.49%, 20=6.66%, 50=12.66% 00:22:32.209 lat (msec) : 100=28.73%, 250=44.56%, 500=3.24%, 750=0.41% 00:22:32.209 cpu : usr=0.24%, sys=1.94%, ctx=1605, majf=0, minf=4097 00:22:32.209 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:22:32.209 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:32.209 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:32.209 issued rwts: total=6167,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:32.209 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:32.209 job1: (groupid=0, jobs=1): err= 0: pid=2430647: Sun Jul 14 03:54:49 2024 00:22:32.209 read: IOPS=543, BW=136MiB/s (142MB/s)(1376MiB/10128msec) 00:22:32.209 slat (usec): min=9, max=206890, avg=1271.86, stdev=6908.88 00:22:32.209 clat (msec): min=2, max=647, avg=116.39, stdev=106.34 00:22:32.209 lat (msec): min=2, max=647, avg=117.66, stdev=107.21 00:22:32.209 clat percentiles (msec): 00:22:32.209 | 1.00th=[ 9], 5.00th=[ 20], 10.00th=[ 31], 20.00th=[ 43], 00:22:32.209 | 30.00th=[ 58], 40.00th=[ 67], 50.00th=[ 82], 60.00th=[ 109], 00:22:32.209 | 70.00th=[ 129], 80.00th=[ 155], 90.00th=[ 251], 95.00th=[ 355], 00:22:32.209 | 99.00th=[ 550], 99.50th=[ 584], 99.90th=[ 634], 99.95th=[ 634], 00:22:32.209 | 99.99th=[ 651] 00:22:32.209 bw ( KiB/s): min=37376, max=312320, per=8.99%, avg=139282.55, stdev=82228.61, samples=20 00:22:32.209 iops : min= 146, max= 1220, avg=544.00, stdev=321.22, samples=20 00:22:32.209 lat (msec) : 4=0.33%, 10=1.09%, 20=3.80%, 50=18.87%, 100=31.86% 00:22:32.209 lat (msec) : 250=34.02%, 500=8.85%, 750=1.18% 00:22:32.209 cpu : usr=0.35%, sys=1.44%, ctx=1358, majf=0, minf=4097 00:22:32.209 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:22:32.209 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:32.209 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:32.209 issued rwts: total=5505,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:32.209 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:32.209 job2: (groupid=0, jobs=1): err= 0: pid=2430648: Sun Jul 14 03:54:49 2024 00:22:32.210 read: IOPS=615, BW=154MiB/s (161MB/s)(1556MiB/10119msec) 00:22:32.210 slat (usec): min=8, max=341658, avg=884.21, stdev=8692.25 00:22:32.210 clat (usec): min=1138, max=749924, avg=103090.15, stdev=102300.20 00:22:32.210 lat (usec): min=1167, max=749940, avg=103974.36, stdev=103662.29 00:22:32.210 clat percentiles (msec): 00:22:32.210 | 1.00th=[ 6], 5.00th=[ 13], 10.00th=[ 19], 20.00th=[ 33], 00:22:32.210 | 30.00th=[ 40], 40.00th=[ 51], 50.00th=[ 65], 60.00th=[ 87], 00:22:32.210 | 70.00th=[ 117], 80.00th=[ 157], 90.00th=[ 241], 95.00th=[ 347], 00:22:32.210 | 99.00th=[ 485], 99.50th=[ 502], 99.90th=[ 523], 99.95th=[ 542], 00:22:32.210 | 99.99th=[ 751] 00:22:32.210 bw ( KiB/s): min=31744, max=340480, per=10.18%, avg=157681.45, stdev=89676.70, samples=20 00:22:32.210 iops : min= 124, max= 1330, avg=615.90, stdev=350.31, samples=20 00:22:32.210 lat (msec) : 2=0.14%, 4=0.48%, 10=3.07%, 20=7.44%, 50=29.00% 00:22:32.210 lat (msec) : 100=25.47%, 250=25.35%, 500=8.37%, 750=0.67% 00:22:32.210 cpu : usr=0.24%, sys=1.76%, ctx=1726, majf=0, minf=4097 00:22:32.210 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:22:32.210 submit : 0=0.0%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:32.210 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:32.210 issued rwts: total=6224,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:32.210 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:32.210 job3: (groupid=0, jobs=1): err= 0: pid=2430649: Sun Jul 14 03:54:49 2024 00:22:32.210 read: IOPS=460, BW=115MiB/s (121MB/s)(1159MiB/10056msec) 00:22:32.210 slat (usec): min=8, max=322834, avg=1253.54, stdev=8489.45 00:22:32.210 clat (usec): min=1894, max=572333, avg=137509.83, stdev=100958.56 00:22:32.210 lat (usec): min=1923, max=603537, avg=138763.38, stdev=102216.50 00:22:32.210 clat percentiles (msec): 00:22:32.210 | 1.00th=[ 6], 5.00th=[ 18], 10.00th=[ 33], 20.00th=[ 51], 00:22:32.210 | 30.00th=[ 74], 40.00th=[ 105], 50.00th=[ 131], 60.00th=[ 144], 00:22:32.210 | 70.00th=[ 165], 80.00th=[ 184], 90.00th=[ 257], 95.00th=[ 359], 00:22:32.210 | 99.00th=[ 485], 99.50th=[ 558], 99.90th=[ 567], 99.95th=[ 567], 00:22:32.210 | 99.99th=[ 575] 00:22:32.210 bw ( KiB/s): min=35840, max=204288, per=7.56%, avg=117031.70, stdev=42218.68, samples=20 00:22:32.210 iops : min= 140, max= 798, avg=457.15, stdev=164.92, samples=20 00:22:32.210 lat (msec) : 2=0.06%, 4=0.17%, 10=2.57%, 20=2.98%, 50=14.02% 00:22:32.210 lat (msec) : 100=18.86%, 250=51.07%, 500=9.32%, 750=0.95% 00:22:32.210 cpu : usr=0.27%, sys=1.40%, ctx=1274, majf=0, minf=4097 00:22:32.210 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 00:22:32.210 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:32.210 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:32.210 issued rwts: total=4635,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:32.210 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:32.210 job4: (groupid=0, jobs=1): err= 0: pid=2430650: Sun Jul 14 03:54:49 2024 00:22:32.210 read: IOPS=648, BW=162MiB/s (170MB/s)(1631MiB/10059msec) 00:22:32.210 slat (usec): min=10, max=195448, avg=1419.62, stdev=5810.59 00:22:32.210 clat (usec): min=1145, max=523158, avg=97207.42, stdev=64095.76 00:22:32.210 lat (usec): min=1200, max=523226, avg=98627.04, stdev=65063.28 00:22:32.210 clat percentiles (msec): 00:22:32.210 | 1.00th=[ 9], 5.00th=[ 24], 10.00th=[ 37], 20.00th=[ 45], 00:22:32.210 | 30.00th=[ 58], 40.00th=[ 72], 50.00th=[ 88], 60.00th=[ 103], 00:22:32.210 | 70.00th=[ 121], 80.00th=[ 136], 90.00th=[ 159], 95.00th=[ 197], 00:22:32.210 | 99.00th=[ 355], 99.50th=[ 355], 99.90th=[ 456], 99.95th=[ 456], 00:22:32.210 | 99.99th=[ 523] 00:22:32.210 bw ( KiB/s): min=39424, max=348344, per=10.68%, avg=165392.70, stdev=73928.37, samples=20 00:22:32.210 iops : min= 154, max= 1360, avg=646.00, stdev=288.68, samples=20 00:22:32.210 lat (msec) : 2=0.21%, 4=0.20%, 10=1.17%, 20=2.56%, 50=21.28% 00:22:32.210 lat (msec) : 100=32.73%, 250=38.43%, 500=3.40%, 750=0.02% 00:22:32.210 cpu : usr=0.30%, sys=2.05%, ctx=1431, majf=0, minf=4097 00:22:32.210 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:22:32.210 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:32.210 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:32.210 issued rwts: total=6523,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:32.210 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:32.210 job5: (groupid=0, jobs=1): err= 0: pid=2430651: Sun Jul 14 03:54:49 2024 00:22:32.210 read: IOPS=479, BW=120MiB/s (126MB/s)(1206MiB/10058msec) 
00:22:32.210 slat (usec): min=10, max=218948, avg=1977.01, stdev=7663.30 00:22:32.210 clat (msec): min=9, max=430, avg=131.42, stdev=64.93 00:22:32.210 lat (msec): min=9, max=482, avg=133.40, stdev=65.99 00:22:32.210 clat percentiles (msec): 00:22:32.210 | 1.00th=[ 29], 5.00th=[ 54], 10.00th=[ 65], 20.00th=[ 83], 00:22:32.210 | 30.00th=[ 95], 40.00th=[ 108], 50.00th=[ 124], 60.00th=[ 133], 00:22:32.210 | 70.00th=[ 146], 80.00th=[ 165], 90.00th=[ 209], 95.00th=[ 257], 00:22:32.210 | 99.00th=[ 359], 99.50th=[ 368], 99.90th=[ 409], 99.95th=[ 430], 00:22:32.210 | 99.99th=[ 430] 00:22:32.210 bw ( KiB/s): min=45568, max=221184, per=7.86%, avg=121813.65, stdev=51522.20, samples=20 00:22:32.210 iops : min= 178, max= 864, avg=475.80, stdev=201.23, samples=20 00:22:32.210 lat (msec) : 10=0.06%, 20=0.29%, 50=3.44%, 100=30.75%, 250=59.52% 00:22:32.210 lat (msec) : 500=5.93% 00:22:32.210 cpu : usr=0.27%, sys=1.74%, ctx=1027, majf=0, minf=4097 00:22:32.210 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:22:32.210 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:32.210 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:32.210 issued rwts: total=4822,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:32.210 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:32.210 job6: (groupid=0, jobs=1): err= 0: pid=2430652: Sun Jul 14 03:54:49 2024 00:22:32.210 read: IOPS=476, BW=119MiB/s (125MB/s)(1199MiB/10056msec) 00:22:32.210 slat (usec): min=9, max=238247, avg=1220.20, stdev=8259.37 00:22:32.210 clat (usec): min=1045, max=607542, avg=132910.39, stdev=110966.22 00:22:32.210 lat (usec): min=1100, max=659491, avg=134130.58, stdev=112423.84 00:22:32.210 clat percentiles (msec): 00:22:32.210 | 1.00th=[ 4], 5.00th=[ 9], 10.00th=[ 17], 20.00th=[ 32], 00:22:32.210 | 30.00th=[ 56], 40.00th=[ 96], 50.00th=[ 124], 60.00th=[ 138], 00:22:32.210 | 70.00th=[ 161], 80.00th=[ 194], 90.00th=[ 262], 95.00th=[ 363], 00:22:32.210 | 99.00th=[ 542], 99.50th=[ 550], 99.90th=[ 575], 99.95th=[ 600], 00:22:32.210 | 99.99th=[ 609] 00:22:32.210 bw ( KiB/s): min=32256, max=247296, per=7.82%, avg=121136.95, stdev=51713.15, samples=20 00:22:32.210 iops : min= 126, max= 966, avg=473.15, stdev=201.98, samples=20 00:22:32.210 lat (msec) : 2=0.27%, 4=0.77%, 10=4.36%, 20=9.64%, 50=13.39% 00:22:32.210 lat (msec) : 100=12.05%, 250=48.07%, 500=9.61%, 750=1.84% 00:22:32.210 cpu : usr=0.22%, sys=1.37%, ctx=1407, majf=0, minf=4097 00:22:32.210 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:22:32.210 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:32.210 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:32.210 issued rwts: total=4795,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:32.210 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:32.210 job7: (groupid=0, jobs=1): err= 0: pid=2430653: Sun Jul 14 03:54:49 2024 00:22:32.210 read: IOPS=665, BW=166MiB/s (174MB/s)(1685MiB/10126msec) 00:22:32.210 slat (usec): min=8, max=211047, avg=854.93, stdev=7169.31 00:22:32.210 clat (usec): min=1094, max=553377, avg=95251.11, stdev=95699.83 00:22:32.210 lat (usec): min=1124, max=557169, avg=96106.04, stdev=96861.66 00:22:32.210 clat percentiles (msec): 00:22:32.210 | 1.00th=[ 4], 5.00th=[ 7], 10.00th=[ 11], 20.00th=[ 26], 00:22:32.210 | 30.00th=[ 36], 40.00th=[ 46], 50.00th=[ 58], 60.00th=[ 79], 00:22:32.210 | 70.00th=[ 118], 80.00th=[ 148], 90.00th=[ 228], 95.00th=[ 305], 
00:22:32.210 | 99.00th=[ 430], 99.50th=[ 489], 99.90th=[ 542], 99.95th=[ 542], 00:22:32.210 | 99.99th=[ 550] 00:22:32.210 bw ( KiB/s): min=32256, max=393216, per=11.03%, avg=170873.15, stdev=98889.61, samples=20 00:22:32.210 iops : min= 126, max= 1536, avg=667.40, stdev=386.32, samples=20 00:22:32.210 lat (msec) : 2=0.19%, 4=1.51%, 10=7.14%, 20=7.11%, 50=28.84% 00:22:32.210 lat (msec) : 100=19.49%, 250=27.62%, 500=7.75%, 750=0.36% 00:22:32.210 cpu : usr=0.22%, sys=1.74%, ctx=1752, majf=0, minf=4097 00:22:32.210 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:22:32.210 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:32.210 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:32.210 issued rwts: total=6738,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:32.210 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:32.210 job8: (groupid=0, jobs=1): err= 0: pid=2430654: Sun Jul 14 03:54:49 2024 00:22:32.210 read: IOPS=615, BW=154MiB/s (161MB/s)(1559MiB/10127msec) 00:22:32.210 slat (usec): min=9, max=259003, avg=895.87, stdev=7022.55 00:22:32.210 clat (usec): min=1400, max=557227, avg=102945.36, stdev=100909.37 00:22:32.210 lat (usec): min=1455, max=685965, avg=103841.24, stdev=102108.74 00:22:32.210 clat percentiles (msec): 00:22:32.210 | 1.00th=[ 4], 5.00th=[ 8], 10.00th=[ 15], 20.00th=[ 29], 00:22:32.210 | 30.00th=[ 36], 40.00th=[ 47], 50.00th=[ 63], 60.00th=[ 92], 00:22:32.210 | 70.00th=[ 129], 80.00th=[ 176], 90.00th=[ 230], 95.00th=[ 334], 00:22:32.210 | 99.00th=[ 447], 99.50th=[ 472], 99.90th=[ 550], 99.95th=[ 558], 00:22:32.210 | 99.99th=[ 558] 00:22:32.210 bw ( KiB/s): min=34304, max=324608, per=10.20%, avg=158040.75, stdev=85218.22, samples=20 00:22:32.210 iops : min= 134, max= 1268, avg=617.25, stdev=332.84, samples=20 00:22:32.210 lat (msec) : 2=0.03%, 4=1.70%, 10=5.03%, 20=6.62%, 50=30.67% 00:22:32.210 lat (msec) : 100=18.79%, 250=28.43%, 500=8.43%, 750=0.29% 00:22:32.210 cpu : usr=0.24%, sys=1.65%, ctx=1778, majf=0, minf=4097 00:22:32.210 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:22:32.210 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:32.210 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:32.210 issued rwts: total=6237,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:32.210 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:32.210 job9: (groupid=0, jobs=1): err= 0: pid=2430655: Sun Jul 14 03:54:49 2024 00:22:32.210 read: IOPS=453, BW=113MiB/s (119MB/s)(1147MiB/10124msec) 00:22:32.210 slat (usec): min=9, max=172571, avg=1256.91, stdev=7605.85 00:22:32.210 clat (usec): min=1185, max=613934, avg=139883.39, stdev=124392.78 00:22:32.210 lat (usec): min=1239, max=613979, avg=141140.30, stdev=125639.01 00:22:32.210 clat percentiles (msec): 00:22:32.210 | 1.00th=[ 4], 5.00th=[ 11], 10.00th=[ 19], 20.00th=[ 37], 00:22:32.210 | 30.00th=[ 45], 40.00th=[ 82], 50.00th=[ 112], 60.00th=[ 140], 00:22:32.210 | 70.00th=[ 178], 80.00th=[ 218], 90.00th=[ 313], 95.00th=[ 439], 00:22:32.210 | 99.00th=[ 514], 99.50th=[ 550], 99.90th=[ 567], 99.95th=[ 592], 00:22:32.210 | 99.99th=[ 617] 00:22:32.210 bw ( KiB/s): min=39424, max=305664, per=7.48%, avg=115792.55, stdev=74897.59, samples=20 00:22:32.210 iops : min= 154, max= 1194, avg=452.30, stdev=292.55, samples=20 00:22:32.210 lat (msec) : 2=0.20%, 4=1.31%, 10=2.94%, 20=6.26%, 50=22.91% 00:22:32.210 lat (msec) : 100=11.58%, 250=39.61%, 500=13.45%, 750=1.74% 
00:22:32.211 cpu : usr=0.23%, sys=1.21%, ctx=1283, majf=0, minf=4097 00:22:32.211 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 00:22:32.211 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:32.211 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:32.211 issued rwts: total=4587,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:32.211 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:32.211 job10: (groupid=0, jobs=1): err= 0: pid=2430656: Sun Jul 14 03:54:49 2024 00:22:32.211 read: IOPS=498, BW=125MiB/s (131MB/s)(1261MiB/10128msec) 00:22:32.211 slat (usec): min=10, max=220917, avg=1895.40, stdev=8406.73 00:22:32.211 clat (msec): min=5, max=606, avg=126.50, stdev=101.56 00:22:32.211 lat (msec): min=5, max=606, avg=128.40, stdev=102.97 00:22:32.211 clat percentiles (msec): 00:22:32.211 | 1.00th=[ 16], 5.00th=[ 32], 10.00th=[ 35], 20.00th=[ 50], 00:22:32.211 | 30.00th=[ 72], 40.00th=[ 85], 50.00th=[ 104], 60.00th=[ 124], 00:22:32.211 | 70.00th=[ 142], 80.00th=[ 165], 90.00th=[ 234], 95.00th=[ 368], 00:22:32.211 | 99.00th=[ 535], 99.50th=[ 558], 99.90th=[ 575], 99.95th=[ 575], 00:22:32.211 | 99.99th=[ 609] 00:22:32.211 bw ( KiB/s): min=34816, max=359424, per=8.23%, avg=127473.10, stdev=83690.13, samples=20 00:22:32.211 iops : min= 136, max= 1404, avg=497.90, stdev=326.90, samples=20 00:22:32.211 lat (msec) : 10=0.24%, 20=1.76%, 50=18.74%, 100=27.54%, 250=42.55% 00:22:32.211 lat (msec) : 500=7.53%, 750=1.65% 00:22:32.211 cpu : usr=0.21%, sys=1.82%, ctx=1054, majf=0, minf=3722 00:22:32.211 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:22:32.211 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:32.211 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:32.211 issued rwts: total=5044,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:32.211 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:32.211 00:22:32.211 Run status group 0 (all jobs): 00:22:32.211 READ: bw=1513MiB/s (1586MB/s), 113MiB/s-166MiB/s (119MB/s-174MB/s), io=15.0GiB (16.1GB), run=10055-10128msec 00:22:32.211 00:22:32.211 Disk stats (read/write): 00:22:32.211 nvme0n1: ios=12060/0, merge=0/0, ticks=1237890/0, in_queue=1237890, util=96.95% 00:22:32.211 nvme10n1: ios=10860/0, merge=0/0, ticks=1234587/0, in_queue=1234587, util=97.17% 00:22:32.211 nvme1n1: ios=12251/0, merge=0/0, ticks=1233736/0, in_queue=1233736, util=97.48% 00:22:32.211 nvme2n1: ios=9033/0, merge=0/0, ticks=1237573/0, in_queue=1237573, util=97.65% 00:22:32.211 nvme3n1: ios=12765/0, merge=0/0, ticks=1227996/0, in_queue=1227996, util=97.73% 00:22:32.211 nvme4n1: ios=9382/0, merge=0/0, ticks=1227034/0, in_queue=1227034, util=98.10% 00:22:32.211 nvme5n1: ios=9355/0, merge=0/0, ticks=1234200/0, in_queue=1234200, util=98.28% 00:22:32.211 nvme6n1: ios=13233/0, merge=0/0, ticks=1237975/0, in_queue=1237975, util=98.40% 00:22:32.211 nvme7n1: ios=12275/0, merge=0/0, ticks=1222971/0, in_queue=1222971, util=98.86% 00:22:32.211 nvme8n1: ios=8994/0, merge=0/0, ticks=1233571/0, in_queue=1233571, util=99.07% 00:22:32.211 nvme9n1: ios=9909/0, merge=0/0, ticks=1221139/0, in_queue=1221139, util=99.22% 00:22:32.211 03:54:49 -- target/multiconnection.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:22:32.211 [global] 00:22:32.211 thread=1 00:22:32.211 invalidate=1 00:22:32.211 rw=randwrite 00:22:32.211 time_based=1 00:22:32.211 
runtime=10 00:22:32.211 ioengine=libaio 00:22:32.211 direct=1 00:22:32.211 bs=262144 00:22:32.211 iodepth=64 00:22:32.211 norandommap=1 00:22:32.211 numjobs=1 00:22:32.211 00:22:32.211 [job0] 00:22:32.211 filename=/dev/nvme0n1 00:22:32.211 [job1] 00:22:32.211 filename=/dev/nvme10n1 00:22:32.211 [job2] 00:22:32.211 filename=/dev/nvme1n1 00:22:32.211 [job3] 00:22:32.211 filename=/dev/nvme2n1 00:22:32.211 [job4] 00:22:32.211 filename=/dev/nvme3n1 00:22:32.211 [job5] 00:22:32.211 filename=/dev/nvme4n1 00:22:32.211 [job6] 00:22:32.211 filename=/dev/nvme5n1 00:22:32.211 [job7] 00:22:32.211 filename=/dev/nvme6n1 00:22:32.211 [job8] 00:22:32.211 filename=/dev/nvme7n1 00:22:32.211 [job9] 00:22:32.211 filename=/dev/nvme8n1 00:22:32.211 [job10] 00:22:32.211 filename=/dev/nvme9n1 00:22:32.211 Could not set queue depth (nvme0n1) 00:22:32.211 Could not set queue depth (nvme10n1) 00:22:32.211 Could not set queue depth (nvme1n1) 00:22:32.211 Could not set queue depth (nvme2n1) 00:22:32.211 Could not set queue depth (nvme3n1) 00:22:32.211 Could not set queue depth (nvme4n1) 00:22:32.211 Could not set queue depth (nvme5n1) 00:22:32.211 Could not set queue depth (nvme6n1) 00:22:32.211 Could not set queue depth (nvme7n1) 00:22:32.211 Could not set queue depth (nvme8n1) 00:22:32.211 Could not set queue depth (nvme9n1) 00:22:32.211 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:32.211 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:32.211 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:32.211 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:32.211 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:32.211 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:32.211 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:32.211 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:32.211 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:32.211 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:32.211 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:32.211 fio-3.35 00:22:32.211 Starting 11 threads 00:22:42.190 00:22:42.190 job0: (groupid=0, jobs=1): err= 0: pid=2431694: Sun Jul 14 03:55:00 2024 00:22:42.190 write: IOPS=595, BW=149MiB/s (156MB/s)(1512MiB/10159msec); 0 zone resets 00:22:42.190 slat (usec): min=24, max=148486, avg=1360.37, stdev=4569.08 00:22:42.190 clat (msec): min=2, max=332, avg=106.04, stdev=67.13 00:22:42.190 lat (msec): min=3, max=332, avg=107.40, stdev=67.89 00:22:42.190 clat percentiles (msec): 00:22:42.190 | 1.00th=[ 10], 5.00th=[ 22], 10.00th=[ 29], 20.00th=[ 48], 00:22:42.190 | 30.00th=[ 66], 40.00th=[ 69], 50.00th=[ 80], 60.00th=[ 117], 00:22:42.190 | 70.00th=[ 148], 80.00th=[ 171], 90.00th=[ 203], 95.00th=[ 228], 00:22:42.190 | 99.00th=[ 275], 99.50th=[ 300], 99.90th=[ 321], 99.95th=[ 326], 00:22:42.190 | 99.99th=[ 334] 
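The [global]/[job0..job10] file echoed a little above is the whole input fio receives from the wrapper script: the shared options once, then one section per NVMe namespace. A hand-rolled equivalent for two devices would look roughly like the sketch below; the device paths and the temporary filename are illustrative, not taken from this run:

# minimal stand-alone version of the job file the wrapper generates (two devices shown)
cat > /tmp/multiconn-example.fio <<'EOF'
[global]
ioengine=libaio
direct=1
thread=1
invalidate=1
rw=randwrite
bs=262144
iodepth=64
runtime=10
time_based=1
norandommap=1
numjobs=1

[job0]
filename=/dev/nvme0n1

[job1]
filename=/dev/nvme1n1
EOF
fio /tmp/multiconn-example.fio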
00:22:42.190 bw ( KiB/s): min=86016, max=290304, per=13.18%, avg=153225.50, stdev=68064.67, samples=20 00:22:42.190 iops : min= 336, max= 1134, avg=598.50, stdev=265.91, samples=20 00:22:42.190 lat (msec) : 4=0.07%, 10=1.11%, 20=2.96%, 50=16.37%, 100=36.38% 00:22:42.190 lat (msec) : 250=40.46%, 500=2.66% 00:22:42.190 cpu : usr=1.78%, sys=2.16%, ctx=2806, majf=0, minf=1 00:22:42.190 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:22:42.190 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:42.190 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:42.190 issued rwts: total=0,6048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:42.190 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:42.190 job1: (groupid=0, jobs=1): err= 0: pid=2431706: Sun Jul 14 03:55:00 2024 00:22:42.190 write: IOPS=481, BW=120MiB/s (126MB/s)(1214MiB/10075msec); 0 zone resets 00:22:42.190 slat (usec): min=19, max=56257, avg=1612.27, stdev=4084.47 00:22:42.190 clat (msec): min=2, max=372, avg=131.11, stdev=64.97 00:22:42.190 lat (msec): min=3, max=372, avg=132.72, stdev=65.82 00:22:42.190 clat percentiles (msec): 00:22:42.190 | 1.00th=[ 8], 5.00th=[ 20], 10.00th=[ 38], 20.00th=[ 68], 00:22:42.190 | 30.00th=[ 107], 40.00th=[ 126], 50.00th=[ 138], 60.00th=[ 150], 00:22:42.190 | 70.00th=[ 161], 80.00th=[ 176], 90.00th=[ 197], 95.00th=[ 239], 00:22:42.190 | 99.00th=[ 330], 99.50th=[ 355], 99.90th=[ 372], 99.95th=[ 372], 00:22:42.190 | 99.99th=[ 372] 00:22:42.190 bw ( KiB/s): min=67584, max=195072, per=10.55%, avg=122691.35, stdev=32106.97, samples=20 00:22:42.190 iops : min= 264, max= 762, avg=479.25, stdev=125.43, samples=20 00:22:42.190 lat (msec) : 4=0.08%, 10=1.57%, 20=3.54%, 50=8.79%, 100=13.51% 00:22:42.190 lat (msec) : 250=69.01%, 500=3.50% 00:22:42.190 cpu : usr=1.35%, sys=1.74%, ctx=2363, majf=0, minf=1 00:22:42.190 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:22:42.190 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:42.190 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:42.190 issued rwts: total=0,4856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:42.190 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:42.190 job2: (groupid=0, jobs=1): err= 0: pid=2431709: Sun Jul 14 03:55:00 2024 00:22:42.190 write: IOPS=349, BW=87.4MiB/s (91.6MB/s)(890MiB/10179msec); 0 zone resets 00:22:42.190 slat (usec): min=25, max=361266, avg=1874.79, stdev=7640.10 00:22:42.190 clat (msec): min=3, max=1179, avg=181.13, stdev=145.51 00:22:42.190 lat (msec): min=3, max=1186, avg=183.00, stdev=146.27 00:22:42.190 clat percentiles (msec): 00:22:42.190 | 1.00th=[ 20], 5.00th=[ 56], 10.00th=[ 78], 20.00th=[ 104], 00:22:42.190 | 30.00th=[ 124], 40.00th=[ 148], 50.00th=[ 167], 60.00th=[ 182], 00:22:42.190 | 70.00th=[ 194], 80.00th=[ 215], 90.00th=[ 251], 95.00th=[ 334], 00:22:42.190 | 99.00th=[ 1036], 99.50th=[ 1116], 99.90th=[ 1167], 99.95th=[ 1183], 00:22:42.190 | 99.99th=[ 1183] 00:22:42.190 bw ( KiB/s): min=22528, max=167936, per=7.69%, avg=89479.35, stdev=34366.92, samples=20 00:22:42.190 iops : min= 88, max= 656, avg=349.50, stdev=134.26, samples=20 00:22:42.190 lat (msec) : 4=0.03%, 10=0.28%, 20=0.82%, 50=2.95%, 100=14.90% 00:22:42.190 lat (msec) : 250=70.88%, 500=7.81%, 1000=1.01%, 2000=1.32% 00:22:42.190 cpu : usr=0.98%, sys=1.41%, ctx=1957, majf=0, minf=1 00:22:42.190 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.2% 00:22:42.190 
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:42.190 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:42.190 issued rwts: total=0,3558,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:42.190 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:42.190 job3: (groupid=0, jobs=1): err= 0: pid=2431710: Sun Jul 14 03:55:00 2024 00:22:42.190 write: IOPS=413, BW=103MiB/s (108MB/s)(1050MiB/10161msec); 0 zone resets 00:22:42.190 slat (usec): min=23, max=229853, avg=1425.25, stdev=6413.00 00:22:42.190 clat (msec): min=5, max=1199, avg=153.34, stdev=120.51 00:22:42.190 lat (msec): min=5, max=1199, avg=154.76, stdev=121.13 00:22:42.190 clat percentiles (msec): 00:22:42.190 | 1.00th=[ 13], 5.00th=[ 32], 10.00th=[ 52], 20.00th=[ 69], 00:22:42.190 | 30.00th=[ 82], 40.00th=[ 116], 50.00th=[ 133], 60.00th=[ 148], 00:22:42.190 | 70.00th=[ 169], 80.00th=[ 205], 90.00th=[ 259], 95.00th=[ 384], 00:22:42.190 | 99.00th=[ 558], 99.50th=[ 760], 99.90th=[ 1070], 99.95th=[ 1070], 00:22:42.190 | 99.99th=[ 1200] 00:22:42.190 bw ( KiB/s): min=54784, max=209408, per=9.10%, avg=105854.45, stdev=44026.01, samples=20 00:22:42.190 iops : min= 214, max= 818, avg=413.45, stdev=171.98, samples=20 00:22:42.190 lat (msec) : 10=0.48%, 20=2.48%, 50=6.67%, 100=25.89%, 250=52.41% 00:22:42.190 lat (msec) : 500=9.48%, 750=2.10%, 1000=0.19%, 2000=0.31% 00:22:42.190 cpu : usr=1.23%, sys=1.20%, ctx=2430, majf=0, minf=1 00:22:42.190 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:22:42.190 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:42.190 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:42.190 issued rwts: total=0,4198,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:42.190 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:42.190 job4: (groupid=0, jobs=1): err= 0: pid=2431711: Sun Jul 14 03:55:00 2024 00:22:42.190 write: IOPS=398, BW=99.6MiB/s (104MB/s)(1017MiB/10206msec); 0 zone resets 00:22:42.190 slat (usec): min=14, max=124791, avg=1291.51, stdev=5088.04 00:22:42.190 clat (msec): min=2, max=1345, avg=159.22, stdev=125.50 00:22:42.190 lat (msec): min=2, max=1345, avg=160.52, stdev=126.09 00:22:42.190 clat percentiles (msec): 00:22:42.190 | 1.00th=[ 10], 5.00th=[ 20], 10.00th=[ 47], 20.00th=[ 84], 00:22:42.190 | 30.00th=[ 106], 40.00th=[ 127], 50.00th=[ 140], 60.00th=[ 155], 00:22:42.190 | 70.00th=[ 167], 80.00th=[ 215], 90.00th=[ 259], 95.00th=[ 326], 00:22:42.190 | 99.00th=[ 550], 99.50th=[ 1116], 99.90th=[ 1150], 99.95th=[ 1334], 00:22:42.190 | 99.99th=[ 1351] 00:22:42.190 bw ( KiB/s): min=57856, max=168448, per=8.81%, avg=102498.10, stdev=31105.70, samples=20 00:22:42.190 iops : min= 226, max= 658, avg=400.35, stdev=121.53, samples=20 00:22:42.190 lat (msec) : 4=0.17%, 10=0.91%, 20=4.03%, 50=6.39%, 100=15.71% 00:22:42.190 lat (msec) : 250=60.14%, 500=10.11%, 750=1.84%, 1000=0.07%, 2000=0.61% 00:22:42.190 cpu : usr=1.19%, sys=1.24%, ctx=2787, majf=0, minf=1 00:22:42.190 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:22:42.190 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:42.190 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:42.190 issued rwts: total=0,4067,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:42.190 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:42.190 job5: (groupid=0, jobs=1): err= 0: pid=2431718: Sun Jul 14 03:55:00 2024 00:22:42.190 write: IOPS=411, 
BW=103MiB/s (108MB/s)(1037MiB/10075msec); 0 zone resets 00:22:42.190 slat (usec): min=14, max=824330, avg=1880.46, stdev=13498.54 00:22:42.190 clat (msec): min=2, max=1218, avg=153.51, stdev=135.53 00:22:42.190 lat (msec): min=3, max=1222, avg=155.39, stdev=136.75 00:22:42.190 clat percentiles (msec): 00:22:42.190 | 1.00th=[ 18], 5.00th=[ 32], 10.00th=[ 45], 20.00th=[ 83], 00:22:42.190 | 30.00th=[ 105], 40.00th=[ 123], 50.00th=[ 140], 60.00th=[ 155], 00:22:42.190 | 70.00th=[ 171], 80.00th=[ 188], 90.00th=[ 236], 95.00th=[ 271], 00:22:42.190 | 99.00th=[ 1045], 99.50th=[ 1133], 99.90th=[ 1217], 99.95th=[ 1217], 00:22:42.190 | 99.99th=[ 1217] 00:22:42.190 bw ( KiB/s): min=11264, max=181760, per=8.99%, avg=104580.60, stdev=46032.10, samples=20 00:22:42.190 iops : min= 44, max= 710, avg=408.50, stdev=179.82, samples=20 00:22:42.190 lat (msec) : 4=0.05%, 10=0.29%, 20=1.01%, 50=9.50%, 100=17.43% 00:22:42.190 lat (msec) : 250=64.63%, 500=5.57%, 1000=0.14%, 2000=1.37% 00:22:42.190 cpu : usr=1.17%, sys=1.69%, ctx=2067, majf=0, minf=1 00:22:42.190 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:22:42.190 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:42.190 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:42.190 issued rwts: total=0,4148,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:42.190 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:42.190 job6: (groupid=0, jobs=1): err= 0: pid=2431719: Sun Jul 14 03:55:00 2024 00:22:42.190 write: IOPS=385, BW=96.3MiB/s (101MB/s)(981MiB/10186msec); 0 zone resets 00:22:42.190 slat (usec): min=19, max=70548, avg=1851.53, stdev=4604.52 00:22:42.190 clat (usec): min=1520, max=521246, avg=164057.01, stdev=75431.54 00:22:42.190 lat (usec): min=1553, max=521357, avg=165908.54, stdev=76058.95 00:22:42.190 clat percentiles (msec): 00:22:42.190 | 1.00th=[ 9], 5.00th=[ 37], 10.00th=[ 74], 20.00th=[ 118], 00:22:42.190 | 30.00th=[ 126], 40.00th=[ 144], 50.00th=[ 157], 60.00th=[ 169], 00:22:42.190 | 70.00th=[ 186], 80.00th=[ 222], 90.00th=[ 257], 95.00th=[ 292], 00:22:42.190 | 99.00th=[ 405], 99.50th=[ 460], 99.90th=[ 523], 99.95th=[ 523], 00:22:42.190 | 99.99th=[ 523] 00:22:42.190 bw ( KiB/s): min=56832, max=223232, per=8.49%, avg=98767.85, stdev=37541.10, samples=20 00:22:42.191 iops : min= 222, max= 872, avg=385.75, stdev=146.66, samples=20 00:22:42.191 lat (msec) : 2=0.05%, 4=0.25%, 10=0.82%, 20=1.15%, 50=4.74% 00:22:42.191 lat (msec) : 100=6.35%, 250=73.69%, 500=12.70%, 750=0.25% 00:22:42.191 cpu : usr=1.21%, sys=1.10%, ctx=2022, majf=0, minf=1 00:22:42.191 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:22:42.191 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:42.191 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:42.191 issued rwts: total=0,3922,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:42.191 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:42.191 job7: (groupid=0, jobs=1): err= 0: pid=2431720: Sun Jul 14 03:55:00 2024 00:22:42.191 write: IOPS=340, BW=85.2MiB/s (89.3MB/s)(869MiB/10203msec); 0 zone resets 00:22:42.191 slat (usec): min=15, max=819527, avg=2148.96, stdev=18231.65 00:22:42.191 clat (usec): min=1900, max=1345.8k, avg=185556.13, stdev=220406.70 00:22:42.191 lat (usec): min=1950, max=1345.9k, avg=187705.09, stdev=221766.74 00:22:42.191 clat percentiles (msec): 00:22:42.191 | 1.00th=[ 12], 5.00th=[ 36], 10.00th=[ 54], 20.00th=[ 66], 00:22:42.191 | 30.00th=[ 
87], 40.00th=[ 114], 50.00th=[ 128], 60.00th=[ 150], 00:22:42.191 | 70.00th=[ 178], 80.00th=[ 211], 90.00th=[ 288], 95.00th=[ 936], 00:22:42.191 | 99.00th=[ 1150], 99.50th=[ 1167], 99.90th=[ 1334], 99.95th=[ 1351], 00:22:42.191 | 99.99th=[ 1351] 00:22:42.191 bw ( KiB/s): min= 8192, max=190464, per=7.91%, avg=91971.37, stdev=60626.85, samples=19 00:22:42.191 iops : min= 32, max= 744, avg=359.26, stdev=236.82, samples=19 00:22:42.191 lat (msec) : 2=0.03%, 4=0.09%, 10=0.75%, 20=1.52%, 50=5.44% 00:22:42.191 lat (msec) : 100=26.14%, 250=52.75%, 500=7.82%, 750=0.23%, 1000=2.19% 00:22:42.191 lat (msec) : 2000=3.05% 00:22:42.191 cpu : usr=1.04%, sys=1.13%, ctx=1580, majf=0, minf=1 00:22:42.191 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=0.9%, >=64=98.2% 00:22:42.191 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:42.191 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:42.191 issued rwts: total=0,3477,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:42.191 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:42.191 job8: (groupid=0, jobs=1): err= 0: pid=2431722: Sun Jul 14 03:55:00 2024 00:22:42.191 write: IOPS=443, BW=111MiB/s (116MB/s)(1122MiB/10113msec); 0 zone resets 00:22:42.191 slat (usec): min=16, max=89457, avg=1602.00, stdev=4905.67 00:22:42.191 clat (msec): min=3, max=562, avg=142.62, stdev=79.42 00:22:42.191 lat (msec): min=3, max=562, avg=144.22, stdev=80.21 00:22:42.191 clat percentiles (msec): 00:22:42.191 | 1.00th=[ 17], 5.00th=[ 45], 10.00th=[ 61], 20.00th=[ 74], 00:22:42.191 | 30.00th=[ 84], 40.00th=[ 92], 50.00th=[ 136], 60.00th=[ 167], 00:22:42.191 | 70.00th=[ 186], 80.00th=[ 203], 90.00th=[ 236], 95.00th=[ 275], 00:22:42.191 | 99.00th=[ 376], 99.50th=[ 451], 99.90th=[ 558], 99.95th=[ 558], 00:22:42.191 | 99.99th=[ 567] 00:22:42.191 bw ( KiB/s): min=67072, max=268288, per=9.74%, avg=113229.80, stdev=55577.95, samples=20 00:22:42.191 iops : min= 262, max= 1048, avg=442.25, stdev=217.10, samples=20 00:22:42.191 lat (msec) : 4=0.02%, 10=0.20%, 20=1.09%, 50=5.28%, 100=35.64% 00:22:42.191 lat (msec) : 250=49.87%, 500=7.45%, 750=0.45% 00:22:42.191 cpu : usr=1.44%, sys=1.18%, ctx=2237, majf=0, minf=1 00:22:42.191 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:22:42.191 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:42.191 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:42.191 issued rwts: total=0,4486,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:42.191 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:42.191 job9: (groupid=0, jobs=1): err= 0: pid=2431723: Sun Jul 14 03:55:00 2024 00:22:42.191 write: IOPS=355, BW=88.9MiB/s (93.3MB/s)(907MiB/10194msec); 0 zone resets 00:22:42.191 slat (usec): min=24, max=944821, avg=1820.61, stdev=17524.49 00:22:42.191 clat (msec): min=3, max=1177, avg=177.91, stdev=162.48 00:22:42.191 lat (msec): min=3, max=1230, avg=179.73, stdev=163.83 00:22:42.191 clat percentiles (msec): 00:22:42.191 | 1.00th=[ 11], 5.00th=[ 24], 10.00th=[ 41], 20.00th=[ 69], 00:22:42.191 | 30.00th=[ 108], 40.00th=[ 130], 50.00th=[ 155], 60.00th=[ 180], 00:22:42.191 | 70.00th=[ 209], 80.00th=[ 232], 90.00th=[ 279], 95.00th=[ 401], 00:22:42.191 | 99.00th=[ 1133], 99.50th=[ 1167], 99.90th=[ 1167], 99.95th=[ 1183], 00:22:42.191 | 99.99th=[ 1183] 00:22:42.191 bw ( KiB/s): min=39936, max=176640, per=8.26%, avg=96010.58, stdev=38111.52, samples=19 00:22:42.191 iops : min= 156, max= 690, avg=375.00, stdev=148.89, 
samples=19 00:22:42.191 lat (msec) : 4=0.06%, 10=0.72%, 20=3.03%, 50=9.62%, 100=14.75% 00:22:42.191 lat (msec) : 250=57.95%, 500=10.42%, 750=1.71%, 2000=1.74% 00:22:42.191 cpu : usr=1.11%, sys=1.27%, ctx=2356, majf=0, minf=1 00:22:42.191 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3% 00:22:42.191 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:42.191 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:42.191 issued rwts: total=0,3627,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:42.191 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:42.191 job10: (groupid=0, jobs=1): err= 0: pid=2431725: Sun Jul 14 03:55:00 2024 00:22:42.191 write: IOPS=391, BW=97.8MiB/s (103MB/s)(993MiB/10158msec); 0 zone resets 00:22:42.191 slat (usec): min=18, max=104148, avg=2055.46, stdev=4991.43 00:22:42.191 clat (msec): min=3, max=589, avg=161.49, stdev=89.10 00:22:42.191 lat (msec): min=4, max=589, avg=163.55, stdev=89.95 00:22:42.191 clat percentiles (msec): 00:22:42.191 | 1.00th=[ 7], 5.00th=[ 16], 10.00th=[ 42], 20.00th=[ 92], 00:22:42.191 | 30.00th=[ 126], 40.00th=[ 142], 50.00th=[ 157], 60.00th=[ 171], 00:22:42.191 | 70.00th=[ 186], 80.00th=[ 222], 90.00th=[ 266], 95.00th=[ 309], 00:22:42.191 | 99.00th=[ 510], 99.50th=[ 542], 99.90th=[ 584], 99.95th=[ 592], 00:22:42.191 | 99.99th=[ 592] 00:22:42.191 bw ( KiB/s): min=59392, max=179200, per=8.61%, avg=100096.00, stdev=33272.95, samples=20 00:22:42.191 iops : min= 232, max= 700, avg=391.00, stdev=129.97, samples=20 00:22:42.191 lat (msec) : 4=0.03%, 10=2.09%, 20=3.20%, 50=5.89%, 100=11.30% 00:22:42.191 lat (msec) : 250=64.38%, 500=11.93%, 750=1.18% 00:22:42.191 cpu : usr=1.08%, sys=1.24%, ctx=1822, majf=0, minf=1 00:22:42.191 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:22:42.191 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:42.191 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:42.191 issued rwts: total=0,3973,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:42.191 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:42.191 00:22:42.191 Run status group 0 (all jobs): 00:22:42.191 WRITE: bw=1136MiB/s (1191MB/s), 85.2MiB/s-149MiB/s (89.3MB/s-156MB/s), io=11.3GiB (12.2GB), run=10075-10206msec 00:22:42.191 00:22:42.191 Disk stats (read/write): 00:22:42.191 nvme0n1: ios=46/12089, merge=0/0, ticks=3877/1206575, in_queue=1210452, util=99.73% 00:22:42.191 nvme10n1: ios=45/9452, merge=0/0, ticks=2322/1212880, in_queue=1215202, util=99.97% 00:22:42.191 nvme1n1: ios=46/7098, merge=0/0, ticks=123/1244089, in_queue=1244212, util=98.12% 00:22:42.191 nvme2n1: ios=45/8394, merge=0/0, ticks=3845/1245037, in_queue=1248882, util=99.91% 00:22:42.191 nvme3n1: ios=0/8084, merge=0/0, ticks=0/1238743, in_queue=1238743, util=97.75% 00:22:42.191 nvme4n1: ios=0/8028, merge=0/0, ticks=0/1218309, in_queue=1218309, util=98.06% 00:22:42.191 nvme5n1: ios=38/7824, merge=0/0, ticks=757/1242642, in_queue=1243399, util=99.95% 00:22:42.191 nvme6n1: ios=0/6896, merge=0/0, ticks=0/1223316, in_queue=1223316, util=98.35% 00:22:42.191 nvme7n1: ios=0/8795, merge=0/0, ticks=0/1199849, in_queue=1199849, util=98.80% 00:22:42.191 nvme8n1: ios=40/7224, merge=0/0, ticks=1077/1241651, in_queue=1242728, util=99.94% 00:22:42.191 nvme9n1: ios=0/7941, merge=0/0, ticks=0/1242521, in_queue=1242521, util=99.10% 00:22:42.191 03:55:00 -- target/multiconnection.sh@36 -- # sync 00:22:42.191 03:55:00 -- target/multiconnection.sh@37 
-- # seq 1 11 00:22:42.191 03:55:00 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:42.191 03:55:00 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:22:42.191 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:22:42.191 03:55:00 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:22:42.191 03:55:00 -- common/autotest_common.sh@1198 -- # local i=0 00:22:42.191 03:55:00 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:22:42.191 03:55:00 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK1 00:22:42.191 03:55:00 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:22:42.191 03:55:00 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK1 00:22:42.191 03:55:00 -- common/autotest_common.sh@1210 -- # return 0 00:22:42.191 03:55:00 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:42.191 03:55:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:42.191 03:55:00 -- common/autotest_common.sh@10 -- # set +x 00:22:42.191 03:55:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:42.191 03:55:00 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:42.191 03:55:00 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:22:42.191 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:22:42.191 03:55:00 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:22:42.191 03:55:00 -- common/autotest_common.sh@1198 -- # local i=0 00:22:42.191 03:55:00 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:22:42.191 03:55:00 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK2 00:22:42.191 03:55:00 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:22:42.191 03:55:00 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK2 00:22:42.191 03:55:00 -- common/autotest_common.sh@1210 -- # return 0 00:22:42.191 03:55:00 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:22:42.191 03:55:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:42.191 03:55:00 -- common/autotest_common.sh@10 -- # set +x 00:22:42.191 03:55:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:42.191 03:55:00 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:42.191 03:55:00 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:22:42.450 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:22:42.450 03:55:01 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:22:42.450 03:55:01 -- common/autotest_common.sh@1198 -- # local i=0 00:22:42.450 03:55:01 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:22:42.450 03:55:01 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK3 00:22:42.450 03:55:01 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:22:42.450 03:55:01 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK3 00:22:42.450 03:55:01 -- common/autotest_common.sh@1210 -- # return 0 00:22:42.450 03:55:01 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:22:42.450 03:55:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:42.450 03:55:01 -- common/autotest_common.sh@10 -- # set +x 00:22:42.450 03:55:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:42.450 03:55:01 -- target/multiconnection.sh@37 -- # for i in $(seq 1 
$NVMF_SUBSYS) 00:22:42.450 03:55:01 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:22:42.709 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:22:42.709 03:55:01 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:22:42.709 03:55:01 -- common/autotest_common.sh@1198 -- # local i=0 00:22:42.709 03:55:01 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:22:42.709 03:55:01 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK4 00:22:42.709 03:55:01 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:22:42.709 03:55:01 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK4 00:22:42.709 03:55:01 -- common/autotest_common.sh@1210 -- # return 0 00:22:42.709 03:55:01 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:22:42.709 03:55:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:42.709 03:55:01 -- common/autotest_common.sh@10 -- # set +x 00:22:42.709 03:55:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:42.709 03:55:01 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:42.709 03:55:01 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:22:42.967 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:22:42.967 03:55:01 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:22:42.967 03:55:01 -- common/autotest_common.sh@1198 -- # local i=0 00:22:42.967 03:55:01 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:22:42.967 03:55:01 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK5 00:22:42.967 03:55:01 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:22:42.967 03:55:01 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK5 00:22:42.967 03:55:01 -- common/autotest_common.sh@1210 -- # return 0 00:22:42.967 03:55:01 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:22:42.967 03:55:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:42.967 03:55:01 -- common/autotest_common.sh@10 -- # set +x 00:22:42.967 03:55:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:42.967 03:55:01 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:42.967 03:55:01 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:22:42.967 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:22:42.967 03:55:01 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:22:42.967 03:55:01 -- common/autotest_common.sh@1198 -- # local i=0 00:22:42.967 03:55:01 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:22:42.967 03:55:01 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK6 00:22:43.225 03:55:01 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:22:43.225 03:55:01 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK6 00:22:43.225 03:55:01 -- common/autotest_common.sh@1210 -- # return 0 00:22:43.225 03:55:01 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:22:43.225 03:55:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:43.225 03:55:01 -- common/autotest_common.sh@10 -- # set +x 00:22:43.225 03:55:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:43.225 03:55:01 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:43.225 03:55:01 -- target/multiconnection.sh@38 -- # nvme disconnect -n 
nqn.2016-06.io.spdk:cnode7 00:22:43.484 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:22:43.484 03:55:02 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:22:43.484 03:55:02 -- common/autotest_common.sh@1198 -- # local i=0 00:22:43.484 03:55:02 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:22:43.484 03:55:02 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK7 00:22:43.484 03:55:02 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:22:43.484 03:55:02 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK7 00:22:43.484 03:55:02 -- common/autotest_common.sh@1210 -- # return 0 00:22:43.484 03:55:02 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:22:43.484 03:55:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:43.484 03:55:02 -- common/autotest_common.sh@10 -- # set +x 00:22:43.484 03:55:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:43.484 03:55:02 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:43.484 03:55:02 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:22:43.484 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:22:43.484 03:55:02 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:22:43.484 03:55:02 -- common/autotest_common.sh@1198 -- # local i=0 00:22:43.484 03:55:02 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:22:43.484 03:55:02 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK8 00:22:43.484 03:55:02 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:22:43.484 03:55:02 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK8 00:22:43.484 03:55:02 -- common/autotest_common.sh@1210 -- # return 0 00:22:43.484 03:55:02 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:22:43.484 03:55:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:43.484 03:55:02 -- common/autotest_common.sh@10 -- # set +x 00:22:43.484 03:55:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:43.484 03:55:02 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:43.484 03:55:02 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:22:43.484 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:22:43.484 03:55:02 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:22:43.484 03:55:02 -- common/autotest_common.sh@1198 -- # local i=0 00:22:43.484 03:55:02 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:22:43.484 03:55:02 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK9 00:22:43.484 03:55:02 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:22:43.484 03:55:02 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK9 00:22:43.484 03:55:02 -- common/autotest_common.sh@1210 -- # return 0 00:22:43.484 03:55:02 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:22:43.484 03:55:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:43.484 03:55:02 -- common/autotest_common.sh@10 -- # set +x 00:22:43.484 03:55:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:43.484 03:55:02 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:43.484 03:55:02 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:22:43.742 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 
controller(s) 00:22:43.742 03:55:02 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:22:43.742 03:55:02 -- common/autotest_common.sh@1198 -- # local i=0 00:22:43.742 03:55:02 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:22:43.742 03:55:02 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK10 00:22:43.742 03:55:02 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:22:43.742 03:55:02 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK10 00:22:43.742 03:55:02 -- common/autotest_common.sh@1210 -- # return 0 00:22:43.742 03:55:02 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:22:43.742 03:55:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:43.742 03:55:02 -- common/autotest_common.sh@10 -- # set +x 00:22:43.742 03:55:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:43.742 03:55:02 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:43.742 03:55:02 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:22:43.742 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:22:43.742 03:55:02 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:22:43.742 03:55:02 -- common/autotest_common.sh@1198 -- # local i=0 00:22:43.742 03:55:02 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:22:43.742 03:55:02 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK11 00:22:44.003 03:55:02 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:22:44.003 03:55:02 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK11 00:22:44.003 03:55:02 -- common/autotest_common.sh@1210 -- # return 0 00:22:44.003 03:55:02 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:22:44.003 03:55:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:44.003 03:55:02 -- common/autotest_common.sh@10 -- # set +x 00:22:44.003 03:55:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:44.003 03:55:02 -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:22:44.003 03:55:02 -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:22:44.003 03:55:02 -- target/multiconnection.sh@47 -- # nvmftestfini 00:22:44.003 03:55:02 -- nvmf/common.sh@476 -- # nvmfcleanup 00:22:44.003 03:55:02 -- nvmf/common.sh@116 -- # sync 00:22:44.003 03:55:02 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:22:44.003 03:55:02 -- nvmf/common.sh@119 -- # set +e 00:22:44.003 03:55:02 -- nvmf/common.sh@120 -- # for i in {1..20} 00:22:44.003 03:55:02 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:22:44.003 rmmod nvme_tcp 00:22:44.003 rmmod nvme_fabrics 00:22:44.003 rmmod nvme_keyring 00:22:44.003 03:55:02 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:22:44.003 03:55:02 -- nvmf/common.sh@123 -- # set -e 00:22:44.003 03:55:02 -- nvmf/common.sh@124 -- # return 0 00:22:44.003 03:55:02 -- nvmf/common.sh@477 -- # '[' -n 2425615 ']' 00:22:44.003 03:55:02 -- nvmf/common.sh@478 -- # killprocess 2425615 00:22:44.003 03:55:02 -- common/autotest_common.sh@926 -- # '[' -z 2425615 ']' 00:22:44.003 03:55:02 -- common/autotest_common.sh@930 -- # kill -0 2425615 00:22:44.003 03:55:02 -- common/autotest_common.sh@931 -- # uname 00:22:44.003 03:55:02 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:44.003 03:55:02 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2425615 00:22:44.003 03:55:02 -- 
common/autotest_common.sh@932 -- # process_name=reactor_0 00:22:44.003 03:55:02 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:22:44.003 03:55:02 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2425615' 00:22:44.003 killing process with pid 2425615 00:22:44.003 03:55:02 -- common/autotest_common.sh@945 -- # kill 2425615 00:22:44.003 03:55:02 -- common/autotest_common.sh@950 -- # wait 2425615 00:22:44.569 03:55:03 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:22:44.569 03:55:03 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:22:44.569 03:55:03 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:22:44.569 03:55:03 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:44.569 03:55:03 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:22:44.569 03:55:03 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:44.569 03:55:03 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:44.569 03:55:03 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:46.492 03:55:05 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:22:46.492 00:22:46.492 real 1m1.510s 00:22:46.492 user 3m19.143s 00:22:46.492 sys 0m23.234s 00:22:46.492 03:55:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:46.492 03:55:05 -- common/autotest_common.sh@10 -- # set +x 00:22:46.492 ************************************ 00:22:46.492 END TEST nvmf_multiconnection 00:22:46.492 ************************************ 00:22:46.492 03:55:05 -- nvmf/nvmf.sh@66 -- # run_test nvmf_initiator_timeout /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:22:46.492 03:55:05 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:22:46.492 03:55:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:46.492 03:55:05 -- common/autotest_common.sh@10 -- # set +x 00:22:46.492 ************************************ 00:22:46.492 START TEST nvmf_initiator_timeout 00:22:46.492 ************************************ 00:22:46.492 03:55:05 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:22:46.751 * Looking for test storage... 
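The cleanup traced above follows a simple pattern: disconnect each of the 11 initiator controllers, delete the matching subsystem on the target, then stop the target and unload the kernel modules. In outline, with rpc_cmd being the harness's wrapper around scripts/rpc.py and the pid and interface names taken from this run:

# outline of the multiconnection teardown logged above
for i in $(seq 1 11); do
    nvme disconnect -n "nqn.2016-06.io.spdk:cnode$i"              # drop the initiator-side controller
    rpc_cmd nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode$i"   # remove the subsystem on the target
done
kill "$nvmfpid"                 # nvmf_tgt (pid 2425615 in this run)
modprobe -v -r nvme-tcp         # unloads nvme_tcp, nvme_fabrics and nvme_keyring
ip -4 addr flush cvl_0_1        # clear the test interface before the next test starts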
00:22:46.751 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:46.751 03:55:05 -- target/initiator_timeout.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:46.751 03:55:05 -- nvmf/common.sh@7 -- # uname -s 00:22:46.751 03:55:05 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:46.751 03:55:05 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:46.751 03:55:05 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:46.751 03:55:05 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:46.751 03:55:05 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:46.751 03:55:05 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:46.751 03:55:05 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:46.751 03:55:05 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:46.751 03:55:05 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:46.751 03:55:05 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:46.751 03:55:05 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:46.751 03:55:05 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:46.751 03:55:05 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:46.751 03:55:05 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:46.751 03:55:05 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:46.751 03:55:05 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:46.751 03:55:05 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:46.751 03:55:05 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:46.751 03:55:05 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:46.751 03:55:05 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:46.751 03:55:05 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:46.751 03:55:05 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:46.751 03:55:05 -- paths/export.sh@5 -- # export PATH 00:22:46.751 03:55:05 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:46.751 03:55:05 -- nvmf/common.sh@46 -- # : 0 00:22:46.751 03:55:05 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:22:46.752 03:55:05 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:22:46.752 03:55:05 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:22:46.752 03:55:05 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:46.752 03:55:05 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:46.752 03:55:05 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:22:46.752 03:55:05 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:22:46.752 03:55:05 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:22:46.752 03:55:05 -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:46.752 03:55:05 -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:46.752 03:55:05 -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:22:46.752 03:55:05 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:22:46.752 03:55:05 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:46.752 03:55:05 -- nvmf/common.sh@436 -- # prepare_net_devs 00:22:46.752 03:55:05 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:22:46.752 03:55:05 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:22:46.752 03:55:05 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:46.752 03:55:05 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:46.752 03:55:05 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:46.752 03:55:05 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:22:46.752 03:55:05 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:22:46.752 03:55:05 -- nvmf/common.sh@284 -- # xtrace_disable 00:22:46.752 03:55:05 -- common/autotest_common.sh@10 -- # set +x 00:22:48.651 03:55:07 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:22:48.651 03:55:07 -- nvmf/common.sh@290 -- # pci_devs=() 00:22:48.651 03:55:07 -- nvmf/common.sh@290 -- # local -a pci_devs 00:22:48.651 03:55:07 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:22:48.651 03:55:07 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:22:48.651 03:55:07 -- nvmf/common.sh@292 -- # pci_drivers=() 00:22:48.651 03:55:07 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:22:48.651 03:55:07 -- nvmf/common.sh@294 -- # net_devs=() 00:22:48.651 03:55:07 -- nvmf/common.sh@294 -- # local -ga net_devs 00:22:48.651 
03:55:07 -- nvmf/common.sh@295 -- # e810=() 00:22:48.651 03:55:07 -- nvmf/common.sh@295 -- # local -ga e810 00:22:48.651 03:55:07 -- nvmf/common.sh@296 -- # x722=() 00:22:48.651 03:55:07 -- nvmf/common.sh@296 -- # local -ga x722 00:22:48.651 03:55:07 -- nvmf/common.sh@297 -- # mlx=() 00:22:48.651 03:55:07 -- nvmf/common.sh@297 -- # local -ga mlx 00:22:48.651 03:55:07 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:48.651 03:55:07 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:48.651 03:55:07 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:48.651 03:55:07 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:48.651 03:55:07 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:48.651 03:55:07 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:48.651 03:55:07 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:48.651 03:55:07 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:48.651 03:55:07 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:48.651 03:55:07 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:48.651 03:55:07 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:48.651 03:55:07 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:22:48.651 03:55:07 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:22:48.651 03:55:07 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:22:48.651 03:55:07 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:22:48.651 03:55:07 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:22:48.651 03:55:07 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:22:48.651 03:55:07 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:22:48.651 03:55:07 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:22:48.651 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:22:48.651 03:55:07 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:22:48.651 03:55:07 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:22:48.651 03:55:07 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:48.651 03:55:07 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:48.651 03:55:07 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:22:48.651 03:55:07 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:22:48.651 03:55:07 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:22:48.651 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:22:48.651 03:55:07 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:22:48.651 03:55:07 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:22:48.651 03:55:07 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:48.651 03:55:07 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:48.651 03:55:07 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:22:48.651 03:55:07 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:22:48.651 03:55:07 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:22:48.651 03:55:07 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:22:48.651 03:55:07 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:22:48.651 03:55:07 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:48.651 03:55:07 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:22:48.651 03:55:07 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:48.651 03:55:07 -- nvmf/common.sh@388 -- # echo 'Found net devices under 
0000:0a:00.0: cvl_0_0' 00:22:48.651 Found net devices under 0000:0a:00.0: cvl_0_0 00:22:48.651 03:55:07 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:22:48.651 03:55:07 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:22:48.651 03:55:07 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:48.651 03:55:07 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:22:48.651 03:55:07 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:48.651 03:55:07 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:22:48.651 Found net devices under 0000:0a:00.1: cvl_0_1 00:22:48.651 03:55:07 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:22:48.651 03:55:07 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:22:48.651 03:55:07 -- nvmf/common.sh@402 -- # is_hw=yes 00:22:48.651 03:55:07 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:22:48.651 03:55:07 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:22:48.651 03:55:07 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:22:48.651 03:55:07 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:48.651 03:55:07 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:48.651 03:55:07 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:48.651 03:55:07 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:22:48.651 03:55:07 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:48.651 03:55:07 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:48.651 03:55:07 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:22:48.651 03:55:07 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:48.651 03:55:07 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:48.651 03:55:07 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:22:48.651 03:55:07 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:22:48.651 03:55:07 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:22:48.651 03:55:07 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:48.651 03:55:07 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:48.651 03:55:07 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:48.651 03:55:07 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:22:48.651 03:55:07 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:48.651 03:55:07 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:48.651 03:55:07 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:48.651 03:55:07 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:22:48.651 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:48.651 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.125 ms 00:22:48.651 00:22:48.651 --- 10.0.0.2 ping statistics --- 00:22:48.651 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:48.651 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms 00:22:48.651 03:55:07 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:48.651 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:48.651 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.109 ms 00:22:48.651 00:22:48.651 --- 10.0.0.1 ping statistics --- 00:22:48.651 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:48.651 rtt min/avg/max/mdev = 0.109/0.109/0.109/0.000 ms 00:22:48.651 03:55:07 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:48.651 03:55:07 -- nvmf/common.sh@410 -- # return 0 00:22:48.651 03:55:07 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:22:48.651 03:55:07 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:48.651 03:55:07 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:22:48.651 03:55:07 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:22:48.651 03:55:07 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:48.651 03:55:07 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:22:48.651 03:55:07 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:22:48.651 03:55:07 -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:22:48.651 03:55:07 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:22:48.651 03:55:07 -- common/autotest_common.sh@712 -- # xtrace_disable 00:22:48.651 03:55:07 -- common/autotest_common.sh@10 -- # set +x 00:22:48.651 03:55:07 -- nvmf/common.sh@469 -- # nvmfpid=2434948 00:22:48.651 03:55:07 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:48.651 03:55:07 -- nvmf/common.sh@470 -- # waitforlisten 2434948 00:22:48.651 03:55:07 -- common/autotest_common.sh@819 -- # '[' -z 2434948 ']' 00:22:48.651 03:55:07 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:48.651 03:55:07 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:48.651 03:55:07 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:48.651 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:48.651 03:55:07 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:48.651 03:55:07 -- common/autotest_common.sh@10 -- # set +x 00:22:48.651 [2024-07-14 03:55:07.576747] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:22:48.651 [2024-07-14 03:55:07.576843] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:48.911 EAL: No free 2048 kB hugepages reported on node 1 00:22:48.911 [2024-07-14 03:55:07.649503] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:48.911 [2024-07-14 03:55:07.738633] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:48.911 [2024-07-14 03:55:07.738819] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:48.911 [2024-07-14 03:55:07.738838] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:48.911 [2024-07-14 03:55:07.738854] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
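The nvmf_tcp_init sequence traced above builds a single-host NVMe/TCP test topology: one port of the E810 pair (cvl_0_0) is moved into a private network namespace and addressed as the target side (10.0.0.2), while its sibling (cvl_0_1) stays in the default namespace as the initiator side (10.0.0.1). A condensed sketch of that setup follows; the interface names and addresses are the ones from this run, not defaults, so treat them as host-specific.

#!/usr/bin/env bash
# Sketch of the namespace setup performed by nvmf_tcp_init in the trace above.
TARGET_IF=cvl_0_0        # moved into the target namespace
INITIATOR_IF=cvl_0_1     # stays in the default (initiator) namespace
NS=cvl_0_0_ns_spdk

ip -4 addr flush "$TARGET_IF"
ip -4 addr flush "$INITIATOR_IF"

ip netns add "$NS"
ip link set "$TARGET_IF" netns "$NS"

ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"

ip link set "$INITIATOR_IF" up
ip netns exec "$NS" ip link set "$TARGET_IF" up
ip netns exec "$NS" ip link set lo up

# Let NVMe/TCP traffic (port 4420) back in through the initiator-side port.
iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT

# Verify reachability in both directions before starting the target.
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1

With the namespace in place, nvmf_tgt itself runs inside it, which is why the target launch and the target-side ip/ping commands in the trace are prefixed with "ip netns exec cvl_0_0_ns_spdk".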
00:22:48.911 [2024-07-14 03:55:07.738938] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:48.911 [2024-07-14 03:55:07.738996] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:48.911 [2024-07-14 03:55:07.739113] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:22:48.911 [2024-07-14 03:55:07.739115] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:49.844 03:55:08 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:49.844 03:55:08 -- common/autotest_common.sh@852 -- # return 0 00:22:49.844 03:55:08 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:22:49.844 03:55:08 -- common/autotest_common.sh@718 -- # xtrace_disable 00:22:49.844 03:55:08 -- common/autotest_common.sh@10 -- # set +x 00:22:49.844 03:55:08 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:49.844 03:55:08 -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:22:49.844 03:55:08 -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:49.844 03:55:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:49.844 03:55:08 -- common/autotest_common.sh@10 -- # set +x 00:22:49.844 Malloc0 00:22:49.844 03:55:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:49.844 03:55:08 -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:22:49.844 03:55:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:49.844 03:55:08 -- common/autotest_common.sh@10 -- # set +x 00:22:49.844 Delay0 00:22:49.844 03:55:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:49.844 03:55:08 -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:49.844 03:55:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:49.844 03:55:08 -- common/autotest_common.sh@10 -- # set +x 00:22:49.844 [2024-07-14 03:55:08.544989] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:49.844 03:55:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:49.844 03:55:08 -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:22:49.844 03:55:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:49.844 03:55:08 -- common/autotest_common.sh@10 -- # set +x 00:22:49.844 03:55:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:49.844 03:55:08 -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:22:49.844 03:55:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:49.844 03:55:08 -- common/autotest_common.sh@10 -- # set +x 00:22:49.844 03:55:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:49.844 03:55:08 -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:49.844 03:55:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:49.844 03:55:08 -- common/autotest_common.sh@10 -- # set +x 00:22:49.844 [2024-07-14 03:55:08.573200] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:49.844 03:55:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:49.845 03:55:08 -- target/initiator_timeout.sh@29 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:22:50.442 03:55:09 -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:22:50.442 03:55:09 -- common/autotest_common.sh@1177 -- # local i=0 00:22:50.442 03:55:09 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:22:50.442 03:55:09 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:22:50.442 03:55:09 -- common/autotest_common.sh@1184 -- # sleep 2 00:22:52.344 03:55:11 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:22:52.344 03:55:11 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:22:52.344 03:55:11 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:22:52.344 03:55:11 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:22:52.344 03:55:11 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:22:52.344 03:55:11 -- common/autotest_common.sh@1187 -- # return 0 00:22:52.344 03:55:11 -- target/initiator_timeout.sh@35 -- # fio_pid=2435398 00:22:52.344 03:55:11 -- target/initiator_timeout.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:22:52.344 03:55:11 -- target/initiator_timeout.sh@37 -- # sleep 3 00:22:52.602 [global] 00:22:52.602 thread=1 00:22:52.602 invalidate=1 00:22:52.602 rw=write 00:22:52.602 time_based=1 00:22:52.602 runtime=60 00:22:52.602 ioengine=libaio 00:22:52.602 direct=1 00:22:52.602 bs=4096 00:22:52.602 iodepth=1 00:22:52.602 norandommap=0 00:22:52.602 numjobs=1 00:22:52.602 00:22:52.602 verify_dump=1 00:22:52.602 verify_backlog=512 00:22:52.602 verify_state_save=0 00:22:52.602 do_verify=1 00:22:52.602 verify=crc32c-intel 00:22:52.602 [job0] 00:22:52.602 filename=/dev/nvme0n1 00:22:52.602 Could not set queue depth (nvme0n1) 00:22:52.602 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:22:52.602 fio-3.35 00:22:52.602 Starting 1 thread 00:22:55.896 03:55:14 -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:22:55.896 03:55:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:55.896 03:55:14 -- common/autotest_common.sh@10 -- # set +x 00:22:55.896 true 00:22:55.896 03:55:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:55.896 03:55:14 -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:22:55.896 03:55:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:55.896 03:55:14 -- common/autotest_common.sh@10 -- # set +x 00:22:55.896 true 00:22:55.896 03:55:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:55.896 03:55:14 -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:22:55.896 03:55:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:55.896 03:55:14 -- common/autotest_common.sh@10 -- # set +x 00:22:55.896 true 00:22:55.896 03:55:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:55.896 03:55:14 -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:22:55.896 03:55:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:55.896 03:55:14 -- common/autotest_common.sh@10 -- # set +x 00:22:55.896 true 00:22:55.896 03:55:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 
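At this point in the trace the initiator has connected to nqn.2016-06.io.spdk:cnode1, a 60-second fio write job is running in the background against /dev/nvme0n1, and all four latency knobs of the Delay0 delay bdev have just been raised above the initiator's I/O timeout. The intent of initiator_timeout.sh is that outstanding I/O stalls while the latencies are high and the job still rides through to completion with err=0 once they are restored. A minimal sketch of that latency manipulation, written against scripts/rpc.py rather than the suite's rpc_cmd wrapper (the plain rpc.py form and its path are assumptions; the bdev name and values are taken from the trace), is:

RPC=./scripts/rpc.py   # assumed path to the SPDK RPC client, run from the spdk repo root

# Raise average and p99 latencies of the delay bdev above the initiator
# timeout so outstanding I/O from the fio job stalls (microseconds, as traced).
$RPC bdev_delay_update_latency Delay0 avg_read  31000000
$RPC bdev_delay_update_latency Delay0 avg_write 31000000
$RPC bdev_delay_update_latency Delay0 p99_read  31000000
$RPC bdev_delay_update_latency Delay0 p99_write 310000000

sleep 3   # hold the stall window, as the trace does next

# Drop the latencies back to 30 us; the queued I/O completes and the
# 60-second fio run is expected to finish with err=0, as reported further below.
$RPC bdev_delay_update_latency Delay0 avg_read  30
$RPC bdev_delay_update_latency Delay0 avg_write 30
$RPC bdev_delay_update_latency Delay0 p99_read  30
$RPC bdev_delay_update_latency Delay0 p99_write 30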
00:22:55.896 03:55:14 -- target/initiator_timeout.sh@45 -- # sleep 3 00:22:58.425 03:55:17 -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:22:58.425 03:55:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:58.425 03:55:17 -- common/autotest_common.sh@10 -- # set +x 00:22:58.425 true 00:22:58.425 03:55:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:58.425 03:55:17 -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:22:58.425 03:55:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:58.425 03:55:17 -- common/autotest_common.sh@10 -- # set +x 00:22:58.425 true 00:22:58.425 03:55:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:58.425 03:55:17 -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:22:58.425 03:55:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:58.425 03:55:17 -- common/autotest_common.sh@10 -- # set +x 00:22:58.425 true 00:22:58.425 03:55:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:58.425 03:55:17 -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:22:58.425 03:55:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:58.425 03:55:17 -- common/autotest_common.sh@10 -- # set +x 00:22:58.425 true 00:22:58.425 03:55:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:58.425 03:55:17 -- target/initiator_timeout.sh@53 -- # fio_status=0 00:22:58.425 03:55:17 -- target/initiator_timeout.sh@54 -- # wait 2435398 00:23:54.652 00:23:54.652 job0: (groupid=0, jobs=1): err= 0: pid=2435483: Sun Jul 14 03:56:11 2024 00:23:54.652 read: IOPS=24, BW=97.6KiB/s (100.0kB/s)(5860KiB/60030msec) 00:23:54.652 slat (usec): min=5, max=11788, avg=28.83, stdev=307.62 00:23:54.652 clat (usec): min=358, max=41276k, avg=40610.05, stdev=1078235.46 00:23:54.652 lat (usec): min=370, max=41276k, avg=40638.88, stdev=1078235.48 00:23:54.652 clat percentiles (usec): 00:23:54.652 | 1.00th=[ 367], 5.00th=[ 388], 10.00th=[ 412], 00:23:54.652 | 20.00th=[ 437], 30.00th=[ 449], 40.00th=[ 461], 00:23:54.652 | 50.00th=[ 474], 60.00th=[ 494], 70.00th=[ 586], 00:23:54.652 | 80.00th=[ 41157], 90.00th=[ 41157], 95.00th=[ 41157], 00:23:54.652 | 99.00th=[ 42206], 99.50th=[ 42206], 99.90th=[ 42206], 00:23:54.652 | 99.95th=[17112761], 99.99th=[17112761] 00:23:54.652 write: IOPS=25, BW=102KiB/s (105kB/s)(6144KiB/60030msec); 0 zone resets 00:23:54.652 slat (nsec): min=5809, max=68367, avg=16055.45, stdev=10833.09 00:23:54.652 clat (usec): min=229, max=489, avg=294.82, stdev=51.14 00:23:54.652 lat (usec): min=235, max=533, avg=310.87, stdev=54.40 00:23:54.652 clat percentiles (usec): 00:23:54.652 | 1.00th=[ 237], 5.00th=[ 243], 10.00th=[ 247], 20.00th=[ 251], 00:23:54.652 | 30.00th=[ 258], 40.00th=[ 265], 50.00th=[ 277], 60.00th=[ 289], 00:23:54.652 | 70.00th=[ 318], 80.00th=[ 338], 90.00th=[ 379], 95.00th=[ 383], 00:23:54.652 | 99.00th=[ 433], 99.50th=[ 441], 99.90th=[ 478], 99.95th=[ 490], 00:23:54.652 | 99.99th=[ 490] 00:23:54.652 bw ( KiB/s): min= 5488, max= 6800, per=100.00%, avg=6144.00, stdev=927.72, samples=2 00:23:54.652 iops : min= 1372, max= 1700, avg=1536.00, stdev=231.93, samples=2 00:23:54.652 lat (usec) : 250=8.76%, 500=73.24%, 750=3.57% 00:23:54.652 lat (msec) : 50=14.40%, >=2000=0.03% 00:23:54.652 cpu : usr=0.07%, sys=0.08%, ctx=3002, majf=0, minf=2 00:23:54.652 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:54.652 submit : 
0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:54.652 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:54.652 issued rwts: total=1465,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:54.652 latency : target=0, window=0, percentile=100.00%, depth=1 00:23:54.652 00:23:54.652 Run status group 0 (all jobs): 00:23:54.652 READ: bw=97.6KiB/s (100.0kB/s), 97.6KiB/s-97.6KiB/s (100.0kB/s-100.0kB/s), io=5860KiB (6001kB), run=60030-60030msec 00:23:54.652 WRITE: bw=102KiB/s (105kB/s), 102KiB/s-102KiB/s (105kB/s-105kB/s), io=6144KiB (6291kB), run=60030-60030msec 00:23:54.652 00:23:54.652 Disk stats (read/write): 00:23:54.652 nvme0n1: ios=1560/1536, merge=0/0, ticks=18143/432, in_queue=18575, util=99.79% 00:23:54.652 03:56:11 -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:23:54.652 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:23:54.652 03:56:11 -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:23:54.652 03:56:11 -- common/autotest_common.sh@1198 -- # local i=0 00:23:54.652 03:56:11 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:23:54.652 03:56:11 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:23:54.652 03:56:11 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:23:54.652 03:56:11 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:23:54.652 03:56:11 -- common/autotest_common.sh@1210 -- # return 0 00:23:54.652 03:56:11 -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:23:54.652 03:56:11 -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:23:54.652 nvmf hotplug test: fio successful as expected 00:23:54.652 03:56:11 -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:54.652 03:56:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:54.652 03:56:11 -- common/autotest_common.sh@10 -- # set +x 00:23:54.652 03:56:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:54.652 03:56:11 -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:23:54.652 03:56:11 -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:23:54.652 03:56:11 -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:23:54.652 03:56:11 -- nvmf/common.sh@476 -- # nvmfcleanup 00:23:54.652 03:56:11 -- nvmf/common.sh@116 -- # sync 00:23:54.652 03:56:11 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:23:54.652 03:56:11 -- nvmf/common.sh@119 -- # set +e 00:23:54.652 03:56:11 -- nvmf/common.sh@120 -- # for i in {1..20} 00:23:54.652 03:56:11 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:23:54.652 rmmod nvme_tcp 00:23:54.652 rmmod nvme_fabrics 00:23:54.652 rmmod nvme_keyring 00:23:54.652 03:56:11 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:23:54.652 03:56:11 -- nvmf/common.sh@123 -- # set -e 00:23:54.652 03:56:11 -- nvmf/common.sh@124 -- # return 0 00:23:54.652 03:56:11 -- nvmf/common.sh@477 -- # '[' -n 2434948 ']' 00:23:54.652 03:56:11 -- nvmf/common.sh@478 -- # killprocess 2434948 00:23:54.652 03:56:11 -- common/autotest_common.sh@926 -- # '[' -z 2434948 ']' 00:23:54.652 03:56:11 -- common/autotest_common.sh@930 -- # kill -0 2434948 00:23:54.652 03:56:11 -- common/autotest_common.sh@931 -- # uname 00:23:54.652 03:56:11 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:23:54.652 03:56:11 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 
2434948 00:23:54.652 03:56:11 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:23:54.652 03:56:11 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:23:54.652 03:56:11 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2434948' 00:23:54.652 killing process with pid 2434948 00:23:54.652 03:56:11 -- common/autotest_common.sh@945 -- # kill 2434948 00:23:54.652 03:56:11 -- common/autotest_common.sh@950 -- # wait 2434948 00:23:54.652 03:56:12 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:23:54.652 03:56:12 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:23:54.652 03:56:12 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:23:54.652 03:56:12 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:54.652 03:56:12 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:23:54.652 03:56:12 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:54.652 03:56:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:54.652 03:56:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:55.590 03:56:14 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:23:55.590 00:23:55.590 real 1m8.787s 00:23:55.590 user 4m14.006s 00:23:55.590 sys 0m6.326s 00:23:55.590 03:56:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:55.590 03:56:14 -- common/autotest_common.sh@10 -- # set +x 00:23:55.590 ************************************ 00:23:55.590 END TEST nvmf_initiator_timeout 00:23:55.590 ************************************ 00:23:55.590 03:56:14 -- nvmf/nvmf.sh@69 -- # [[ phy == phy ]] 00:23:55.590 03:56:14 -- nvmf/nvmf.sh@70 -- # '[' tcp = tcp ']' 00:23:55.590 03:56:14 -- nvmf/nvmf.sh@71 -- # gather_supported_nvmf_pci_devs 00:23:55.590 03:56:14 -- nvmf/common.sh@284 -- # xtrace_disable 00:23:55.590 03:56:14 -- common/autotest_common.sh@10 -- # set +x 00:23:57.491 03:56:16 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:23:57.491 03:56:16 -- nvmf/common.sh@290 -- # pci_devs=() 00:23:57.491 03:56:16 -- nvmf/common.sh@290 -- # local -a pci_devs 00:23:57.491 03:56:16 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:23:57.491 03:56:16 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:23:57.491 03:56:16 -- nvmf/common.sh@292 -- # pci_drivers=() 00:23:57.491 03:56:16 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:23:57.491 03:56:16 -- nvmf/common.sh@294 -- # net_devs=() 00:23:57.491 03:56:16 -- nvmf/common.sh@294 -- # local -ga net_devs 00:23:57.491 03:56:16 -- nvmf/common.sh@295 -- # e810=() 00:23:57.491 03:56:16 -- nvmf/common.sh@295 -- # local -ga e810 00:23:57.491 03:56:16 -- nvmf/common.sh@296 -- # x722=() 00:23:57.491 03:56:16 -- nvmf/common.sh@296 -- # local -ga x722 00:23:57.491 03:56:16 -- nvmf/common.sh@297 -- # mlx=() 00:23:57.491 03:56:16 -- nvmf/common.sh@297 -- # local -ga mlx 00:23:57.491 03:56:16 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:57.491 03:56:16 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:57.491 03:56:16 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:57.491 03:56:16 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:57.491 03:56:16 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:57.491 03:56:16 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:57.491 03:56:16 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:57.491 03:56:16 -- nvmf/common.sh@313 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:57.491 03:56:16 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:57.491 03:56:16 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:57.491 03:56:16 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:57.491 03:56:16 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:23:57.491 03:56:16 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:23:57.491 03:56:16 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:23:57.491 03:56:16 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:23:57.491 03:56:16 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:23:57.491 03:56:16 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:23:57.491 03:56:16 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:23:57.491 03:56:16 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:23:57.491 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:23:57.491 03:56:16 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:23:57.491 03:56:16 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:23:57.491 03:56:16 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:57.491 03:56:16 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:57.491 03:56:16 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:23:57.491 03:56:16 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:23:57.491 03:56:16 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:23:57.491 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:23:57.491 03:56:16 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:23:57.491 03:56:16 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:23:57.491 03:56:16 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:57.491 03:56:16 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:57.491 03:56:16 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:23:57.491 03:56:16 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:23:57.491 03:56:16 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:23:57.491 03:56:16 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:23:57.491 03:56:16 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:23:57.491 03:56:16 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:57.491 03:56:16 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:23:57.491 03:56:16 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:57.491 03:56:16 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:23:57.491 Found net devices under 0000:0a:00.0: cvl_0_0 00:23:57.491 03:56:16 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:23:57.491 03:56:16 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:23:57.491 03:56:16 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:57.491 03:56:16 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:23:57.491 03:56:16 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:57.491 03:56:16 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:23:57.491 Found net devices under 0000:0a:00.1: cvl_0_1 00:23:57.491 03:56:16 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:23:57.491 03:56:16 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:23:57.491 03:56:16 -- nvmf/nvmf.sh@72 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:57.491 03:56:16 -- nvmf/nvmf.sh@73 -- # (( 2 > 0 )) 00:23:57.491 03:56:16 -- nvmf/nvmf.sh@74 -- # run_test nvmf_perf_adq 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:23:57.491 03:56:16 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:23:57.491 03:56:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:23:57.491 03:56:16 -- common/autotest_common.sh@10 -- # set +x 00:23:57.491 ************************************ 00:23:57.491 START TEST nvmf_perf_adq 00:23:57.491 ************************************ 00:23:57.491 03:56:16 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:23:57.491 * Looking for test storage... 00:23:57.491 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:57.491 03:56:16 -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:57.491 03:56:16 -- nvmf/common.sh@7 -- # uname -s 00:23:57.491 03:56:16 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:57.491 03:56:16 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:57.491 03:56:16 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:57.491 03:56:16 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:57.491 03:56:16 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:57.491 03:56:16 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:57.491 03:56:16 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:57.491 03:56:16 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:57.491 03:56:16 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:57.491 03:56:16 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:57.491 03:56:16 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:57.491 03:56:16 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:23:57.491 03:56:16 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:57.491 03:56:16 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:57.491 03:56:16 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:57.491 03:56:16 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:57.491 03:56:16 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:57.491 03:56:16 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:57.491 03:56:16 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:57.491 03:56:16 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:57.492 03:56:16 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:57.492 03:56:16 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:57.492 03:56:16 -- paths/export.sh@5 -- # export PATH 00:23:57.492 03:56:16 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:57.492 03:56:16 -- nvmf/common.sh@46 -- # : 0 00:23:57.492 03:56:16 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:23:57.492 03:56:16 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:23:57.492 03:56:16 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:23:57.492 03:56:16 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:57.492 03:56:16 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:57.492 03:56:16 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:23:57.492 03:56:16 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:23:57.492 03:56:16 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:23:57.492 03:56:16 -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:23:57.492 03:56:16 -- nvmf/common.sh@284 -- # xtrace_disable 00:23:57.492 03:56:16 -- common/autotest_common.sh@10 -- # set +x 00:23:59.397 03:56:18 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:23:59.397 03:56:18 -- nvmf/common.sh@290 -- # pci_devs=() 00:23:59.397 03:56:18 -- nvmf/common.sh@290 -- # local -a pci_devs 00:23:59.397 03:56:18 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:23:59.397 03:56:18 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:23:59.397 03:56:18 -- nvmf/common.sh@292 -- # pci_drivers=() 00:23:59.397 03:56:18 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:23:59.397 03:56:18 -- nvmf/common.sh@294 -- # net_devs=() 00:23:59.397 03:56:18 -- nvmf/common.sh@294 -- # local -ga net_devs 00:23:59.397 03:56:18 -- nvmf/common.sh@295 -- # e810=() 00:23:59.397 03:56:18 -- nvmf/common.sh@295 -- # local -ga e810 00:23:59.397 03:56:18 -- nvmf/common.sh@296 -- # x722=() 00:23:59.397 03:56:18 -- nvmf/common.sh@296 -- # local -ga x722 00:23:59.397 03:56:18 -- nvmf/common.sh@297 -- # mlx=() 00:23:59.397 03:56:18 -- nvmf/common.sh@297 -- # local 
-ga mlx 00:23:59.397 03:56:18 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:59.397 03:56:18 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:59.397 03:56:18 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:59.397 03:56:18 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:59.397 03:56:18 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:59.397 03:56:18 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:59.397 03:56:18 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:59.397 03:56:18 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:59.397 03:56:18 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:59.397 03:56:18 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:59.397 03:56:18 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:59.397 03:56:18 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:23:59.397 03:56:18 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:23:59.397 03:56:18 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:23:59.397 03:56:18 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:23:59.397 03:56:18 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:23:59.397 03:56:18 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:23:59.397 03:56:18 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:23:59.397 03:56:18 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:23:59.397 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:23:59.397 03:56:18 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:23:59.397 03:56:18 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:23:59.397 03:56:18 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:59.397 03:56:18 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:59.397 03:56:18 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:23:59.397 03:56:18 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:23:59.397 03:56:18 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:23:59.397 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:23:59.397 03:56:18 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:23:59.397 03:56:18 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:23:59.397 03:56:18 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:59.397 03:56:18 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:59.397 03:56:18 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:23:59.397 03:56:18 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:23:59.397 03:56:18 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:23:59.397 03:56:18 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:23:59.397 03:56:18 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:23:59.397 03:56:18 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:59.397 03:56:18 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:23:59.397 03:56:18 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:59.397 03:56:18 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:23:59.397 Found net devices under 0000:0a:00.0: cvl_0_0 00:23:59.397 03:56:18 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:23:59.397 03:56:18 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:23:59.397 03:56:18 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:23:59.397 03:56:18 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:23:59.397 03:56:18 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:59.397 03:56:18 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:23:59.397 Found net devices under 0000:0a:00.1: cvl_0_1 00:23:59.397 03:56:18 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:23:59.397 03:56:18 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:23:59.397 03:56:18 -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:59.397 03:56:18 -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:23:59.397 03:56:18 -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:23:59.397 03:56:18 -- target/perf_adq.sh@59 -- # adq_reload_driver 00:23:59.397 03:56:18 -- target/perf_adq.sh@52 -- # rmmod ice 00:23:59.965 03:56:18 -- target/perf_adq.sh@53 -- # modprobe ice 00:24:01.871 03:56:20 -- target/perf_adq.sh@54 -- # sleep 5 00:24:07.150 03:56:25 -- target/perf_adq.sh@67 -- # nvmftestinit 00:24:07.150 03:56:25 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:24:07.150 03:56:25 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:07.150 03:56:25 -- nvmf/common.sh@436 -- # prepare_net_devs 00:24:07.150 03:56:25 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:24:07.150 03:56:25 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:24:07.150 03:56:25 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:07.150 03:56:25 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:07.150 03:56:25 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:07.150 03:56:25 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:24:07.150 03:56:25 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:24:07.150 03:56:25 -- nvmf/common.sh@284 -- # xtrace_disable 00:24:07.150 03:56:25 -- common/autotest_common.sh@10 -- # set +x 00:24:07.150 03:56:25 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:24:07.150 03:56:25 -- nvmf/common.sh@290 -- # pci_devs=() 00:24:07.150 03:56:25 -- nvmf/common.sh@290 -- # local -a pci_devs 00:24:07.150 03:56:25 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:24:07.150 03:56:25 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:24:07.150 03:56:25 -- nvmf/common.sh@292 -- # pci_drivers=() 00:24:07.150 03:56:25 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:24:07.150 03:56:25 -- nvmf/common.sh@294 -- # net_devs=() 00:24:07.150 03:56:25 -- nvmf/common.sh@294 -- # local -ga net_devs 00:24:07.150 03:56:25 -- nvmf/common.sh@295 -- # e810=() 00:24:07.150 03:56:25 -- nvmf/common.sh@295 -- # local -ga e810 00:24:07.150 03:56:25 -- nvmf/common.sh@296 -- # x722=() 00:24:07.150 03:56:25 -- nvmf/common.sh@296 -- # local -ga x722 00:24:07.150 03:56:25 -- nvmf/common.sh@297 -- # mlx=() 00:24:07.150 03:56:25 -- nvmf/common.sh@297 -- # local -ga mlx 00:24:07.150 03:56:25 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:07.150 03:56:25 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:07.150 03:56:25 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:07.150 03:56:25 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:07.150 03:56:25 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:07.150 03:56:25 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:07.150 03:56:25 -- nvmf/common.sh@311 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:07.150 03:56:25 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:07.150 03:56:25 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:07.150 03:56:25 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:07.150 03:56:25 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:07.150 03:56:25 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:24:07.150 03:56:25 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:24:07.150 03:56:25 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:24:07.150 03:56:25 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:24:07.150 03:56:25 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:24:07.150 03:56:25 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:24:07.150 03:56:25 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:07.150 03:56:25 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:07.150 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:07.150 03:56:25 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:24:07.150 03:56:25 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:24:07.150 03:56:25 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:07.150 03:56:25 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:07.150 03:56:25 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:24:07.150 03:56:25 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:07.150 03:56:25 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:07.150 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:07.150 03:56:25 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:24:07.151 03:56:25 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:24:07.151 03:56:25 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:07.151 03:56:25 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:07.151 03:56:25 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:24:07.151 03:56:25 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:24:07.151 03:56:25 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:24:07.151 03:56:25 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:24:07.151 03:56:25 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:07.151 03:56:25 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:07.151 03:56:25 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:07.151 03:56:25 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:07.151 03:56:25 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:07.151 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:07.151 03:56:25 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:07.151 03:56:25 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:07.151 03:56:25 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:07.151 03:56:25 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:07.151 03:56:25 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:07.151 03:56:25 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:24:07.151 Found net devices under 0000:0a:00.1: cvl_0_1 00:24:07.151 03:56:25 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:07.151 03:56:25 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:24:07.151 03:56:25 -- nvmf/common.sh@402 -- # is_hw=yes 00:24:07.151 03:56:25 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:24:07.151 03:56:25 -- 
nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:24:07.151 03:56:25 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:24:07.151 03:56:25 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:07.151 03:56:25 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:07.151 03:56:25 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:07.151 03:56:25 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:24:07.151 03:56:25 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:07.151 03:56:25 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:07.151 03:56:25 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:24:07.151 03:56:25 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:07.151 03:56:25 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:07.151 03:56:25 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:24:07.151 03:56:25 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:24:07.151 03:56:25 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:24:07.151 03:56:25 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:07.151 03:56:25 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:07.151 03:56:25 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:07.151 03:56:25 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:24:07.151 03:56:25 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:07.151 03:56:25 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:07.151 03:56:25 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:07.151 03:56:25 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:24:07.151 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:07.151 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.189 ms 00:24:07.151 00:24:07.151 --- 10.0.0.2 ping statistics --- 00:24:07.151 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:07.151 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:24:07.151 03:56:25 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:07.151 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:07.151 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.187 ms 00:24:07.151 00:24:07.151 --- 10.0.0.1 ping statistics --- 00:24:07.151 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:07.151 rtt min/avg/max/mdev = 0.187/0.187/0.187/0.000 ms 00:24:07.151 03:56:25 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:07.151 03:56:25 -- nvmf/common.sh@410 -- # return 0 00:24:07.151 03:56:25 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:24:07.151 03:56:25 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:07.151 03:56:25 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:24:07.151 03:56:25 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:24:07.151 03:56:25 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:07.151 03:56:25 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:24:07.151 03:56:25 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:24:07.151 03:56:25 -- target/perf_adq.sh@68 -- # nvmfappstart -m 0xF --wait-for-rpc 00:24:07.151 03:56:25 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:24:07.151 03:56:25 -- common/autotest_common.sh@712 -- # xtrace_disable 00:24:07.151 03:56:25 -- common/autotest_common.sh@10 -- # set +x 00:24:07.151 03:56:25 -- nvmf/common.sh@469 -- # nvmfpid=2447370 00:24:07.151 03:56:25 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:24:07.151 03:56:25 -- nvmf/common.sh@470 -- # waitforlisten 2447370 00:24:07.151 03:56:25 -- common/autotest_common.sh@819 -- # '[' -z 2447370 ']' 00:24:07.151 03:56:25 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:07.151 03:56:25 -- common/autotest_common.sh@824 -- # local max_retries=100 00:24:07.151 03:56:25 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:07.151 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:07.151 03:56:25 -- common/autotest_common.sh@828 -- # xtrace_disable 00:24:07.151 03:56:25 -- common/autotest_common.sh@10 -- # set +x 00:24:07.151 [2024-07-14 03:56:25.862300] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:24:07.151 [2024-07-14 03:56:25.862392] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:07.151 EAL: No free 2048 kB hugepages reported on node 1 00:24:07.151 [2024-07-14 03:56:25.930301] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:07.151 [2024-07-14 03:56:26.018428] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:24:07.151 [2024-07-14 03:56:26.018590] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:07.151 [2024-07-14 03:56:26.018607] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:07.151 [2024-07-14 03:56:26.018619] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
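The perf_adq run that follows starts nvmf_tgt with --wait-for-rpc so socket options can be applied before the framework initializes, then builds a Malloc-backed subsystem and drives it with spdk_nvme_perf (4 KiB random reads, queue depth 64) from four dedicated cores; nvmf_get_stats is queried afterwards to confirm that each of the target's four poll groups owns exactly one I/O queue pair. A condensed sketch of the target-side configuration, again phrased as plain scripts/rpc.py calls rather than the suite's rpc_cmd wrapper (that form and the rpc.py path are assumptions; the options mirror the trace below), is:

RPC=./scripts/rpc.py   # assumed path; the target was started with --wait-for-rpc

# Socket-level options must land before framework_start_init.
$RPC sock_impl_set_options -i posix --enable-placement-id 0 --enable-zerocopy-send-server
$RPC framework_start_init

# TCP transport with an explicit socket priority, then a Malloc-backed namespace.
$RPC nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0
$RPC bdev_malloc_create 64 512 -b Malloc1
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The qpair-distribution check further below (jq selecting poll groups with current_io_qpairs == 1, expecting a count of 4) is what gates the run before the perf numbers are accepted.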
00:24:07.151 [2024-07-14 03:56:26.018686] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:07.151 [2024-07-14 03:56:26.018703] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:07.151 [2024-07-14 03:56:26.018804] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:07.151 [2024-07-14 03:56:26.018806] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:07.151 03:56:26 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:24:07.151 03:56:26 -- common/autotest_common.sh@852 -- # return 0 00:24:07.151 03:56:26 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:24:07.151 03:56:26 -- common/autotest_common.sh@718 -- # xtrace_disable 00:24:07.151 03:56:26 -- common/autotest_common.sh@10 -- # set +x 00:24:07.408 03:56:26 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:07.408 03:56:26 -- target/perf_adq.sh@69 -- # adq_configure_nvmf_target 0 00:24:07.408 03:56:26 -- target/perf_adq.sh@42 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:24:07.408 03:56:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:07.408 03:56:26 -- common/autotest_common.sh@10 -- # set +x 00:24:07.408 03:56:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:07.408 03:56:26 -- target/perf_adq.sh@43 -- # rpc_cmd framework_start_init 00:24:07.408 03:56:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:07.408 03:56:26 -- common/autotest_common.sh@10 -- # set +x 00:24:07.408 03:56:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:07.408 03:56:26 -- target/perf_adq.sh@44 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:24:07.408 03:56:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:07.408 03:56:26 -- common/autotest_common.sh@10 -- # set +x 00:24:07.408 [2024-07-14 03:56:26.216806] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:07.408 03:56:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:07.408 03:56:26 -- target/perf_adq.sh@45 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:24:07.408 03:56:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:07.408 03:56:26 -- common/autotest_common.sh@10 -- # set +x 00:24:07.408 Malloc1 00:24:07.408 03:56:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:07.408 03:56:26 -- target/perf_adq.sh@46 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:07.408 03:56:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:07.408 03:56:26 -- common/autotest_common.sh@10 -- # set +x 00:24:07.408 03:56:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:07.408 03:56:26 -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:24:07.408 03:56:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:07.408 03:56:26 -- common/autotest_common.sh@10 -- # set +x 00:24:07.408 03:56:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:07.408 03:56:26 -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:07.408 03:56:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:07.408 03:56:26 -- common/autotest_common.sh@10 -- # set +x 00:24:07.408 [2024-07-14 03:56:26.270077] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:07.408 03:56:26 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:07.408 03:56:26 -- target/perf_adq.sh@73 -- # perfpid=2447401 00:24:07.408 03:56:26 -- target/perf_adq.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:24:07.408 03:56:26 -- target/perf_adq.sh@74 -- # sleep 2 00:24:07.408 EAL: No free 2048 kB hugepages reported on node 1 00:24:09.934 03:56:28 -- target/perf_adq.sh@76 -- # rpc_cmd nvmf_get_stats 00:24:09.934 03:56:28 -- target/perf_adq.sh@76 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:24:09.934 03:56:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:09.934 03:56:28 -- common/autotest_common.sh@10 -- # set +x 00:24:09.934 03:56:28 -- target/perf_adq.sh@76 -- # wc -l 00:24:09.934 03:56:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:09.934 03:56:28 -- target/perf_adq.sh@76 -- # count=4 00:24:09.934 03:56:28 -- target/perf_adq.sh@77 -- # [[ 4 -ne 4 ]] 00:24:09.934 03:56:28 -- target/perf_adq.sh@81 -- # wait 2447401 00:24:18.065 Initializing NVMe Controllers 00:24:18.065 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:18.065 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:24:18.065 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:24:18.065 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:24:18.065 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:24:18.065 Initialization complete. Launching workers. 00:24:18.065 ======================================================== 00:24:18.065 Latency(us) 00:24:18.065 Device Information : IOPS MiB/s Average min max 00:24:18.065 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 11803.80 46.11 5422.25 1133.85 11302.54 00:24:18.065 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 11455.50 44.75 5587.14 1465.29 9006.02 00:24:18.065 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 8832.80 34.50 7247.22 3994.49 10545.96 00:24:18.065 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 11275.00 44.04 5677.40 1017.93 10604.84 00:24:18.065 ======================================================== 00:24:18.065 Total : 43367.10 169.40 5903.84 1017.93 11302.54 00:24:18.065 00:24:18.065 03:56:36 -- target/perf_adq.sh@82 -- # nvmftestfini 00:24:18.065 03:56:36 -- nvmf/common.sh@476 -- # nvmfcleanup 00:24:18.065 03:56:36 -- nvmf/common.sh@116 -- # sync 00:24:18.065 03:56:36 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:24:18.065 03:56:36 -- nvmf/common.sh@119 -- # set +e 00:24:18.065 03:56:36 -- nvmf/common.sh@120 -- # for i in {1..20} 00:24:18.065 03:56:36 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:24:18.065 rmmod nvme_tcp 00:24:18.065 rmmod nvme_fabrics 00:24:18.065 rmmod nvme_keyring 00:24:18.065 03:56:36 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:24:18.065 03:56:36 -- nvmf/common.sh@123 -- # set -e 00:24:18.065 03:56:36 -- nvmf/common.sh@124 -- # return 0 00:24:18.065 03:56:36 -- nvmf/common.sh@477 -- # '[' -n 2447370 ']' 00:24:18.065 03:56:36 -- nvmf/common.sh@478 -- # killprocess 2447370 00:24:18.065 03:56:36 -- common/autotest_common.sh@926 -- # '[' -z 2447370 ']' 00:24:18.065 03:56:36 -- common/autotest_common.sh@930 -- 
# kill -0 2447370 00:24:18.065 03:56:36 -- common/autotest_common.sh@931 -- # uname 00:24:18.065 03:56:36 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:24:18.065 03:56:36 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2447370 00:24:18.065 03:56:36 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:24:18.065 03:56:36 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:24:18.065 03:56:36 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2447370' 00:24:18.065 killing process with pid 2447370 00:24:18.065 03:56:36 -- common/autotest_common.sh@945 -- # kill 2447370 00:24:18.066 03:56:36 -- common/autotest_common.sh@950 -- # wait 2447370 00:24:18.066 03:56:36 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:24:18.066 03:56:36 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:24:18.066 03:56:36 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:24:18.066 03:56:36 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:18.066 03:56:36 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:24:18.066 03:56:36 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:18.066 03:56:36 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:18.066 03:56:36 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:19.970 03:56:38 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:24:19.970 03:56:38 -- target/perf_adq.sh@84 -- # adq_reload_driver 00:24:19.970 03:56:38 -- target/perf_adq.sh@52 -- # rmmod ice 00:24:20.903 03:56:39 -- target/perf_adq.sh@53 -- # modprobe ice 00:24:22.806 03:56:41 -- target/perf_adq.sh@54 -- # sleep 5 00:24:28.071 03:56:46 -- target/perf_adq.sh@87 -- # nvmftestinit 00:24:28.071 03:56:46 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:24:28.071 03:56:46 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:28.071 03:56:46 -- nvmf/common.sh@436 -- # prepare_net_devs 00:24:28.071 03:56:46 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:24:28.071 03:56:46 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:24:28.071 03:56:46 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:28.071 03:56:46 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:28.071 03:56:46 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:28.071 03:56:46 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:24:28.071 03:56:46 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:24:28.071 03:56:46 -- nvmf/common.sh@284 -- # xtrace_disable 00:24:28.071 03:56:46 -- common/autotest_common.sh@10 -- # set +x 00:24:28.071 03:56:46 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:24:28.071 03:56:46 -- nvmf/common.sh@290 -- # pci_devs=() 00:24:28.071 03:56:46 -- nvmf/common.sh@290 -- # local -a pci_devs 00:24:28.071 03:56:46 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:24:28.071 03:56:46 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:24:28.071 03:56:46 -- nvmf/common.sh@292 -- # pci_drivers=() 00:24:28.071 03:56:46 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:24:28.071 03:56:46 -- nvmf/common.sh@294 -- # net_devs=() 00:24:28.071 03:56:46 -- nvmf/common.sh@294 -- # local -ga net_devs 00:24:28.071 03:56:46 -- nvmf/common.sh@295 -- # e810=() 00:24:28.071 03:56:46 -- nvmf/common.sh@295 -- # local -ga e810 00:24:28.071 03:56:46 -- nvmf/common.sh@296 -- # x722=() 00:24:28.071 03:56:46 -- nvmf/common.sh@296 -- # local -ga x722 00:24:28.071 03:56:46 -- nvmf/common.sh@297 -- # mlx=() 00:24:28.071 03:56:46 
-- nvmf/common.sh@297 -- # local -ga mlx 00:24:28.071 03:56:46 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:28.071 03:56:46 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:28.071 03:56:46 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:28.071 03:56:46 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:28.071 03:56:46 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:28.071 03:56:46 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:28.071 03:56:46 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:28.071 03:56:46 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:28.071 03:56:46 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:28.071 03:56:46 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:28.071 03:56:46 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:28.071 03:56:46 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:24:28.071 03:56:46 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:24:28.071 03:56:46 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:24:28.071 03:56:46 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:24:28.071 03:56:46 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:24:28.071 03:56:46 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:24:28.071 03:56:46 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:28.071 03:56:46 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:28.071 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:28.071 03:56:46 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:24:28.071 03:56:46 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:24:28.071 03:56:46 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:28.071 03:56:46 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:28.071 03:56:46 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:24:28.071 03:56:46 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:28.071 03:56:46 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:28.071 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:28.071 03:56:46 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:24:28.071 03:56:46 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:24:28.071 03:56:46 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:28.071 03:56:46 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:28.071 03:56:46 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:24:28.071 03:56:46 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:24:28.071 03:56:46 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:24:28.071 03:56:46 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:24:28.071 03:56:46 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:28.071 03:56:46 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:28.071 03:56:46 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:28.071 03:56:46 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:28.071 03:56:46 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:28.071 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:28.071 03:56:46 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:28.071 03:56:46 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:28.071 03:56:46 -- nvmf/common.sh@382 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:28.071 03:56:46 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:28.071 03:56:46 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:28.071 03:56:46 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:24:28.071 Found net devices under 0000:0a:00.1: cvl_0_1 00:24:28.071 03:56:46 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:28.071 03:56:46 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:24:28.071 03:56:46 -- nvmf/common.sh@402 -- # is_hw=yes 00:24:28.071 03:56:46 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:24:28.071 03:56:46 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:24:28.071 03:56:46 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:24:28.071 03:56:46 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:28.071 03:56:46 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:28.071 03:56:46 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:28.071 03:56:46 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:24:28.071 03:56:46 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:28.071 03:56:46 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:28.071 03:56:46 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:24:28.071 03:56:46 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:28.071 03:56:46 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:28.071 03:56:46 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:24:28.071 03:56:46 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:24:28.071 03:56:46 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:24:28.071 03:56:46 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:28.071 03:56:46 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:28.071 03:56:46 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:28.071 03:56:46 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:24:28.071 03:56:46 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:28.071 03:56:46 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:28.071 03:56:46 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:28.071 03:56:46 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:24:28.071 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:28.071 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.160 ms 00:24:28.071 00:24:28.071 --- 10.0.0.2 ping statistics --- 00:24:28.071 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:28.071 rtt min/avg/max/mdev = 0.160/0.160/0.160/0.000 ms 00:24:28.071 03:56:46 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:28.071 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:28.071 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.109 ms 00:24:28.071 00:24:28.071 --- 10.0.0.1 ping statistics --- 00:24:28.072 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:28.072 rtt min/avg/max/mdev = 0.109/0.109/0.109/0.000 ms 00:24:28.072 03:56:46 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:28.072 03:56:46 -- nvmf/common.sh@410 -- # return 0 00:24:28.072 03:56:46 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:24:28.072 03:56:46 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:28.072 03:56:46 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:24:28.072 03:56:46 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:24:28.072 03:56:46 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:28.072 03:56:46 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:24:28.072 03:56:46 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:24:28.072 03:56:46 -- target/perf_adq.sh@88 -- # adq_configure_driver 00:24:28.072 03:56:46 -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:24:28.072 03:56:46 -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:24:28.072 03:56:46 -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:24:28.072 net.core.busy_poll = 1 00:24:28.072 03:56:46 -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:24:28.072 net.core.busy_read = 1 00:24:28.072 03:56:46 -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:24:28.072 03:56:46 -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:24:28.072 03:56:46 -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:24:28.072 03:56:46 -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:24:28.072 03:56:46 -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:24:28.072 03:56:46 -- target/perf_adq.sh@89 -- # nvmfappstart -m 0xF --wait-for-rpc 00:24:28.072 03:56:46 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:24:28.072 03:56:46 -- common/autotest_common.sh@712 -- # xtrace_disable 00:24:28.072 03:56:46 -- common/autotest_common.sh@10 -- # set +x 00:24:28.072 03:56:46 -- nvmf/common.sh@469 -- # nvmfpid=2450094 00:24:28.072 03:56:46 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:24:28.072 03:56:46 -- nvmf/common.sh@470 -- # waitforlisten 2450094 00:24:28.072 03:56:46 -- common/autotest_common.sh@819 -- # '[' -z 2450094 ']' 00:24:28.072 03:56:46 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:28.072 03:56:46 -- common/autotest_common.sh@824 -- # local max_retries=100 00:24:28.072 03:56:46 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:28.072 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
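The adq_configure_driver step traced above reduces to a handful of host-side commands. Condensed here for reference (a sketch of this specific run: the cvl_0_0 interface, the cvl_0_0_ns_spdk namespace wrapper and the 10.0.0.2:4420 listener are particular to this test bed):

    # enable hardware TC offload on the ice/E810 port and disable packet-inspect optimization
    ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on
    ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off
    # enable socket busy polling
    sysctl -w net.core.busy_poll=1
    sysctl -w net.core.busy_read=1
    # create two traffic classes (2 queues each) and steer NVMe/TCP port 4420 traffic to TC 1 in hardware
    ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
    ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress
    ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1
    # align transmit XPS masks with the receive queues (helper shipped with SPDK)
    ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0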
00:24:28.072 03:56:46 -- common/autotest_common.sh@828 -- # xtrace_disable 00:24:28.072 03:56:46 -- common/autotest_common.sh@10 -- # set +x 00:24:28.072 [2024-07-14 03:56:46.838473] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:24:28.072 [2024-07-14 03:56:46.838550] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:28.072 EAL: No free 2048 kB hugepages reported on node 1 00:24:28.072 [2024-07-14 03:56:46.904446] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:28.072 [2024-07-14 03:56:46.988433] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:24:28.072 [2024-07-14 03:56:46.988589] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:28.072 [2024-07-14 03:56:46.988606] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:28.072 [2024-07-14 03:56:46.988619] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:28.072 [2024-07-14 03:56:46.988669] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:28.072 [2024-07-14 03:56:46.988729] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:28.072 [2024-07-14 03:56:46.988794] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:28.072 [2024-07-14 03:56:46.988797] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:28.330 03:56:47 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:24:28.330 03:56:47 -- common/autotest_common.sh@852 -- # return 0 00:24:28.330 03:56:47 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:24:28.330 03:56:47 -- common/autotest_common.sh@718 -- # xtrace_disable 00:24:28.330 03:56:47 -- common/autotest_common.sh@10 -- # set +x 00:24:28.330 03:56:47 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:28.330 03:56:47 -- target/perf_adq.sh@90 -- # adq_configure_nvmf_target 1 00:24:28.330 03:56:47 -- target/perf_adq.sh@42 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:24:28.330 03:56:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:28.330 03:56:47 -- common/autotest_common.sh@10 -- # set +x 00:24:28.330 03:56:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:28.330 03:56:47 -- target/perf_adq.sh@43 -- # rpc_cmd framework_start_init 00:24:28.330 03:56:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:28.330 03:56:47 -- common/autotest_common.sh@10 -- # set +x 00:24:28.330 03:56:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:28.330 03:56:47 -- target/perf_adq.sh@44 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:24:28.330 03:56:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:28.330 03:56:47 -- common/autotest_common.sh@10 -- # set +x 00:24:28.330 [2024-07-14 03:56:47.184326] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:28.331 03:56:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:28.331 03:56:47 -- target/perf_adq.sh@45 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:24:28.331 03:56:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:28.331 03:56:47 -- 
common/autotest_common.sh@10 -- # set +x 00:24:28.331 Malloc1 00:24:28.331 03:56:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:28.331 03:56:47 -- target/perf_adq.sh@46 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:28.331 03:56:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:28.331 03:56:47 -- common/autotest_common.sh@10 -- # set +x 00:24:28.331 03:56:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:28.331 03:56:47 -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:24:28.331 03:56:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:28.331 03:56:47 -- common/autotest_common.sh@10 -- # set +x 00:24:28.331 03:56:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:28.331 03:56:47 -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:28.331 03:56:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:28.331 03:56:47 -- common/autotest_common.sh@10 -- # set +x 00:24:28.331 [2024-07-14 03:56:47.235894] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:28.331 03:56:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:28.331 03:56:47 -- target/perf_adq.sh@94 -- # perfpid=2450168 00:24:28.331 03:56:47 -- target/perf_adq.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:24:28.331 03:56:47 -- target/perf_adq.sh@95 -- # sleep 2 00:24:28.331 EAL: No free 2048 kB hugepages reported on node 1 00:24:30.859 03:56:49 -- target/perf_adq.sh@97 -- # rpc_cmd nvmf_get_stats 00:24:30.859 03:56:49 -- target/perf_adq.sh@97 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:24:30.859 03:56:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:30.859 03:56:49 -- target/perf_adq.sh@97 -- # wc -l 00:24:30.859 03:56:49 -- common/autotest_common.sh@10 -- # set +x 00:24:30.859 03:56:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:30.859 03:56:49 -- target/perf_adq.sh@97 -- # count=2 00:24:30.859 03:56:49 -- target/perf_adq.sh@98 -- # [[ 2 -lt 2 ]] 00:24:30.859 03:56:49 -- target/perf_adq.sh@103 -- # wait 2450168 00:24:38.969 Initializing NVMe Controllers 00:24:38.969 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:38.969 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:24:38.969 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:24:38.969 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:24:38.969 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:24:38.969 Initialization complete. Launching workers. 
00:24:38.969 ======================================================== 00:24:38.969 Latency(us) 00:24:38.969 Device Information : IOPS MiB/s Average min max 00:24:38.969 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 5938.02 23.20 10778.72 1827.21 54191.25 00:24:38.969 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 8409.29 32.85 7610.22 1271.38 51235.16 00:24:38.969 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 7176.01 28.03 8937.51 1623.16 52818.40 00:24:38.969 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 7519.80 29.37 8510.74 1548.80 52578.97 00:24:38.969 ======================================================== 00:24:38.969 Total : 29043.13 113.45 8819.15 1271.38 54191.25 00:24:38.969 00:24:38.969 03:56:57 -- target/perf_adq.sh@104 -- # nvmftestfini 00:24:38.969 03:56:57 -- nvmf/common.sh@476 -- # nvmfcleanup 00:24:38.969 03:56:57 -- nvmf/common.sh@116 -- # sync 00:24:38.969 03:56:57 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:24:38.969 03:56:57 -- nvmf/common.sh@119 -- # set +e 00:24:38.969 03:56:57 -- nvmf/common.sh@120 -- # for i in {1..20} 00:24:38.969 03:56:57 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:24:38.969 rmmod nvme_tcp 00:24:38.969 rmmod nvme_fabrics 00:24:38.969 rmmod nvme_keyring 00:24:38.969 03:56:57 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:24:38.969 03:56:57 -- nvmf/common.sh@123 -- # set -e 00:24:38.969 03:56:57 -- nvmf/common.sh@124 -- # return 0 00:24:38.969 03:56:57 -- nvmf/common.sh@477 -- # '[' -n 2450094 ']' 00:24:38.969 03:56:57 -- nvmf/common.sh@478 -- # killprocess 2450094 00:24:38.969 03:56:57 -- common/autotest_common.sh@926 -- # '[' -z 2450094 ']' 00:24:38.969 03:56:57 -- common/autotest_common.sh@930 -- # kill -0 2450094 00:24:38.969 03:56:57 -- common/autotest_common.sh@931 -- # uname 00:24:38.969 03:56:57 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:24:38.969 03:56:57 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2450094 00:24:38.969 03:56:57 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:24:38.969 03:56:57 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:24:38.969 03:56:57 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2450094' 00:24:38.969 killing process with pid 2450094 00:24:38.969 03:56:57 -- common/autotest_common.sh@945 -- # kill 2450094 00:24:38.969 03:56:57 -- common/autotest_common.sh@950 -- # wait 2450094 00:24:38.969 03:56:57 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:24:38.969 03:56:57 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:24:38.969 03:56:57 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:24:38.969 03:56:57 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:38.969 03:56:57 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:24:38.969 03:56:57 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:38.969 03:56:57 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:38.969 03:56:57 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:40.877 03:56:59 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:24:40.877 03:56:59 -- target/perf_adq.sh@106 -- # trap - SIGINT SIGTERM EXIT 00:24:40.877 00:24:40.877 real 0m43.640s 00:24:40.877 user 2m29.458s 00:24:40.877 sys 0m12.845s 00:24:40.877 03:56:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:40.877 03:56:59 -- common/autotest_common.sh@10 -- # set +x 00:24:40.877 
************************************ 00:24:40.877 END TEST nvmf_perf_adq 00:24:40.877 ************************************ 00:24:40.877 03:56:59 -- nvmf/nvmf.sh@81 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:24:40.877 03:56:59 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:24:40.877 03:56:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:24:40.877 03:56:59 -- common/autotest_common.sh@10 -- # set +x 00:24:41.138 ************************************ 00:24:41.138 START TEST nvmf_shutdown 00:24:41.138 ************************************ 00:24:41.138 03:56:59 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:24:41.138 * Looking for test storage... 00:24:41.138 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:41.138 03:56:59 -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:41.138 03:56:59 -- nvmf/common.sh@7 -- # uname -s 00:24:41.138 03:56:59 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:41.138 03:56:59 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:41.138 03:56:59 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:41.138 03:56:59 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:41.138 03:56:59 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:41.138 03:56:59 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:41.138 03:56:59 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:41.138 03:56:59 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:41.138 03:56:59 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:41.138 03:56:59 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:41.138 03:56:59 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:41.138 03:56:59 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:24:41.138 03:56:59 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:41.138 03:56:59 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:41.138 03:56:59 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:41.138 03:56:59 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:41.138 03:56:59 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:41.138 03:56:59 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:41.138 03:56:59 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:41.138 03:56:59 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:41.138 03:56:59 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:41.138 03:56:59 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:41.138 03:56:59 -- paths/export.sh@5 -- # export PATH 00:24:41.139 03:56:59 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:41.139 03:56:59 -- nvmf/common.sh@46 -- # : 0 00:24:41.139 03:56:59 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:24:41.139 03:56:59 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:24:41.139 03:56:59 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:24:41.139 03:56:59 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:41.139 03:56:59 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:41.139 03:56:59 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:24:41.139 03:56:59 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:24:41.139 03:56:59 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:24:41.139 03:56:59 -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:41.139 03:56:59 -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:41.139 03:56:59 -- target/shutdown.sh@146 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:24:41.139 03:56:59 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:24:41.139 03:56:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:24:41.139 03:56:59 -- common/autotest_common.sh@10 -- # set +x 00:24:41.139 ************************************ 00:24:41.139 START TEST nvmf_shutdown_tc1 00:24:41.139 ************************************ 00:24:41.139 03:56:59 -- common/autotest_common.sh@1104 -- # nvmf_shutdown_tc1 00:24:41.139 03:56:59 -- target/shutdown.sh@74 -- # starttarget 00:24:41.139 03:56:59 -- target/shutdown.sh@15 -- # nvmftestinit 00:24:41.139 03:56:59 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:24:41.139 03:56:59 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:41.139 03:56:59 -- nvmf/common.sh@436 -- # prepare_net_devs 00:24:41.139 03:56:59 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:24:41.139 03:56:59 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:24:41.139 
03:56:59 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:41.139 03:56:59 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:41.139 03:56:59 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:41.139 03:56:59 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:24:41.139 03:56:59 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:24:41.139 03:56:59 -- nvmf/common.sh@284 -- # xtrace_disable 00:24:41.139 03:56:59 -- common/autotest_common.sh@10 -- # set +x 00:24:43.090 03:57:01 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:24:43.090 03:57:01 -- nvmf/common.sh@290 -- # pci_devs=() 00:24:43.090 03:57:01 -- nvmf/common.sh@290 -- # local -a pci_devs 00:24:43.090 03:57:01 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:24:43.090 03:57:01 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:24:43.090 03:57:01 -- nvmf/common.sh@292 -- # pci_drivers=() 00:24:43.090 03:57:01 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:24:43.090 03:57:01 -- nvmf/common.sh@294 -- # net_devs=() 00:24:43.090 03:57:01 -- nvmf/common.sh@294 -- # local -ga net_devs 00:24:43.090 03:57:01 -- nvmf/common.sh@295 -- # e810=() 00:24:43.090 03:57:01 -- nvmf/common.sh@295 -- # local -ga e810 00:24:43.090 03:57:01 -- nvmf/common.sh@296 -- # x722=() 00:24:43.090 03:57:01 -- nvmf/common.sh@296 -- # local -ga x722 00:24:43.090 03:57:01 -- nvmf/common.sh@297 -- # mlx=() 00:24:43.090 03:57:01 -- nvmf/common.sh@297 -- # local -ga mlx 00:24:43.090 03:57:01 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:43.090 03:57:01 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:43.090 03:57:01 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:43.090 03:57:01 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:43.090 03:57:01 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:43.090 03:57:01 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:43.090 03:57:01 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:43.090 03:57:01 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:43.090 03:57:01 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:43.090 03:57:01 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:43.090 03:57:01 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:43.090 03:57:01 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:24:43.090 03:57:01 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:24:43.090 03:57:01 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:24:43.090 03:57:01 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:24:43.090 03:57:01 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:24:43.090 03:57:01 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:24:43.090 03:57:01 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:43.090 03:57:01 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:43.090 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:43.090 03:57:01 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:24:43.090 03:57:01 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:24:43.090 03:57:01 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:43.090 03:57:01 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:43.090 03:57:01 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:24:43.090 03:57:01 -- nvmf/common.sh@339 
-- # for pci in "${pci_devs[@]}" 00:24:43.090 03:57:01 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:43.090 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:43.090 03:57:01 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:24:43.090 03:57:01 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:24:43.090 03:57:01 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:43.090 03:57:01 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:43.090 03:57:01 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:24:43.090 03:57:01 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:24:43.090 03:57:01 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:24:43.090 03:57:01 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:24:43.090 03:57:01 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:43.090 03:57:01 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:43.090 03:57:01 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:43.090 03:57:01 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:43.090 03:57:01 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:43.090 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:43.090 03:57:01 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:43.090 03:57:01 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:43.090 03:57:01 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:43.090 03:57:01 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:43.090 03:57:01 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:43.090 03:57:01 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:24:43.090 Found net devices under 0000:0a:00.1: cvl_0_1 00:24:43.090 03:57:01 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:43.090 03:57:01 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:24:43.090 03:57:01 -- nvmf/common.sh@402 -- # is_hw=yes 00:24:43.090 03:57:01 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:24:43.090 03:57:01 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:24:43.090 03:57:01 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:24:43.090 03:57:01 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:43.090 03:57:01 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:43.090 03:57:01 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:43.090 03:57:01 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:24:43.090 03:57:01 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:43.090 03:57:01 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:43.090 03:57:01 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:24:43.090 03:57:01 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:43.090 03:57:01 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:43.090 03:57:01 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:24:43.090 03:57:01 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:24:43.090 03:57:01 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:24:43.090 03:57:01 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:43.090 03:57:01 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:43.090 03:57:01 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:43.090 03:57:01 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:24:43.090 03:57:01 -- nvmf/common.sh@259 
-- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:43.090 03:57:01 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:43.090 03:57:01 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:43.090 03:57:01 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:24:43.090 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:43.090 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.261 ms 00:24:43.090 00:24:43.090 --- 10.0.0.2 ping statistics --- 00:24:43.090 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:43.090 rtt min/avg/max/mdev = 0.261/0.261/0.261/0.000 ms 00:24:43.090 03:57:01 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:43.090 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:43.090 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.139 ms 00:24:43.090 00:24:43.090 --- 10.0.0.1 ping statistics --- 00:24:43.090 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:43.090 rtt min/avg/max/mdev = 0.139/0.139/0.139/0.000 ms 00:24:43.090 03:57:01 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:43.090 03:57:01 -- nvmf/common.sh@410 -- # return 0 00:24:43.090 03:57:01 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:24:43.090 03:57:01 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:43.090 03:57:01 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:24:43.090 03:57:01 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:24:43.090 03:57:01 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:43.090 03:57:01 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:24:43.090 03:57:01 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:24:43.090 03:57:01 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:24:43.090 03:57:01 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:24:43.090 03:57:01 -- common/autotest_common.sh@712 -- # xtrace_disable 00:24:43.090 03:57:01 -- common/autotest_common.sh@10 -- # set +x 00:24:43.090 03:57:01 -- nvmf/common.sh@469 -- # nvmfpid=2453447 00:24:43.090 03:57:01 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:24:43.090 03:57:01 -- nvmf/common.sh@470 -- # waitforlisten 2453447 00:24:43.090 03:57:01 -- common/autotest_common.sh@819 -- # '[' -z 2453447 ']' 00:24:43.090 03:57:01 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:43.090 03:57:01 -- common/autotest_common.sh@824 -- # local max_retries=100 00:24:43.090 03:57:01 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:43.090 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:43.090 03:57:01 -- common/autotest_common.sh@828 -- # xtrace_disable 00:24:43.090 03:57:01 -- common/autotest_common.sh@10 -- # set +x 00:24:43.090 [2024-07-14 03:57:01.974091] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
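As in the perf_adq test above, nvmftestinit splits the two ice ports between the root namespace (initiator side, cvl_0_1) and a private namespace (target side, cvl_0_0) before starting nvmf_tgt. The plumbing, condensed from the trace for this run:

    # target side: cvl_0_0 moves into its own namespace and carries the listener address
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # initiator side: cvl_0_1 stays in the root namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip link set cvl_0_1 up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    # connectivity check in both directions before the target is started
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1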
00:24:43.090 [2024-07-14 03:57:01.974199] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:43.090 EAL: No free 2048 kB hugepages reported on node 1 00:24:43.349 [2024-07-14 03:57:02.043289] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:43.349 [2024-07-14 03:57:02.132826] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:24:43.349 [2024-07-14 03:57:02.133016] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:43.349 [2024-07-14 03:57:02.133034] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:43.349 [2024-07-14 03:57:02.133048] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:43.349 [2024-07-14 03:57:02.133141] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:43.349 [2024-07-14 03:57:02.133348] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:43.349 [2024-07-14 03:57:02.133402] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:24:43.349 [2024-07-14 03:57:02.133404] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:44.280 03:57:02 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:24:44.280 03:57:02 -- common/autotest_common.sh@852 -- # return 0 00:24:44.280 03:57:02 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:24:44.280 03:57:02 -- common/autotest_common.sh@718 -- # xtrace_disable 00:24:44.280 03:57:02 -- common/autotest_common.sh@10 -- # set +x 00:24:44.280 03:57:02 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:44.280 03:57:02 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:44.280 03:57:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:44.280 03:57:02 -- common/autotest_common.sh@10 -- # set +x 00:24:44.280 [2024-07-14 03:57:02.949531] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:44.280 03:57:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:44.280 03:57:02 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:24:44.280 03:57:02 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:24:44.280 03:57:02 -- common/autotest_common.sh@712 -- # xtrace_disable 00:24:44.280 03:57:02 -- common/autotest_common.sh@10 -- # set +x 00:24:44.280 03:57:02 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:24:44.280 03:57:02 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:44.280 03:57:02 -- target/shutdown.sh@28 -- # cat 00:24:44.280 03:57:02 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:44.280 03:57:02 -- target/shutdown.sh@28 -- # cat 00:24:44.280 03:57:02 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:44.280 03:57:02 -- target/shutdown.sh@28 -- # cat 00:24:44.280 03:57:02 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:44.280 03:57:02 -- target/shutdown.sh@28 -- # cat 00:24:44.280 03:57:02 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:44.280 03:57:02 -- target/shutdown.sh@28 -- # cat 00:24:44.280 03:57:02 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:44.280 03:57:02 -- 
target/shutdown.sh@28 -- # cat 00:24:44.280 03:57:02 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:44.280 03:57:02 -- target/shutdown.sh@28 -- # cat 00:24:44.280 03:57:02 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:44.280 03:57:02 -- target/shutdown.sh@28 -- # cat 00:24:44.280 03:57:02 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:44.280 03:57:02 -- target/shutdown.sh@28 -- # cat 00:24:44.280 03:57:02 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:44.280 03:57:02 -- target/shutdown.sh@28 -- # cat 00:24:44.280 03:57:02 -- target/shutdown.sh@35 -- # rpc_cmd 00:24:44.280 03:57:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:44.280 03:57:02 -- common/autotest_common.sh@10 -- # set +x 00:24:44.280 Malloc1 00:24:44.280 [2024-07-14 03:57:03.034698] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:44.280 Malloc2 00:24:44.280 Malloc3 00:24:44.280 Malloc4 00:24:44.280 Malloc5 00:24:44.537 Malloc6 00:24:44.537 Malloc7 00:24:44.537 Malloc8 00:24:44.537 Malloc9 00:24:44.537 Malloc10 00:24:44.537 03:57:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:44.537 03:57:03 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:24:44.537 03:57:03 -- common/autotest_common.sh@718 -- # xtrace_disable 00:24:44.537 03:57:03 -- common/autotest_common.sh@10 -- # set +x 00:24:44.795 03:57:03 -- target/shutdown.sh@78 -- # perfpid=2453754 00:24:44.795 03:57:03 -- target/shutdown.sh@79 -- # waitforlisten 2453754 /var/tmp/bdevperf.sock 00:24:44.795 03:57:03 -- common/autotest_common.sh@819 -- # '[' -z 2453754 ']' 00:24:44.795 03:57:03 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:24:44.795 03:57:03 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:44.795 03:57:03 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:24:44.795 03:57:03 -- common/autotest_common.sh@824 -- # local max_retries=100 00:24:44.795 03:57:03 -- nvmf/common.sh@520 -- # config=() 00:24:44.795 03:57:03 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:44.795 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:24:44.795 03:57:03 -- nvmf/common.sh@520 -- # local subsystem config 00:24:44.795 03:57:03 -- common/autotest_common.sh@828 -- # xtrace_disable 00:24:44.795 03:57:03 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:44.795 03:57:03 -- common/autotest_common.sh@10 -- # set +x 00:24:44.795 03:57:03 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:44.795 { 00:24:44.795 "params": { 00:24:44.795 "name": "Nvme$subsystem", 00:24:44.795 "trtype": "$TEST_TRANSPORT", 00:24:44.795 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:44.795 "adrfam": "ipv4", 00:24:44.795 "trsvcid": "$NVMF_PORT", 00:24:44.795 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:44.795 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:44.795 "hdgst": ${hdgst:-false}, 00:24:44.795 "ddgst": ${ddgst:-false} 00:24:44.795 }, 00:24:44.795 "method": "bdev_nvme_attach_controller" 00:24:44.795 } 00:24:44.795 EOF 00:24:44.795 )") 00:24:44.795 03:57:03 -- nvmf/common.sh@542 -- # cat 00:24:44.795 03:57:03 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:44.795 03:57:03 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:44.795 { 00:24:44.795 "params": { 00:24:44.795 "name": "Nvme$subsystem", 00:24:44.795 "trtype": "$TEST_TRANSPORT", 00:24:44.795 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:44.795 "adrfam": "ipv4", 00:24:44.795 "trsvcid": "$NVMF_PORT", 00:24:44.795 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:44.795 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:44.795 "hdgst": ${hdgst:-false}, 00:24:44.795 "ddgst": ${ddgst:-false} 00:24:44.795 }, 00:24:44.796 "method": "bdev_nvme_attach_controller" 00:24:44.796 } 00:24:44.796 EOF 00:24:44.796 )") 00:24:44.796 03:57:03 -- nvmf/common.sh@542 -- # cat 00:24:44.796 03:57:03 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:44.796 03:57:03 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:44.796 { 00:24:44.796 "params": { 00:24:44.796 "name": "Nvme$subsystem", 00:24:44.796 "trtype": "$TEST_TRANSPORT", 00:24:44.796 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:44.796 "adrfam": "ipv4", 00:24:44.796 "trsvcid": "$NVMF_PORT", 00:24:44.796 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:44.796 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:44.796 "hdgst": ${hdgst:-false}, 00:24:44.796 "ddgst": ${ddgst:-false} 00:24:44.796 }, 00:24:44.796 "method": "bdev_nvme_attach_controller" 00:24:44.796 } 00:24:44.796 EOF 00:24:44.796 )") 00:24:44.796 03:57:03 -- nvmf/common.sh@542 -- # cat 00:24:44.796 03:57:03 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:44.796 03:57:03 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:44.796 { 00:24:44.796 "params": { 00:24:44.796 "name": "Nvme$subsystem", 00:24:44.796 "trtype": "$TEST_TRANSPORT", 00:24:44.796 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:44.796 "adrfam": "ipv4", 00:24:44.796 "trsvcid": "$NVMF_PORT", 00:24:44.796 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:44.796 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:44.796 "hdgst": ${hdgst:-false}, 00:24:44.796 "ddgst": ${ddgst:-false} 00:24:44.796 }, 00:24:44.796 "method": "bdev_nvme_attach_controller" 00:24:44.796 } 00:24:44.796 EOF 00:24:44.796 )") 00:24:44.796 03:57:03 -- nvmf/common.sh@542 -- # cat 00:24:44.796 03:57:03 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:44.796 03:57:03 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:44.796 { 00:24:44.796 "params": { 00:24:44.796 "name": "Nvme$subsystem", 00:24:44.796 "trtype": "$TEST_TRANSPORT", 00:24:44.796 "traddr": 
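gen_nvmf_target_json, whose heredoc fragments are echoed above, emits one bdev_nvme_attach_controller entry per subsystem number it is given; the rendered config fed to bdev_svc over /dev/fd/63 appears a little further down in this trace. Reflowed, the entry for subsystem 1 in this run looks like:

    {
      "params": {
        "name": "Nvme1",
        "trtype": "tcp",
        "traddr": "10.0.0.2",
        "adrfam": "ipv4",
        "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode1",
        "hostnqn": "nqn.2016-06.io.spdk:host1",
        "hdgst": false,
        "ddgst": false
      },
      "method": "bdev_nvme_attach_controller"
    }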
"$NVMF_FIRST_TARGET_IP", 00:24:44.796 "adrfam": "ipv4", 00:24:44.796 "trsvcid": "$NVMF_PORT", 00:24:44.796 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:44.796 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:44.796 "hdgst": ${hdgst:-false}, 00:24:44.796 "ddgst": ${ddgst:-false} 00:24:44.796 }, 00:24:44.796 "method": "bdev_nvme_attach_controller" 00:24:44.796 } 00:24:44.796 EOF 00:24:44.796 )") 00:24:44.796 03:57:03 -- nvmf/common.sh@542 -- # cat 00:24:44.796 03:57:03 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:44.796 03:57:03 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:44.796 { 00:24:44.796 "params": { 00:24:44.796 "name": "Nvme$subsystem", 00:24:44.796 "trtype": "$TEST_TRANSPORT", 00:24:44.796 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:44.796 "adrfam": "ipv4", 00:24:44.796 "trsvcid": "$NVMF_PORT", 00:24:44.796 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:44.796 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:44.796 "hdgst": ${hdgst:-false}, 00:24:44.796 "ddgst": ${ddgst:-false} 00:24:44.796 }, 00:24:44.796 "method": "bdev_nvme_attach_controller" 00:24:44.796 } 00:24:44.796 EOF 00:24:44.796 )") 00:24:44.796 03:57:03 -- nvmf/common.sh@542 -- # cat 00:24:44.796 03:57:03 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:44.796 03:57:03 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:44.796 { 00:24:44.796 "params": { 00:24:44.796 "name": "Nvme$subsystem", 00:24:44.796 "trtype": "$TEST_TRANSPORT", 00:24:44.796 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:44.796 "adrfam": "ipv4", 00:24:44.796 "trsvcid": "$NVMF_PORT", 00:24:44.796 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:44.796 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:44.796 "hdgst": ${hdgst:-false}, 00:24:44.796 "ddgst": ${ddgst:-false} 00:24:44.796 }, 00:24:44.796 "method": "bdev_nvme_attach_controller" 00:24:44.796 } 00:24:44.796 EOF 00:24:44.796 )") 00:24:44.796 03:57:03 -- nvmf/common.sh@542 -- # cat 00:24:44.796 03:57:03 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:44.796 03:57:03 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:44.796 { 00:24:44.796 "params": { 00:24:44.796 "name": "Nvme$subsystem", 00:24:44.796 "trtype": "$TEST_TRANSPORT", 00:24:44.796 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:44.796 "adrfam": "ipv4", 00:24:44.796 "trsvcid": "$NVMF_PORT", 00:24:44.796 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:44.796 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:44.796 "hdgst": ${hdgst:-false}, 00:24:44.796 "ddgst": ${ddgst:-false} 00:24:44.796 }, 00:24:44.796 "method": "bdev_nvme_attach_controller" 00:24:44.796 } 00:24:44.796 EOF 00:24:44.796 )") 00:24:44.796 03:57:03 -- nvmf/common.sh@542 -- # cat 00:24:44.796 03:57:03 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:44.796 03:57:03 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:44.796 { 00:24:44.796 "params": { 00:24:44.796 "name": "Nvme$subsystem", 00:24:44.796 "trtype": "$TEST_TRANSPORT", 00:24:44.796 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:44.796 "adrfam": "ipv4", 00:24:44.796 "trsvcid": "$NVMF_PORT", 00:24:44.796 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:44.796 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:44.796 "hdgst": ${hdgst:-false}, 00:24:44.796 "ddgst": ${ddgst:-false} 00:24:44.796 }, 00:24:44.796 "method": "bdev_nvme_attach_controller" 00:24:44.796 } 00:24:44.796 EOF 00:24:44.796 )") 00:24:44.796 03:57:03 -- nvmf/common.sh@542 -- # cat 00:24:44.796 03:57:03 -- nvmf/common.sh@522 -- # for 
subsystem in "${@:-1}" 00:24:44.796 03:57:03 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:44.796 { 00:24:44.796 "params": { 00:24:44.796 "name": "Nvme$subsystem", 00:24:44.796 "trtype": "$TEST_TRANSPORT", 00:24:44.796 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:44.796 "adrfam": "ipv4", 00:24:44.796 "trsvcid": "$NVMF_PORT", 00:24:44.796 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:44.796 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:44.796 "hdgst": ${hdgst:-false}, 00:24:44.796 "ddgst": ${ddgst:-false} 00:24:44.796 }, 00:24:44.796 "method": "bdev_nvme_attach_controller" 00:24:44.796 } 00:24:44.796 EOF 00:24:44.796 )") 00:24:44.796 03:57:03 -- nvmf/common.sh@542 -- # cat 00:24:44.796 03:57:03 -- nvmf/common.sh@544 -- # jq . 00:24:44.796 03:57:03 -- nvmf/common.sh@545 -- # IFS=, 00:24:44.796 03:57:03 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:24:44.796 "params": { 00:24:44.796 "name": "Nvme1", 00:24:44.796 "trtype": "tcp", 00:24:44.796 "traddr": "10.0.0.2", 00:24:44.796 "adrfam": "ipv4", 00:24:44.796 "trsvcid": "4420", 00:24:44.796 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:44.796 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:44.796 "hdgst": false, 00:24:44.796 "ddgst": false 00:24:44.796 }, 00:24:44.796 "method": "bdev_nvme_attach_controller" 00:24:44.796 },{ 00:24:44.796 "params": { 00:24:44.796 "name": "Nvme2", 00:24:44.796 "trtype": "tcp", 00:24:44.796 "traddr": "10.0.0.2", 00:24:44.796 "adrfam": "ipv4", 00:24:44.796 "trsvcid": "4420", 00:24:44.796 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:24:44.796 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:24:44.796 "hdgst": false, 00:24:44.796 "ddgst": false 00:24:44.796 }, 00:24:44.796 "method": "bdev_nvme_attach_controller" 00:24:44.796 },{ 00:24:44.796 "params": { 00:24:44.796 "name": "Nvme3", 00:24:44.796 "trtype": "tcp", 00:24:44.796 "traddr": "10.0.0.2", 00:24:44.796 "adrfam": "ipv4", 00:24:44.796 "trsvcid": "4420", 00:24:44.796 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:24:44.796 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:24:44.796 "hdgst": false, 00:24:44.796 "ddgst": false 00:24:44.796 }, 00:24:44.797 "method": "bdev_nvme_attach_controller" 00:24:44.797 },{ 00:24:44.797 "params": { 00:24:44.797 "name": "Nvme4", 00:24:44.797 "trtype": "tcp", 00:24:44.797 "traddr": "10.0.0.2", 00:24:44.797 "adrfam": "ipv4", 00:24:44.797 "trsvcid": "4420", 00:24:44.797 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:24:44.797 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:24:44.797 "hdgst": false, 00:24:44.797 "ddgst": false 00:24:44.797 }, 00:24:44.797 "method": "bdev_nvme_attach_controller" 00:24:44.797 },{ 00:24:44.797 "params": { 00:24:44.797 "name": "Nvme5", 00:24:44.797 "trtype": "tcp", 00:24:44.797 "traddr": "10.0.0.2", 00:24:44.797 "adrfam": "ipv4", 00:24:44.797 "trsvcid": "4420", 00:24:44.797 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:24:44.797 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:24:44.797 "hdgst": false, 00:24:44.797 "ddgst": false 00:24:44.797 }, 00:24:44.797 "method": "bdev_nvme_attach_controller" 00:24:44.797 },{ 00:24:44.797 "params": { 00:24:44.797 "name": "Nvme6", 00:24:44.797 "trtype": "tcp", 00:24:44.797 "traddr": "10.0.0.2", 00:24:44.797 "adrfam": "ipv4", 00:24:44.797 "trsvcid": "4420", 00:24:44.797 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:24:44.797 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:24:44.797 "hdgst": false, 00:24:44.797 "ddgst": false 00:24:44.797 }, 00:24:44.797 "method": "bdev_nvme_attach_controller" 00:24:44.797 },{ 00:24:44.797 "params": { 00:24:44.797 "name": "Nvme7", 00:24:44.797 "trtype": 
"tcp", 00:24:44.797 "traddr": "10.0.0.2", 00:24:44.797 "adrfam": "ipv4", 00:24:44.797 "trsvcid": "4420", 00:24:44.797 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:24:44.797 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:24:44.797 "hdgst": false, 00:24:44.797 "ddgst": false 00:24:44.797 }, 00:24:44.797 "method": "bdev_nvme_attach_controller" 00:24:44.797 },{ 00:24:44.797 "params": { 00:24:44.797 "name": "Nvme8", 00:24:44.797 "trtype": "tcp", 00:24:44.797 "traddr": "10.0.0.2", 00:24:44.797 "adrfam": "ipv4", 00:24:44.797 "trsvcid": "4420", 00:24:44.797 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:24:44.797 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:24:44.797 "hdgst": false, 00:24:44.797 "ddgst": false 00:24:44.797 }, 00:24:44.797 "method": "bdev_nvme_attach_controller" 00:24:44.797 },{ 00:24:44.797 "params": { 00:24:44.797 "name": "Nvme9", 00:24:44.797 "trtype": "tcp", 00:24:44.797 "traddr": "10.0.0.2", 00:24:44.797 "adrfam": "ipv4", 00:24:44.797 "trsvcid": "4420", 00:24:44.797 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:24:44.797 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:24:44.797 "hdgst": false, 00:24:44.797 "ddgst": false 00:24:44.797 }, 00:24:44.797 "method": "bdev_nvme_attach_controller" 00:24:44.797 },{ 00:24:44.797 "params": { 00:24:44.797 "name": "Nvme10", 00:24:44.797 "trtype": "tcp", 00:24:44.797 "traddr": "10.0.0.2", 00:24:44.797 "adrfam": "ipv4", 00:24:44.797 "trsvcid": "4420", 00:24:44.797 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:24:44.797 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:24:44.797 "hdgst": false, 00:24:44.797 "ddgst": false 00:24:44.797 }, 00:24:44.797 "method": "bdev_nvme_attach_controller" 00:24:44.797 }' 00:24:44.797 [2024-07-14 03:57:03.543253] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:24:44.797 [2024-07-14 03:57:03.543324] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:24:44.797 EAL: No free 2048 kB hugepages reported on node 1 00:24:44.797 [2024-07-14 03:57:03.606624] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:44.797 [2024-07-14 03:57:03.690087] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:47.321 03:57:05 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:24:47.321 03:57:05 -- common/autotest_common.sh@852 -- # return 0 00:24:47.321 03:57:05 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:24:47.321 03:57:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:47.321 03:57:05 -- common/autotest_common.sh@10 -- # set +x 00:24:47.321 03:57:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:47.321 03:57:05 -- target/shutdown.sh@83 -- # kill -9 2453754 00:24:47.321 03:57:05 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:24:47.321 03:57:05 -- target/shutdown.sh@87 -- # sleep 1 00:24:48.254 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 2453754 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:24:48.254 03:57:06 -- target/shutdown.sh@88 -- # kill -0 2453447 00:24:48.254 03:57:07 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:24:48.254 03:57:07 -- target/shutdown.sh@91 -- # 
gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:24:48.254 03:57:07 -- nvmf/common.sh@520 -- # config=() 00:24:48.254 03:57:07 -- nvmf/common.sh@520 -- # local subsystem config 00:24:48.254 03:57:07 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:48.254 03:57:07 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:48.254 { 00:24:48.254 "params": { 00:24:48.254 "name": "Nvme$subsystem", 00:24:48.254 "trtype": "$TEST_TRANSPORT", 00:24:48.254 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:48.254 "adrfam": "ipv4", 00:24:48.254 "trsvcid": "$NVMF_PORT", 00:24:48.254 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:48.254 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:48.254 "hdgst": ${hdgst:-false}, 00:24:48.254 "ddgst": ${ddgst:-false} 00:24:48.254 }, 00:24:48.254 "method": "bdev_nvme_attach_controller" 00:24:48.254 } 00:24:48.254 EOF 00:24:48.254 )") 00:24:48.254 03:57:07 -- nvmf/common.sh@542 -- # cat 00:24:48.254 03:57:07 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:48.254 03:57:07 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:48.254 { 00:24:48.254 "params": { 00:24:48.255 "name": "Nvme$subsystem", 00:24:48.255 "trtype": "$TEST_TRANSPORT", 00:24:48.255 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:48.255 "adrfam": "ipv4", 00:24:48.255 "trsvcid": "$NVMF_PORT", 00:24:48.255 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:48.255 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:48.255 "hdgst": ${hdgst:-false}, 00:24:48.255 "ddgst": ${ddgst:-false} 00:24:48.255 }, 00:24:48.255 "method": "bdev_nvme_attach_controller" 00:24:48.255 } 00:24:48.255 EOF 00:24:48.255 )") 00:24:48.255 03:57:07 -- nvmf/common.sh@542 -- # cat 00:24:48.255 03:57:07 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:48.255 03:57:07 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:48.255 { 00:24:48.255 "params": { 00:24:48.255 "name": "Nvme$subsystem", 00:24:48.255 "trtype": "$TEST_TRANSPORT", 00:24:48.255 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:48.255 "adrfam": "ipv4", 00:24:48.255 "trsvcid": "$NVMF_PORT", 00:24:48.255 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:48.255 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:48.255 "hdgst": ${hdgst:-false}, 00:24:48.255 "ddgst": ${ddgst:-false} 00:24:48.255 }, 00:24:48.255 "method": "bdev_nvme_attach_controller" 00:24:48.255 } 00:24:48.255 EOF 00:24:48.255 )") 00:24:48.255 03:57:07 -- nvmf/common.sh@542 -- # cat 00:24:48.255 03:57:07 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:48.255 03:57:07 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:48.255 { 00:24:48.255 "params": { 00:24:48.255 "name": "Nvme$subsystem", 00:24:48.255 "trtype": "$TEST_TRANSPORT", 00:24:48.255 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:48.255 "adrfam": "ipv4", 00:24:48.255 "trsvcid": "$NVMF_PORT", 00:24:48.255 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:48.255 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:48.255 "hdgst": ${hdgst:-false}, 00:24:48.255 "ddgst": ${ddgst:-false} 00:24:48.255 }, 00:24:48.255 "method": "bdev_nvme_attach_controller" 00:24:48.255 } 00:24:48.255 EOF 00:24:48.255 )") 00:24:48.255 03:57:07 -- nvmf/common.sh@542 -- # cat 00:24:48.255 03:57:07 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:48.255 03:57:07 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:48.255 { 00:24:48.255 "params": { 00:24:48.255 "name": "Nvme$subsystem", 00:24:48.255 "trtype": "$TEST_TRANSPORT", 00:24:48.255 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:48.255 "adrfam": "ipv4", 
00:24:48.255 "trsvcid": "$NVMF_PORT", 00:24:48.255 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:48.255 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:48.255 "hdgst": ${hdgst:-false}, 00:24:48.255 "ddgst": ${ddgst:-false} 00:24:48.255 }, 00:24:48.255 "method": "bdev_nvme_attach_controller" 00:24:48.255 } 00:24:48.255 EOF 00:24:48.255 )") 00:24:48.255 03:57:07 -- nvmf/common.sh@542 -- # cat 00:24:48.255 03:57:07 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:48.255 03:57:07 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:48.255 { 00:24:48.255 "params": { 00:24:48.255 "name": "Nvme$subsystem", 00:24:48.255 "trtype": "$TEST_TRANSPORT", 00:24:48.255 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:48.255 "adrfam": "ipv4", 00:24:48.255 "trsvcid": "$NVMF_PORT", 00:24:48.255 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:48.255 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:48.255 "hdgst": ${hdgst:-false}, 00:24:48.255 "ddgst": ${ddgst:-false} 00:24:48.255 }, 00:24:48.255 "method": "bdev_nvme_attach_controller" 00:24:48.255 } 00:24:48.255 EOF 00:24:48.255 )") 00:24:48.255 03:57:07 -- nvmf/common.sh@542 -- # cat 00:24:48.255 03:57:07 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:48.255 03:57:07 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:48.255 { 00:24:48.255 "params": { 00:24:48.255 "name": "Nvme$subsystem", 00:24:48.255 "trtype": "$TEST_TRANSPORT", 00:24:48.255 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:48.255 "adrfam": "ipv4", 00:24:48.255 "trsvcid": "$NVMF_PORT", 00:24:48.255 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:48.255 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:48.255 "hdgst": ${hdgst:-false}, 00:24:48.255 "ddgst": ${ddgst:-false} 00:24:48.255 }, 00:24:48.255 "method": "bdev_nvme_attach_controller" 00:24:48.255 } 00:24:48.255 EOF 00:24:48.255 )") 00:24:48.255 03:57:07 -- nvmf/common.sh@542 -- # cat 00:24:48.255 03:57:07 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:48.255 03:57:07 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:48.255 { 00:24:48.255 "params": { 00:24:48.255 "name": "Nvme$subsystem", 00:24:48.255 "trtype": "$TEST_TRANSPORT", 00:24:48.255 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:48.255 "adrfam": "ipv4", 00:24:48.255 "trsvcid": "$NVMF_PORT", 00:24:48.255 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:48.255 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:48.255 "hdgst": ${hdgst:-false}, 00:24:48.255 "ddgst": ${ddgst:-false} 00:24:48.255 }, 00:24:48.255 "method": "bdev_nvme_attach_controller" 00:24:48.255 } 00:24:48.255 EOF 00:24:48.255 )") 00:24:48.255 03:57:07 -- nvmf/common.sh@542 -- # cat 00:24:48.255 03:57:07 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:48.255 03:57:07 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:48.255 { 00:24:48.255 "params": { 00:24:48.255 "name": "Nvme$subsystem", 00:24:48.255 "trtype": "$TEST_TRANSPORT", 00:24:48.255 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:48.255 "adrfam": "ipv4", 00:24:48.255 "trsvcid": "$NVMF_PORT", 00:24:48.255 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:48.255 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:48.255 "hdgst": ${hdgst:-false}, 00:24:48.255 "ddgst": ${ddgst:-false} 00:24:48.255 }, 00:24:48.255 "method": "bdev_nvme_attach_controller" 00:24:48.255 } 00:24:48.255 EOF 00:24:48.255 )") 00:24:48.255 03:57:07 -- nvmf/common.sh@542 -- # cat 00:24:48.255 03:57:07 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:48.255 03:57:07 -- 
nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:48.255 { 00:24:48.255 "params": { 00:24:48.255 "name": "Nvme$subsystem", 00:24:48.255 "trtype": "$TEST_TRANSPORT", 00:24:48.255 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:48.255 "adrfam": "ipv4", 00:24:48.255 "trsvcid": "$NVMF_PORT", 00:24:48.255 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:48.255 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:48.255 "hdgst": ${hdgst:-false}, 00:24:48.255 "ddgst": ${ddgst:-false} 00:24:48.255 }, 00:24:48.255 "method": "bdev_nvme_attach_controller" 00:24:48.255 } 00:24:48.255 EOF 00:24:48.255 )") 00:24:48.255 03:57:07 -- nvmf/common.sh@542 -- # cat 00:24:48.255 03:57:07 -- nvmf/common.sh@544 -- # jq . 00:24:48.255 03:57:07 -- nvmf/common.sh@545 -- # IFS=, 00:24:48.255 03:57:07 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:24:48.255 "params": { 00:24:48.255 "name": "Nvme1", 00:24:48.255 "trtype": "tcp", 00:24:48.255 "traddr": "10.0.0.2", 00:24:48.255 "adrfam": "ipv4", 00:24:48.255 "trsvcid": "4420", 00:24:48.255 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:48.255 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:48.255 "hdgst": false, 00:24:48.255 "ddgst": false 00:24:48.255 }, 00:24:48.255 "method": "bdev_nvme_attach_controller" 00:24:48.255 },{ 00:24:48.255 "params": { 00:24:48.255 "name": "Nvme2", 00:24:48.255 "trtype": "tcp", 00:24:48.255 "traddr": "10.0.0.2", 00:24:48.255 "adrfam": "ipv4", 00:24:48.255 "trsvcid": "4420", 00:24:48.255 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:24:48.255 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:24:48.255 "hdgst": false, 00:24:48.255 "ddgst": false 00:24:48.255 }, 00:24:48.255 "method": "bdev_nvme_attach_controller" 00:24:48.255 },{ 00:24:48.255 "params": { 00:24:48.255 "name": "Nvme3", 00:24:48.255 "trtype": "tcp", 00:24:48.255 "traddr": "10.0.0.2", 00:24:48.255 "adrfam": "ipv4", 00:24:48.255 "trsvcid": "4420", 00:24:48.255 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:24:48.255 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:24:48.255 "hdgst": false, 00:24:48.255 "ddgst": false 00:24:48.255 }, 00:24:48.255 "method": "bdev_nvme_attach_controller" 00:24:48.255 },{ 00:24:48.255 "params": { 00:24:48.255 "name": "Nvme4", 00:24:48.255 "trtype": "tcp", 00:24:48.255 "traddr": "10.0.0.2", 00:24:48.255 "adrfam": "ipv4", 00:24:48.255 "trsvcid": "4420", 00:24:48.255 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:24:48.255 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:24:48.255 "hdgst": false, 00:24:48.255 "ddgst": false 00:24:48.255 }, 00:24:48.255 "method": "bdev_nvme_attach_controller" 00:24:48.255 },{ 00:24:48.255 "params": { 00:24:48.255 "name": "Nvme5", 00:24:48.255 "trtype": "tcp", 00:24:48.255 "traddr": "10.0.0.2", 00:24:48.255 "adrfam": "ipv4", 00:24:48.255 "trsvcid": "4420", 00:24:48.255 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:24:48.255 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:24:48.255 "hdgst": false, 00:24:48.255 "ddgst": false 00:24:48.255 }, 00:24:48.255 "method": "bdev_nvme_attach_controller" 00:24:48.255 },{ 00:24:48.255 "params": { 00:24:48.255 "name": "Nvme6", 00:24:48.255 "trtype": "tcp", 00:24:48.255 "traddr": "10.0.0.2", 00:24:48.255 "adrfam": "ipv4", 00:24:48.255 "trsvcid": "4420", 00:24:48.255 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:24:48.255 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:24:48.255 "hdgst": false, 00:24:48.255 "ddgst": false 00:24:48.255 }, 00:24:48.255 "method": "bdev_nvme_attach_controller" 00:24:48.255 },{ 00:24:48.255 "params": { 00:24:48.255 "name": "Nvme7", 00:24:48.255 "trtype": "tcp", 00:24:48.255 "traddr": "10.0.0.2", 
00:24:48.255 "adrfam": "ipv4", 00:24:48.255 "trsvcid": "4420", 00:24:48.255 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:24:48.256 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:24:48.256 "hdgst": false, 00:24:48.256 "ddgst": false 00:24:48.256 }, 00:24:48.256 "method": "bdev_nvme_attach_controller" 00:24:48.256 },{ 00:24:48.256 "params": { 00:24:48.256 "name": "Nvme8", 00:24:48.256 "trtype": "tcp", 00:24:48.256 "traddr": "10.0.0.2", 00:24:48.256 "adrfam": "ipv4", 00:24:48.256 "trsvcid": "4420", 00:24:48.256 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:24:48.256 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:24:48.256 "hdgst": false, 00:24:48.256 "ddgst": false 00:24:48.256 }, 00:24:48.256 "method": "bdev_nvme_attach_controller" 00:24:48.256 },{ 00:24:48.256 "params": { 00:24:48.256 "name": "Nvme9", 00:24:48.256 "trtype": "tcp", 00:24:48.256 "traddr": "10.0.0.2", 00:24:48.256 "adrfam": "ipv4", 00:24:48.256 "trsvcid": "4420", 00:24:48.256 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:24:48.256 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:24:48.256 "hdgst": false, 00:24:48.256 "ddgst": false 00:24:48.256 }, 00:24:48.256 "method": "bdev_nvme_attach_controller" 00:24:48.256 },{ 00:24:48.256 "params": { 00:24:48.256 "name": "Nvme10", 00:24:48.256 "trtype": "tcp", 00:24:48.256 "traddr": "10.0.0.2", 00:24:48.256 "adrfam": "ipv4", 00:24:48.256 "trsvcid": "4420", 00:24:48.256 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:24:48.256 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:24:48.256 "hdgst": false, 00:24:48.256 "ddgst": false 00:24:48.256 }, 00:24:48.256 "method": "bdev_nvme_attach_controller" 00:24:48.256 }' 00:24:48.256 [2024-07-14 03:57:07.044983] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:24:48.256 [2024-07-14 03:57:07.045070] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2454192 ] 00:24:48.256 EAL: No free 2048 kB hugepages reported on node 1 00:24:48.256 [2024-07-14 03:57:07.111524] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:48.514 [2024-07-14 03:57:07.198700] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:49.888 Running I/O for 1 seconds... 
00:24:50.823 00:24:50.823 Latency(us) 00:24:50.823 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:50.823 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:50.823 Verification LBA range: start 0x0 length 0x400 00:24:50.823 Nvme1n1 : 1.07 331.85 20.74 0.00 0.00 187778.52 34564.17 182529.90 00:24:50.823 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:50.823 Verification LBA range: start 0x0 length 0x400 00:24:50.823 Nvme2n1 : 1.07 377.88 23.62 0.00 0.00 164559.33 20971.52 153014.42 00:24:50.823 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:50.823 Verification LBA range: start 0x0 length 0x400 00:24:50.823 Nvme3n1 : 1.09 366.04 22.88 0.00 0.00 169279.49 12913.02 158451.48 00:24:50.823 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:50.823 Verification LBA range: start 0x0 length 0x400 00:24:50.823 Nvme4n1 : 1.10 392.86 24.55 0.00 0.00 156893.45 17282.09 142140.30 00:24:50.823 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:50.823 Verification LBA range: start 0x0 length 0x400 00:24:50.824 Nvme5n1 : 1.11 391.57 24.47 0.00 0.00 156311.36 17282.09 125052.40 00:24:50.824 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:50.824 Verification LBA range: start 0x0 length 0x400 00:24:50.824 Nvme6n1 : 1.11 392.45 24.53 0.00 0.00 154954.23 15825.73 136703.24 00:24:50.824 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:50.824 Verification LBA range: start 0x0 length 0x400 00:24:50.824 Nvme7n1 : 1.10 362.39 22.65 0.00 0.00 165665.45 25631.86 129712.73 00:24:50.824 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:50.824 Verification LBA range: start 0x0 length 0x400 00:24:50.824 Nvme8n1 : 1.11 392.26 24.52 0.00 0.00 152715.88 16893.72 126605.84 00:24:50.824 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:50.824 Verification LBA range: start 0x0 length 0x400 00:24:50.824 Nvme9n1 : 1.12 389.12 24.32 0.00 0.00 153310.95 13495.56 129712.73 00:24:50.824 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:50.824 Verification LBA range: start 0x0 length 0x400 00:24:50.824 Nvme10n1 : 1.11 357.69 22.36 0.00 0.00 164253.00 31845.64 141363.58 00:24:50.824 =================================================================================================================== 00:24:50.824 Total : 3754.11 234.63 0.00 0.00 161975.87 12913.02 182529.90 00:24:51.082 03:57:09 -- target/shutdown.sh@93 -- # stoptarget 00:24:51.082 03:57:09 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:24:51.082 03:57:09 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:24:51.082 03:57:09 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:24:51.082 03:57:09 -- target/shutdown.sh@45 -- # nvmftestfini 00:24:51.082 03:57:09 -- nvmf/common.sh@476 -- # nvmfcleanup 00:24:51.082 03:57:09 -- nvmf/common.sh@116 -- # sync 00:24:51.082 03:57:09 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:24:51.082 03:57:09 -- nvmf/common.sh@119 -- # set +e 00:24:51.082 03:57:09 -- nvmf/common.sh@120 -- # for i in {1..20} 00:24:51.082 03:57:09 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:24:51.082 rmmod nvme_tcp 00:24:51.082 rmmod nvme_fabrics 00:24:51.082 rmmod 
nvme_keyring 00:24:51.082 03:57:10 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:24:51.082 03:57:10 -- nvmf/common.sh@123 -- # set -e 00:24:51.082 03:57:10 -- nvmf/common.sh@124 -- # return 0 00:24:51.082 03:57:10 -- nvmf/common.sh@477 -- # '[' -n 2453447 ']' 00:24:51.082 03:57:10 -- nvmf/common.sh@478 -- # killprocess 2453447 00:24:51.082 03:57:10 -- common/autotest_common.sh@926 -- # '[' -z 2453447 ']' 00:24:51.082 03:57:10 -- common/autotest_common.sh@930 -- # kill -0 2453447 00:24:51.082 03:57:10 -- common/autotest_common.sh@931 -- # uname 00:24:51.082 03:57:10 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:24:51.082 03:57:10 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2453447 00:24:51.340 03:57:10 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:24:51.340 03:57:10 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:24:51.340 03:57:10 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2453447' 00:24:51.340 killing process with pid 2453447 00:24:51.340 03:57:10 -- common/autotest_common.sh@945 -- # kill 2453447 00:24:51.340 03:57:10 -- common/autotest_common.sh@950 -- # wait 2453447 00:24:51.906 03:57:10 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:24:51.906 03:57:10 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:24:51.906 03:57:10 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:24:51.906 03:57:10 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:51.906 03:57:10 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:24:51.906 03:57:10 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:51.906 03:57:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:51.906 03:57:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:53.812 03:57:12 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:24:53.812 00:24:53.812 real 0m12.697s 00:24:53.812 user 0m38.417s 00:24:53.812 sys 0m3.467s 00:24:53.812 03:57:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:53.812 03:57:12 -- common/autotest_common.sh@10 -- # set +x 00:24:53.812 ************************************ 00:24:53.812 END TEST nvmf_shutdown_tc1 00:24:53.812 ************************************ 00:24:53.813 03:57:12 -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:24:53.813 03:57:12 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:24:53.813 03:57:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:24:53.813 03:57:12 -- common/autotest_common.sh@10 -- # set +x 00:24:53.813 ************************************ 00:24:53.813 START TEST nvmf_shutdown_tc2 00:24:53.813 ************************************ 00:24:53.813 03:57:12 -- common/autotest_common.sh@1104 -- # nvmf_shutdown_tc2 00:24:53.813 03:57:12 -- target/shutdown.sh@98 -- # starttarget 00:24:53.813 03:57:12 -- target/shutdown.sh@15 -- # nvmftestinit 00:24:53.813 03:57:12 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:24:53.813 03:57:12 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:53.813 03:57:12 -- nvmf/common.sh@436 -- # prepare_net_devs 00:24:53.813 03:57:12 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:24:53.813 03:57:12 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:24:53.813 03:57:12 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:53.813 03:57:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:53.813 03:57:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:53.813 
03:57:12 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:24:53.813 03:57:12 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:24:53.813 03:57:12 -- nvmf/common.sh@284 -- # xtrace_disable 00:24:53.813 03:57:12 -- common/autotest_common.sh@10 -- # set +x 00:24:53.813 03:57:12 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:24:53.813 03:57:12 -- nvmf/common.sh@290 -- # pci_devs=() 00:24:53.813 03:57:12 -- nvmf/common.sh@290 -- # local -a pci_devs 00:24:53.813 03:57:12 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:24:53.813 03:57:12 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:24:53.813 03:57:12 -- nvmf/common.sh@292 -- # pci_drivers=() 00:24:53.813 03:57:12 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:24:53.813 03:57:12 -- nvmf/common.sh@294 -- # net_devs=() 00:24:53.813 03:57:12 -- nvmf/common.sh@294 -- # local -ga net_devs 00:24:53.813 03:57:12 -- nvmf/common.sh@295 -- # e810=() 00:24:53.813 03:57:12 -- nvmf/common.sh@295 -- # local -ga e810 00:24:53.813 03:57:12 -- nvmf/common.sh@296 -- # x722=() 00:24:53.813 03:57:12 -- nvmf/common.sh@296 -- # local -ga x722 00:24:53.813 03:57:12 -- nvmf/common.sh@297 -- # mlx=() 00:24:53.813 03:57:12 -- nvmf/common.sh@297 -- # local -ga mlx 00:24:53.813 03:57:12 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:53.813 03:57:12 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:53.813 03:57:12 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:53.813 03:57:12 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:53.813 03:57:12 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:53.813 03:57:12 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:53.813 03:57:12 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:53.813 03:57:12 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:53.813 03:57:12 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:53.813 03:57:12 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:53.813 03:57:12 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:53.813 03:57:12 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:24:53.813 03:57:12 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:24:53.813 03:57:12 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:24:53.813 03:57:12 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:24:53.813 03:57:12 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:24:53.813 03:57:12 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:24:53.813 03:57:12 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:53.813 03:57:12 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:53.813 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:53.813 03:57:12 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:24:53.813 03:57:12 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:24:53.813 03:57:12 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:53.813 03:57:12 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:53.813 03:57:12 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:24:53.813 03:57:12 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:53.813 03:57:12 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:53.813 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:53.813 03:57:12 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:24:53.813 
03:57:12 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:24:53.813 03:57:12 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:53.813 03:57:12 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:53.813 03:57:12 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:24:53.813 03:57:12 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:24:53.813 03:57:12 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:24:53.813 03:57:12 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:24:53.813 03:57:12 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:53.813 03:57:12 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:53.813 03:57:12 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:53.813 03:57:12 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:53.813 03:57:12 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:53.813 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:53.813 03:57:12 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:53.813 03:57:12 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:53.813 03:57:12 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:53.813 03:57:12 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:53.813 03:57:12 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:53.813 03:57:12 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:24:53.813 Found net devices under 0000:0a:00.1: cvl_0_1 00:24:53.813 03:57:12 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:53.813 03:57:12 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:24:53.813 03:57:12 -- nvmf/common.sh@402 -- # is_hw=yes 00:24:53.813 03:57:12 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:24:53.813 03:57:12 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:24:53.813 03:57:12 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:24:53.813 03:57:12 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:53.813 03:57:12 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:53.813 03:57:12 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:53.813 03:57:12 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:24:53.813 03:57:12 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:53.813 03:57:12 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:53.813 03:57:12 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:24:53.813 03:57:12 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:53.813 03:57:12 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:53.813 03:57:12 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:24:53.813 03:57:12 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:24:53.813 03:57:12 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:24:53.813 03:57:12 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:53.813 03:57:12 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:53.813 03:57:12 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:53.813 03:57:12 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:24:53.813 03:57:12 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:53.813 03:57:12 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:53.813 03:57:12 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT 00:24:53.813 03:57:12 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:24:53.813 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:53.813 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.210 ms 00:24:53.813 00:24:53.813 --- 10.0.0.2 ping statistics --- 00:24:53.813 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:53.813 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:24:53.813 03:57:12 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:53.813 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:53.813 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.123 ms 00:24:53.813 00:24:53.813 --- 10.0.0.1 ping statistics --- 00:24:53.813 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:53.813 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms 00:24:53.813 03:57:12 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:53.813 03:57:12 -- nvmf/common.sh@410 -- # return 0 00:24:53.813 03:57:12 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:24:53.813 03:57:12 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:53.813 03:57:12 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:24:53.813 03:57:12 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:24:53.813 03:57:12 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:53.813 03:57:12 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:24:53.813 03:57:12 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:24:54.071 03:57:12 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:24:54.071 03:57:12 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:24:54.071 03:57:12 -- common/autotest_common.sh@712 -- # xtrace_disable 00:24:54.071 03:57:12 -- common/autotest_common.sh@10 -- # set +x 00:24:54.071 03:57:12 -- nvmf/common.sh@469 -- # nvmfpid=2455484 00:24:54.071 03:57:12 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:24:54.071 03:57:12 -- nvmf/common.sh@470 -- # waitforlisten 2455484 00:24:54.071 03:57:12 -- common/autotest_common.sh@819 -- # '[' -z 2455484 ']' 00:24:54.071 03:57:12 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:54.071 03:57:12 -- common/autotest_common.sh@824 -- # local max_retries=100 00:24:54.071 03:57:12 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:54.071 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:54.071 03:57:12 -- common/autotest_common.sh@828 -- # xtrace_disable 00:24:54.071 03:57:12 -- common/autotest_common.sh@10 -- # set +x 00:24:54.071 [2024-07-14 03:57:12.820298] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
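The nvmf_tcp_init trace above moves one port of the E810 NIC into a private network namespace, addresses both ends and checks reachability in each direction before the target is started. Condensed into plain commands (interface names cvl_0_0/cvl_0_1 and the namespace name are this rig's; substitute your own):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target-side port lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                   # root namespace -> target namespace
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target namespace -> root namespace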
00:24:54.071 [2024-07-14 03:57:12.820374] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:54.071 EAL: No free 2048 kB hugepages reported on node 1 00:24:54.071 [2024-07-14 03:57:12.892728] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:54.071 [2024-07-14 03:57:12.982143] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:24:54.071 [2024-07-14 03:57:12.982308] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:54.071 [2024-07-14 03:57:12.982335] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:54.071 [2024-07-14 03:57:12.982351] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:54.071 [2024-07-14 03:57:12.982435] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:54.071 [2024-07-14 03:57:12.982547] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:54.071 [2024-07-14 03:57:12.982614] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:24:54.071 [2024-07-14 03:57:12.982616] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:55.000 03:57:13 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:24:55.000 03:57:13 -- common/autotest_common.sh@852 -- # return 0 00:24:55.000 03:57:13 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:24:55.000 03:57:13 -- common/autotest_common.sh@718 -- # xtrace_disable 00:24:55.000 03:57:13 -- common/autotest_common.sh@10 -- # set +x 00:24:55.000 03:57:13 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:55.000 03:57:13 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:55.000 03:57:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:55.000 03:57:13 -- common/autotest_common.sh@10 -- # set +x 00:24:55.000 [2024-07-14 03:57:13.764357] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:55.000 03:57:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:55.000 03:57:13 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:24:55.000 03:57:13 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:24:55.000 03:57:13 -- common/autotest_common.sh@712 -- # xtrace_disable 00:24:55.000 03:57:13 -- common/autotest_common.sh@10 -- # set +x 00:24:55.000 03:57:13 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:24:55.000 03:57:13 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:55.000 03:57:13 -- target/shutdown.sh@28 -- # cat 00:24:55.000 03:57:13 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:55.000 03:57:13 -- target/shutdown.sh@28 -- # cat 00:24:55.000 03:57:13 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:55.000 03:57:13 -- target/shutdown.sh@28 -- # cat 00:24:55.000 03:57:13 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:55.000 03:57:13 -- target/shutdown.sh@28 -- # cat 00:24:55.000 03:57:13 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:55.000 03:57:13 -- target/shutdown.sh@28 -- # cat 00:24:55.000 03:57:13 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:55.000 03:57:13 -- 
target/shutdown.sh@28 -- # cat 00:24:55.000 03:57:13 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:55.000 03:57:13 -- target/shutdown.sh@28 -- # cat 00:24:55.000 03:57:13 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:55.000 03:57:13 -- target/shutdown.sh@28 -- # cat 00:24:55.000 03:57:13 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:55.000 03:57:13 -- target/shutdown.sh@28 -- # cat 00:24:55.000 03:57:13 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:55.000 03:57:13 -- target/shutdown.sh@28 -- # cat 00:24:55.000 03:57:13 -- target/shutdown.sh@35 -- # rpc_cmd 00:24:55.000 03:57:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:55.000 03:57:13 -- common/autotest_common.sh@10 -- # set +x 00:24:55.000 Malloc1 00:24:55.000 [2024-07-14 03:57:13.849564] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:55.000 Malloc2 00:24:55.000 Malloc3 00:24:55.257 Malloc4 00:24:55.257 Malloc5 00:24:55.257 Malloc6 00:24:55.257 Malloc7 00:24:55.257 Malloc8 00:24:55.515 Malloc9 00:24:55.515 Malloc10 00:24:55.515 03:57:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:55.515 03:57:14 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:24:55.515 03:57:14 -- common/autotest_common.sh@718 -- # xtrace_disable 00:24:55.515 03:57:14 -- common/autotest_common.sh@10 -- # set +x 00:24:55.515 03:57:14 -- target/shutdown.sh@102 -- # perfpid=2455673 00:24:55.515 03:57:14 -- target/shutdown.sh@103 -- # waitforlisten 2455673 /var/tmp/bdevperf.sock 00:24:55.515 03:57:14 -- common/autotest_common.sh@819 -- # '[' -z 2455673 ']' 00:24:55.515 03:57:14 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:55.515 03:57:14 -- target/shutdown.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:24:55.515 03:57:14 -- target/shutdown.sh@101 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:24:55.515 03:57:14 -- common/autotest_common.sh@824 -- # local max_retries=100 00:24:55.515 03:57:14 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:55.515 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
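The Malloc1 .. Malloc10 lines and the "NVMe/TCP Target Listening on 10.0.0.2 port 4420" notice are the visible result of the create_subsystems step, which batches its RPCs into rpcs.txt and replays them with rpc_cmd. The script body is not shown in this log, so the following per-subsystem sketch is only an approximation built from standard SPDK RPCs; the bdev size, block size and serial number are illustrative rather than the test's actual values:

  i=1   # repeated for i in 1..10
  scripts/rpc.py bdev_malloc_create -b Malloc$i 64 512
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420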
00:24:55.515 03:57:14 -- nvmf/common.sh@520 -- # config=() 00:24:55.515 03:57:14 -- common/autotest_common.sh@828 -- # xtrace_disable 00:24:55.515 03:57:14 -- nvmf/common.sh@520 -- # local subsystem config 00:24:55.515 03:57:14 -- common/autotest_common.sh@10 -- # set +x 00:24:55.515 03:57:14 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:55.515 03:57:14 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:55.515 { 00:24:55.515 "params": { 00:24:55.515 "name": "Nvme$subsystem", 00:24:55.515 "trtype": "$TEST_TRANSPORT", 00:24:55.515 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:55.515 "adrfam": "ipv4", 00:24:55.515 "trsvcid": "$NVMF_PORT", 00:24:55.515 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:55.515 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:55.515 "hdgst": ${hdgst:-false}, 00:24:55.515 "ddgst": ${ddgst:-false} 00:24:55.515 }, 00:24:55.515 "method": "bdev_nvme_attach_controller" 00:24:55.515 } 00:24:55.515 EOF 00:24:55.515 )") 00:24:55.515 03:57:14 -- nvmf/common.sh@542 -- # cat 00:24:55.515 03:57:14 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:55.515 03:57:14 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:55.515 { 00:24:55.515 "params": { 00:24:55.515 "name": "Nvme$subsystem", 00:24:55.515 "trtype": "$TEST_TRANSPORT", 00:24:55.515 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:55.515 "adrfam": "ipv4", 00:24:55.515 "trsvcid": "$NVMF_PORT", 00:24:55.515 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:55.515 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:55.515 "hdgst": ${hdgst:-false}, 00:24:55.515 "ddgst": ${ddgst:-false} 00:24:55.515 }, 00:24:55.515 "method": "bdev_nvme_attach_controller" 00:24:55.515 } 00:24:55.515 EOF 00:24:55.515 )") 00:24:55.515 03:57:14 -- nvmf/common.sh@542 -- # cat 00:24:55.515 03:57:14 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:55.515 03:57:14 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:55.515 { 00:24:55.515 "params": { 00:24:55.515 "name": "Nvme$subsystem", 00:24:55.515 "trtype": "$TEST_TRANSPORT", 00:24:55.515 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:55.515 "adrfam": "ipv4", 00:24:55.515 "trsvcid": "$NVMF_PORT", 00:24:55.515 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:55.515 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:55.515 "hdgst": ${hdgst:-false}, 00:24:55.515 "ddgst": ${ddgst:-false} 00:24:55.515 }, 00:24:55.515 "method": "bdev_nvme_attach_controller" 00:24:55.515 } 00:24:55.515 EOF 00:24:55.515 )") 00:24:55.515 03:57:14 -- nvmf/common.sh@542 -- # cat 00:24:55.515 03:57:14 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:55.515 03:57:14 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:55.515 { 00:24:55.515 "params": { 00:24:55.515 "name": "Nvme$subsystem", 00:24:55.515 "trtype": "$TEST_TRANSPORT", 00:24:55.515 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:55.515 "adrfam": "ipv4", 00:24:55.515 "trsvcid": "$NVMF_PORT", 00:24:55.515 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:55.515 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:55.515 "hdgst": ${hdgst:-false}, 00:24:55.515 "ddgst": ${ddgst:-false} 00:24:55.515 }, 00:24:55.515 "method": "bdev_nvme_attach_controller" 00:24:55.515 } 00:24:55.515 EOF 00:24:55.515 )") 00:24:55.515 03:57:14 -- nvmf/common.sh@542 -- # cat 00:24:55.515 03:57:14 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:55.515 03:57:14 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:55.515 { 00:24:55.515 "params": { 00:24:55.515 "name": "Nvme$subsystem", 00:24:55.515 "trtype": 
"$TEST_TRANSPORT", 00:24:55.515 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:55.515 "adrfam": "ipv4", 00:24:55.515 "trsvcid": "$NVMF_PORT", 00:24:55.515 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:55.515 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:55.515 "hdgst": ${hdgst:-false}, 00:24:55.515 "ddgst": ${ddgst:-false} 00:24:55.515 }, 00:24:55.515 "method": "bdev_nvme_attach_controller" 00:24:55.515 } 00:24:55.515 EOF 00:24:55.515 )") 00:24:55.515 03:57:14 -- nvmf/common.sh@542 -- # cat 00:24:55.515 03:57:14 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:55.515 03:57:14 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:55.515 { 00:24:55.515 "params": { 00:24:55.515 "name": "Nvme$subsystem", 00:24:55.515 "trtype": "$TEST_TRANSPORT", 00:24:55.515 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:55.515 "adrfam": "ipv4", 00:24:55.515 "trsvcid": "$NVMF_PORT", 00:24:55.515 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:55.515 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:55.515 "hdgst": ${hdgst:-false}, 00:24:55.515 "ddgst": ${ddgst:-false} 00:24:55.515 }, 00:24:55.515 "method": "bdev_nvme_attach_controller" 00:24:55.515 } 00:24:55.515 EOF 00:24:55.515 )") 00:24:55.515 03:57:14 -- nvmf/common.sh@542 -- # cat 00:24:55.515 03:57:14 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:55.515 03:57:14 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:55.515 { 00:24:55.515 "params": { 00:24:55.515 "name": "Nvme$subsystem", 00:24:55.515 "trtype": "$TEST_TRANSPORT", 00:24:55.515 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:55.515 "adrfam": "ipv4", 00:24:55.515 "trsvcid": "$NVMF_PORT", 00:24:55.515 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:55.515 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:55.515 "hdgst": ${hdgst:-false}, 00:24:55.515 "ddgst": ${ddgst:-false} 00:24:55.515 }, 00:24:55.515 "method": "bdev_nvme_attach_controller" 00:24:55.515 } 00:24:55.515 EOF 00:24:55.515 )") 00:24:55.515 03:57:14 -- nvmf/common.sh@542 -- # cat 00:24:55.515 03:57:14 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:55.515 03:57:14 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:55.515 { 00:24:55.515 "params": { 00:24:55.515 "name": "Nvme$subsystem", 00:24:55.515 "trtype": "$TEST_TRANSPORT", 00:24:55.515 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:55.515 "adrfam": "ipv4", 00:24:55.515 "trsvcid": "$NVMF_PORT", 00:24:55.515 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:55.515 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:55.515 "hdgst": ${hdgst:-false}, 00:24:55.515 "ddgst": ${ddgst:-false} 00:24:55.515 }, 00:24:55.515 "method": "bdev_nvme_attach_controller" 00:24:55.515 } 00:24:55.515 EOF 00:24:55.515 )") 00:24:55.515 03:57:14 -- nvmf/common.sh@542 -- # cat 00:24:55.515 03:57:14 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:55.516 03:57:14 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:55.516 { 00:24:55.516 "params": { 00:24:55.516 "name": "Nvme$subsystem", 00:24:55.516 "trtype": "$TEST_TRANSPORT", 00:24:55.516 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:55.516 "adrfam": "ipv4", 00:24:55.516 "trsvcid": "$NVMF_PORT", 00:24:55.516 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:55.516 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:55.516 "hdgst": ${hdgst:-false}, 00:24:55.516 "ddgst": ${ddgst:-false} 00:24:55.516 }, 00:24:55.516 "method": "bdev_nvme_attach_controller" 00:24:55.516 } 00:24:55.516 EOF 00:24:55.516 )") 00:24:55.516 03:57:14 -- nvmf/common.sh@542 -- # cat 00:24:55.516 
03:57:14 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:55.516 03:57:14 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:55.516 { 00:24:55.516 "params": { 00:24:55.516 "name": "Nvme$subsystem", 00:24:55.516 "trtype": "$TEST_TRANSPORT", 00:24:55.516 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:55.516 "adrfam": "ipv4", 00:24:55.516 "trsvcid": "$NVMF_PORT", 00:24:55.516 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:55.516 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:55.516 "hdgst": ${hdgst:-false}, 00:24:55.516 "ddgst": ${ddgst:-false} 00:24:55.516 }, 00:24:55.516 "method": "bdev_nvme_attach_controller" 00:24:55.516 } 00:24:55.516 EOF 00:24:55.516 )") 00:24:55.516 03:57:14 -- nvmf/common.sh@542 -- # cat 00:24:55.516 03:57:14 -- nvmf/common.sh@544 -- # jq . 00:24:55.516 03:57:14 -- nvmf/common.sh@545 -- # IFS=, 00:24:55.516 03:57:14 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:24:55.516 "params": { 00:24:55.516 "name": "Nvme1", 00:24:55.516 "trtype": "tcp", 00:24:55.516 "traddr": "10.0.0.2", 00:24:55.516 "adrfam": "ipv4", 00:24:55.516 "trsvcid": "4420", 00:24:55.516 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:55.516 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:55.516 "hdgst": false, 00:24:55.516 "ddgst": false 00:24:55.516 }, 00:24:55.516 "method": "bdev_nvme_attach_controller" 00:24:55.516 },{ 00:24:55.516 "params": { 00:24:55.516 "name": "Nvme2", 00:24:55.516 "trtype": "tcp", 00:24:55.516 "traddr": "10.0.0.2", 00:24:55.516 "adrfam": "ipv4", 00:24:55.516 "trsvcid": "4420", 00:24:55.516 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:24:55.516 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:24:55.516 "hdgst": false, 00:24:55.516 "ddgst": false 00:24:55.516 }, 00:24:55.516 "method": "bdev_nvme_attach_controller" 00:24:55.516 },{ 00:24:55.516 "params": { 00:24:55.516 "name": "Nvme3", 00:24:55.516 "trtype": "tcp", 00:24:55.516 "traddr": "10.0.0.2", 00:24:55.516 "adrfam": "ipv4", 00:24:55.516 "trsvcid": "4420", 00:24:55.516 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:24:55.516 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:24:55.516 "hdgst": false, 00:24:55.516 "ddgst": false 00:24:55.516 }, 00:24:55.516 "method": "bdev_nvme_attach_controller" 00:24:55.516 },{ 00:24:55.516 "params": { 00:24:55.516 "name": "Nvme4", 00:24:55.516 "trtype": "tcp", 00:24:55.516 "traddr": "10.0.0.2", 00:24:55.516 "adrfam": "ipv4", 00:24:55.516 "trsvcid": "4420", 00:24:55.516 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:24:55.516 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:24:55.516 "hdgst": false, 00:24:55.516 "ddgst": false 00:24:55.516 }, 00:24:55.516 "method": "bdev_nvme_attach_controller" 00:24:55.516 },{ 00:24:55.516 "params": { 00:24:55.516 "name": "Nvme5", 00:24:55.516 "trtype": "tcp", 00:24:55.516 "traddr": "10.0.0.2", 00:24:55.516 "adrfam": "ipv4", 00:24:55.516 "trsvcid": "4420", 00:24:55.516 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:24:55.516 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:24:55.516 "hdgst": false, 00:24:55.516 "ddgst": false 00:24:55.516 }, 00:24:55.516 "method": "bdev_nvme_attach_controller" 00:24:55.516 },{ 00:24:55.516 "params": { 00:24:55.516 "name": "Nvme6", 00:24:55.516 "trtype": "tcp", 00:24:55.516 "traddr": "10.0.0.2", 00:24:55.516 "adrfam": "ipv4", 00:24:55.516 "trsvcid": "4420", 00:24:55.516 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:24:55.516 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:24:55.516 "hdgst": false, 00:24:55.516 "ddgst": false 00:24:55.516 }, 00:24:55.516 "method": "bdev_nvme_attach_controller" 00:24:55.516 },{ 00:24:55.516 "params": { 00:24:55.516 
"name": "Nvme7", 00:24:55.516 "trtype": "tcp", 00:24:55.516 "traddr": "10.0.0.2", 00:24:55.516 "adrfam": "ipv4", 00:24:55.516 "trsvcid": "4420", 00:24:55.516 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:24:55.516 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:24:55.516 "hdgst": false, 00:24:55.516 "ddgst": false 00:24:55.516 }, 00:24:55.516 "method": "bdev_nvme_attach_controller" 00:24:55.516 },{ 00:24:55.516 "params": { 00:24:55.516 "name": "Nvme8", 00:24:55.516 "trtype": "tcp", 00:24:55.516 "traddr": "10.0.0.2", 00:24:55.516 "adrfam": "ipv4", 00:24:55.516 "trsvcid": "4420", 00:24:55.516 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:24:55.516 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:24:55.516 "hdgst": false, 00:24:55.516 "ddgst": false 00:24:55.516 }, 00:24:55.516 "method": "bdev_nvme_attach_controller" 00:24:55.516 },{ 00:24:55.516 "params": { 00:24:55.516 "name": "Nvme9", 00:24:55.516 "trtype": "tcp", 00:24:55.516 "traddr": "10.0.0.2", 00:24:55.516 "adrfam": "ipv4", 00:24:55.516 "trsvcid": "4420", 00:24:55.516 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:24:55.516 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:24:55.516 "hdgst": false, 00:24:55.516 "ddgst": false 00:24:55.516 }, 00:24:55.516 "method": "bdev_nvme_attach_controller" 00:24:55.516 },{ 00:24:55.516 "params": { 00:24:55.516 "name": "Nvme10", 00:24:55.516 "trtype": "tcp", 00:24:55.516 "traddr": "10.0.0.2", 00:24:55.516 "adrfam": "ipv4", 00:24:55.516 "trsvcid": "4420", 00:24:55.516 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:24:55.516 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:24:55.516 "hdgst": false, 00:24:55.516 "ddgst": false 00:24:55.516 }, 00:24:55.516 "method": "bdev_nvme_attach_controller" 00:24:55.516 }' 00:24:55.516 [2024-07-14 03:57:14.358057] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:24:55.516 [2024-07-14 03:57:14.358134] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2455673 ] 00:24:55.516 EAL: No free 2048 kB hugepages reported on node 1 00:24:55.516 [2024-07-14 03:57:14.424452] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:55.775 [2024-07-14 03:57:14.512433] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:57.177 Running I/O for 10 seconds... 
00:24:57.177 03:57:16 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:24:57.177 03:57:16 -- common/autotest_common.sh@852 -- # return 0 00:24:57.177 03:57:16 -- target/shutdown.sh@104 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:24:57.177 03:57:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:57.177 03:57:16 -- common/autotest_common.sh@10 -- # set +x 00:24:57.437 03:57:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:57.437 03:57:16 -- target/shutdown.sh@106 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:24:57.437 03:57:16 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:24:57.437 03:57:16 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:24:57.437 03:57:16 -- target/shutdown.sh@57 -- # local ret=1 00:24:57.437 03:57:16 -- target/shutdown.sh@58 -- # local i 00:24:57.437 03:57:16 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:24:57.437 03:57:16 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:24:57.437 03:57:16 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:24:57.437 03:57:16 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:24:57.437 03:57:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:57.437 03:57:16 -- common/autotest_common.sh@10 -- # set +x 00:24:57.437 03:57:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:57.437 03:57:16 -- target/shutdown.sh@60 -- # read_io_count=3 00:24:57.437 03:57:16 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:24:57.437 03:57:16 -- target/shutdown.sh@67 -- # sleep 0.25 00:24:57.696 03:57:16 -- target/shutdown.sh@59 -- # (( i-- )) 00:24:57.696 03:57:16 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:24:57.696 03:57:16 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:24:57.696 03:57:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:57.696 03:57:16 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:24:57.696 03:57:16 -- common/autotest_common.sh@10 -- # set +x 00:24:57.696 03:57:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:57.696 03:57:16 -- target/shutdown.sh@60 -- # read_io_count=87 00:24:57.696 03:57:16 -- target/shutdown.sh@63 -- # '[' 87 -ge 100 ']' 00:24:57.696 03:57:16 -- target/shutdown.sh@67 -- # sleep 0.25 00:24:57.955 03:57:16 -- target/shutdown.sh@59 -- # (( i-- )) 00:24:57.955 03:57:16 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:24:57.955 03:57:16 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:24:57.955 03:57:16 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:24:57.955 03:57:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:57.955 03:57:16 -- common/autotest_common.sh@10 -- # set +x 00:24:57.955 03:57:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:57.955 03:57:16 -- target/shutdown.sh@60 -- # read_io_count=211 00:24:57.955 03:57:16 -- target/shutdown.sh@63 -- # '[' 211 -ge 100 ']' 00:24:57.955 03:57:16 -- target/shutdown.sh@64 -- # ret=0 00:24:57.955 03:57:16 -- target/shutdown.sh@65 -- # break 00:24:57.955 03:57:16 -- target/shutdown.sh@69 -- # return 0 00:24:57.955 03:57:16 -- target/shutdown.sh@109 -- # killprocess 2455673 00:24:57.955 03:57:16 -- common/autotest_common.sh@926 -- # '[' -z 2455673 ']' 00:24:57.955 03:57:16 -- common/autotest_common.sh@930 -- # kill -0 2455673 00:24:57.955 03:57:16 -- common/autotest_common.sh@931 -- # uname 00:24:57.955 03:57:16 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 
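The read_io_count values just traced (3, then 87, then 211) come from shutdown.sh's waitforio helper: it polls the bdevperf RPC socket until Nvme1n1 has completed at least 100 reads, so the target is only torn down while I/O is demonstrably in flight. A paraphrase of the loop, using scripts/rpc.py directly where the test goes through its rpc_cmd wrapper:

  i=10
  while (( i != 0 )); do
      read_io_count=$(scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 \
          | jq -r '.bdevs[0].num_read_ops')
      (( read_io_count >= 100 )) && break   # enough I/O observed, proceed with the shutdown
      sleep 0.25
      (( i-- ))
  done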
00:24:57.955 03:57:16 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2455673 00:24:57.955 03:57:16 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:24:57.955 03:57:16 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:24:57.955 03:57:16 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2455673' 00:24:57.955 killing process with pid 2455673 00:24:57.955 03:57:16 -- common/autotest_common.sh@945 -- # kill 2455673 00:24:57.955 03:57:16 -- common/autotest_common.sh@950 -- # wait 2455673 00:24:57.955 Received shutdown signal, test time was about 0.779075 seconds 00:24:57.955 00:24:57.955 Latency(us) 00:24:57.955 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:57.955 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:57.955 Verification LBA range: start 0x0 length 0x400 00:24:57.955 Nvme1n1 : 0.75 361.11 22.57 0.00 0.00 172438.39 22524.97 176316.11 00:24:57.955 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:57.955 Verification LBA range: start 0x0 length 0x400 00:24:57.955 Nvme2n1 : 0.76 416.52 26.03 0.00 0.00 147595.86 27573.67 115731.72 00:24:57.955 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:57.955 Verification LBA range: start 0x0 length 0x400 00:24:57.955 Nvme3n1 : 0.77 413.79 25.86 0.00 0.00 148349.50 6140.97 161558.38 00:24:57.955 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:57.955 Verification LBA range: start 0x0 length 0x400 00:24:57.955 Nvme4n1 : 0.75 419.14 26.20 0.00 0.00 143862.04 24855.13 114178.28 00:24:57.955 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:57.955 Verification LBA range: start 0x0 length 0x400 00:24:57.955 Nvme5n1 : 0.77 410.24 25.64 0.00 0.00 145264.45 27379.48 114178.28 00:24:57.955 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:57.955 Verification LBA range: start 0x0 length 0x400 00:24:57.955 Nvme6n1 : 0.76 422.36 26.40 0.00 0.00 139115.88 7136.14 118838.61 00:24:57.955 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:57.955 Verification LBA range: start 0x0 length 0x400 00:24:57.955 Nvme7n1 : 0.75 364.35 22.77 0.00 0.00 159984.41 22913.33 132042.90 00:24:57.955 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:57.955 Verification LBA range: start 0x0 length 0x400 00:24:57.955 Nvme8n1 : 0.77 409.77 25.61 0.00 0.00 140807.85 24563.86 115731.72 00:24:57.955 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:57.955 Verification LBA range: start 0x0 length 0x400 00:24:57.955 Nvme9n1 : 0.78 404.75 25.30 0.00 0.00 142704.27 12136.30 121945.51 00:24:57.955 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:57.955 Verification LBA range: start 0x0 length 0x400 00:24:57.955 Nvme10n1 : 0.77 419.41 26.21 0.00 0.00 135206.91 6456.51 118838.61 00:24:57.955 =================================================================================================================== 00:24:57.955 Total : 4041.44 252.59 0.00 0.00 146962.34 6140.97 176316.11 00:24:58.214 03:57:17 -- target/shutdown.sh@112 -- # sleep 1 00:24:59.588 03:57:18 -- target/shutdown.sh@113 -- # kill -0 2455484 00:24:59.588 03:57:18 -- target/shutdown.sh@115 -- # stoptarget 00:24:59.588 03:57:18 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:24:59.588 03:57:18 -- target/shutdown.sh@42 -- # 
rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:24:59.588 03:57:18 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:24:59.588 03:57:18 -- target/shutdown.sh@45 -- # nvmftestfini 00:24:59.588 03:57:18 -- nvmf/common.sh@476 -- # nvmfcleanup 00:24:59.588 03:57:18 -- nvmf/common.sh@116 -- # sync 00:24:59.588 03:57:18 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:24:59.588 03:57:18 -- nvmf/common.sh@119 -- # set +e 00:24:59.588 03:57:18 -- nvmf/common.sh@120 -- # for i in {1..20} 00:24:59.588 03:57:18 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:24:59.588 rmmod nvme_tcp 00:24:59.588 rmmod nvme_fabrics 00:24:59.588 rmmod nvme_keyring 00:24:59.588 03:57:18 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:24:59.588 03:57:18 -- nvmf/common.sh@123 -- # set -e 00:24:59.588 03:57:18 -- nvmf/common.sh@124 -- # return 0 00:24:59.588 03:57:18 -- nvmf/common.sh@477 -- # '[' -n 2455484 ']' 00:24:59.588 03:57:18 -- nvmf/common.sh@478 -- # killprocess 2455484 00:24:59.588 03:57:18 -- common/autotest_common.sh@926 -- # '[' -z 2455484 ']' 00:24:59.588 03:57:18 -- common/autotest_common.sh@930 -- # kill -0 2455484 00:24:59.588 03:57:18 -- common/autotest_common.sh@931 -- # uname 00:24:59.588 03:57:18 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:24:59.588 03:57:18 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2455484 00:24:59.588 03:57:18 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:24:59.588 03:57:18 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:24:59.588 03:57:18 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2455484' 00:24:59.588 killing process with pid 2455484 00:24:59.588 03:57:18 -- common/autotest_common.sh@945 -- # kill 2455484 00:24:59.588 03:57:18 -- common/autotest_common.sh@950 -- # wait 2455484 00:24:59.847 03:57:18 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:24:59.847 03:57:18 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:24:59.847 03:57:18 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:24:59.847 03:57:18 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:59.847 03:57:18 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:24:59.847 03:57:18 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:59.847 03:57:18 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:59.847 03:57:18 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:02.382 03:57:20 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:25:02.382 00:25:02.382 real 0m8.135s 00:25:02.382 user 0m25.452s 00:25:02.382 sys 0m1.469s 00:25:02.382 03:57:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:02.382 03:57:20 -- common/autotest_common.sh@10 -- # set +x 00:25:02.382 ************************************ 00:25:02.382 END TEST nvmf_shutdown_tc2 00:25:02.382 ************************************ 00:25:02.382 03:57:20 -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:25:02.382 03:57:20 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:25:02.382 03:57:20 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:02.382 03:57:20 -- common/autotest_common.sh@10 -- # set +x 00:25:02.382 ************************************ 00:25:02.382 START TEST nvmf_shutdown_tc3 00:25:02.382 ************************************ 00:25:02.382 03:57:20 -- common/autotest_common.sh@1104 -- # nvmf_shutdown_tc3 
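The tc3 run that begins below repeats the nvmftestinit sequence: gather_supported_nvmf_pci_devs matches the two E810 ports (vendor:device 0x8086:0x159b) and resolves each PCI address to its kernel net device by listing /sys/bus/pci/devices/<address>/net, which is what produces the "Found net devices under 0000:0a:00.0: cvl_0_0" lines, as already seen in tc2. The same lookup can be done by hand; the PCI address below is this rig's, adjust for other hardware:

  ls /sys/bus/pci/devices/0000:0a:00.0/net/   # prints the netdev name(s) bound to that port, e.g. cvl_0_0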
00:25:02.382 03:57:20 -- target/shutdown.sh@120 -- # starttarget 00:25:02.382 03:57:20 -- target/shutdown.sh@15 -- # nvmftestinit 00:25:02.382 03:57:20 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:25:02.382 03:57:20 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:02.382 03:57:20 -- nvmf/common.sh@436 -- # prepare_net_devs 00:25:02.382 03:57:20 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:25:02.382 03:57:20 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:25:02.382 03:57:20 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:02.382 03:57:20 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:02.382 03:57:20 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:02.382 03:57:20 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:25:02.382 03:57:20 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:25:02.382 03:57:20 -- nvmf/common.sh@284 -- # xtrace_disable 00:25:02.382 03:57:20 -- common/autotest_common.sh@10 -- # set +x 00:25:02.382 03:57:20 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:25:02.382 03:57:20 -- nvmf/common.sh@290 -- # pci_devs=() 00:25:02.382 03:57:20 -- nvmf/common.sh@290 -- # local -a pci_devs 00:25:02.382 03:57:20 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:25:02.382 03:57:20 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:25:02.382 03:57:20 -- nvmf/common.sh@292 -- # pci_drivers=() 00:25:02.382 03:57:20 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:25:02.382 03:57:20 -- nvmf/common.sh@294 -- # net_devs=() 00:25:02.382 03:57:20 -- nvmf/common.sh@294 -- # local -ga net_devs 00:25:02.382 03:57:20 -- nvmf/common.sh@295 -- # e810=() 00:25:02.382 03:57:20 -- nvmf/common.sh@295 -- # local -ga e810 00:25:02.382 03:57:20 -- nvmf/common.sh@296 -- # x722=() 00:25:02.382 03:57:20 -- nvmf/common.sh@296 -- # local -ga x722 00:25:02.382 03:57:20 -- nvmf/common.sh@297 -- # mlx=() 00:25:02.382 03:57:20 -- nvmf/common.sh@297 -- # local -ga mlx 00:25:02.382 03:57:20 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:02.382 03:57:20 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:02.382 03:57:20 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:02.382 03:57:20 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:02.382 03:57:20 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:02.382 03:57:20 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:02.382 03:57:20 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:02.382 03:57:20 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:02.382 03:57:20 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:02.382 03:57:20 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:02.382 03:57:20 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:02.382 03:57:20 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:25:02.382 03:57:20 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:25:02.382 03:57:20 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:25:02.382 03:57:20 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:25:02.382 03:57:20 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:25:02.382 03:57:20 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:25:02.382 03:57:20 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:02.382 03:57:20 -- nvmf/common.sh@340 -- # echo 
'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:02.382 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:02.382 03:57:20 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:02.382 03:57:20 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:02.382 03:57:20 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:02.382 03:57:20 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:02.382 03:57:20 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:02.382 03:57:20 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:02.382 03:57:20 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:02.382 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:02.382 03:57:20 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:02.382 03:57:20 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:02.382 03:57:20 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:02.382 03:57:20 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:02.382 03:57:20 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:02.382 03:57:20 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:25:02.382 03:57:20 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:25:02.382 03:57:20 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:25:02.382 03:57:20 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:02.382 03:57:20 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:02.382 03:57:20 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:02.382 03:57:20 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:02.382 03:57:20 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:02.382 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:02.382 03:57:20 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:02.382 03:57:20 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:02.382 03:57:20 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:02.382 03:57:20 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:02.382 03:57:20 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:02.382 03:57:20 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:02.382 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:02.382 03:57:20 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:02.382 03:57:20 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:25:02.382 03:57:20 -- nvmf/common.sh@402 -- # is_hw=yes 00:25:02.382 03:57:20 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:25:02.382 03:57:20 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:25:02.382 03:57:20 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:25:02.382 03:57:20 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:02.382 03:57:20 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:02.382 03:57:20 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:02.382 03:57:20 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:25:02.382 03:57:20 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:02.382 03:57:20 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:02.382 03:57:20 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:25:02.383 03:57:20 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:02.383 03:57:20 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:02.383 03:57:20 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:25:02.383 03:57:20 -- nvmf/common.sh@244 -- # ip -4 addr 
flush cvl_0_1 00:25:02.383 03:57:20 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:25:02.383 03:57:20 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:02.383 03:57:20 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:02.383 03:57:20 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:02.383 03:57:20 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:25:02.383 03:57:20 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:02.383 03:57:20 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:02.383 03:57:20 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:02.383 03:57:20 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:25:02.383 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:02.383 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.186 ms 00:25:02.383 00:25:02.383 --- 10.0.0.2 ping statistics --- 00:25:02.383 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:02.383 rtt min/avg/max/mdev = 0.186/0.186/0.186/0.000 ms 00:25:02.383 03:57:20 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:02.383 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:02.383 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.223 ms 00:25:02.383 00:25:02.383 --- 10.0.0.1 ping statistics --- 00:25:02.383 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:02.383 rtt min/avg/max/mdev = 0.223/0.223/0.223/0.000 ms 00:25:02.383 03:57:20 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:02.383 03:57:20 -- nvmf/common.sh@410 -- # return 0 00:25:02.383 03:57:20 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:25:02.383 03:57:20 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:02.383 03:57:20 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:25:02.383 03:57:20 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:25:02.383 03:57:20 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:02.383 03:57:20 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:25:02.383 03:57:20 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:25:02.383 03:57:20 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:25:02.383 03:57:20 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:25:02.383 03:57:20 -- common/autotest_common.sh@712 -- # xtrace_disable 00:25:02.383 03:57:20 -- common/autotest_common.sh@10 -- # set +x 00:25:02.383 03:57:20 -- nvmf/common.sh@469 -- # nvmfpid=2456613 00:25:02.383 03:57:20 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:25:02.383 03:57:20 -- nvmf/common.sh@470 -- # waitforlisten 2456613 00:25:02.383 03:57:20 -- common/autotest_common.sh@819 -- # '[' -z 2456613 ']' 00:25:02.383 03:57:20 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:02.383 03:57:20 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:02.383 03:57:20 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:02.383 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:25:02.383 03:57:20 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:02.383 03:57:20 -- common/autotest_common.sh@10 -- # set +x 00:25:02.383 [2024-07-14 03:57:21.016729] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:25:02.383 [2024-07-14 03:57:21.016807] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:02.383 EAL: No free 2048 kB hugepages reported on node 1 00:25:02.383 [2024-07-14 03:57:21.085214] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:02.383 [2024-07-14 03:57:21.174432] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:02.383 [2024-07-14 03:57:21.174572] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:02.383 [2024-07-14 03:57:21.174589] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:02.383 [2024-07-14 03:57:21.174603] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:02.383 [2024-07-14 03:57:21.174685] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:02.383 [2024-07-14 03:57:21.174783] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:02.383 [2024-07-14 03:57:21.174872] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:25:02.383 [2024-07-14 03:57:21.174875] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:03.317 03:57:21 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:03.317 03:57:21 -- common/autotest_common.sh@852 -- # return 0 00:25:03.317 03:57:21 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:25:03.317 03:57:21 -- common/autotest_common.sh@718 -- # xtrace_disable 00:25:03.317 03:57:21 -- common/autotest_common.sh@10 -- # set +x 00:25:03.317 03:57:21 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:03.317 03:57:21 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:03.317 03:57:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:03.317 03:57:21 -- common/autotest_common.sh@10 -- # set +x 00:25:03.318 [2024-07-14 03:57:21.984473] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:03.318 03:57:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:03.318 03:57:21 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:25:03.318 03:57:21 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:25:03.318 03:57:21 -- common/autotest_common.sh@712 -- # xtrace_disable 00:25:03.318 03:57:21 -- common/autotest_common.sh@10 -- # set +x 00:25:03.318 03:57:21 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:25:03.318 03:57:21 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:03.318 03:57:21 -- target/shutdown.sh@28 -- # cat 00:25:03.318 03:57:22 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:03.318 03:57:22 -- target/shutdown.sh@28 -- # cat 00:25:03.318 03:57:22 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:03.318 03:57:22 -- target/shutdown.sh@28 -- # cat 00:25:03.318 03:57:22 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:03.318 03:57:22 -- target/shutdown.sh@28 
-- # cat 00:25:03.318 03:57:22 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:03.318 03:57:22 -- target/shutdown.sh@28 -- # cat 00:25:03.318 03:57:22 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:03.318 03:57:22 -- target/shutdown.sh@28 -- # cat 00:25:03.318 03:57:22 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:03.318 03:57:22 -- target/shutdown.sh@28 -- # cat 00:25:03.318 03:57:22 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:03.318 03:57:22 -- target/shutdown.sh@28 -- # cat 00:25:03.318 03:57:22 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:03.318 03:57:22 -- target/shutdown.sh@28 -- # cat 00:25:03.318 03:57:22 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:03.318 03:57:22 -- target/shutdown.sh@28 -- # cat 00:25:03.318 03:57:22 -- target/shutdown.sh@35 -- # rpc_cmd 00:25:03.318 03:57:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:03.318 03:57:22 -- common/autotest_common.sh@10 -- # set +x 00:25:03.318 Malloc1 00:25:03.318 [2024-07-14 03:57:22.073600] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:03.318 Malloc2 00:25:03.318 Malloc3 00:25:03.318 Malloc4 00:25:03.318 Malloc5 00:25:03.576 Malloc6 00:25:03.576 Malloc7 00:25:03.576 Malloc8 00:25:03.576 Malloc9 00:25:03.576 Malloc10 00:25:03.836 03:57:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:03.836 03:57:22 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:25:03.836 03:57:22 -- common/autotest_common.sh@718 -- # xtrace_disable 00:25:03.836 03:57:22 -- common/autotest_common.sh@10 -- # set +x 00:25:03.836 03:57:22 -- target/shutdown.sh@124 -- # perfpid=2456925 00:25:03.836 03:57:22 -- target/shutdown.sh@125 -- # waitforlisten 2456925 /var/tmp/bdevperf.sock 00:25:03.836 03:57:22 -- common/autotest_common.sh@819 -- # '[' -z 2456925 ']' 00:25:03.836 03:57:22 -- target/shutdown.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:25:03.836 03:57:22 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:03.836 03:57:22 -- target/shutdown.sh@123 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:25:03.836 03:57:22 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:03.836 03:57:22 -- nvmf/common.sh@520 -- # config=() 00:25:03.836 03:57:22 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:03.836 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:25:03.836 03:57:22 -- nvmf/common.sh@520 -- # local subsystem config 00:25:03.836 03:57:22 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:03.836 03:57:22 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:03.836 03:57:22 -- common/autotest_common.sh@10 -- # set +x 00:25:03.836 03:57:22 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:03.836 { 00:25:03.836 "params": { 00:25:03.836 "name": "Nvme$subsystem", 00:25:03.836 "trtype": "$TEST_TRANSPORT", 00:25:03.836 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:03.836 "adrfam": "ipv4", 00:25:03.836 "trsvcid": "$NVMF_PORT", 00:25:03.836 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:03.836 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:03.836 "hdgst": ${hdgst:-false}, 00:25:03.836 "ddgst": ${ddgst:-false} 00:25:03.836 }, 00:25:03.836 "method": "bdev_nvme_attach_controller" 00:25:03.836 } 00:25:03.836 EOF 00:25:03.836 )") 00:25:03.836 03:57:22 -- nvmf/common.sh@542 -- # cat 00:25:03.836 03:57:22 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:03.836 03:57:22 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:03.836 { 00:25:03.836 "params": { 00:25:03.836 "name": "Nvme$subsystem", 00:25:03.836 "trtype": "$TEST_TRANSPORT", 00:25:03.836 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:03.836 "adrfam": "ipv4", 00:25:03.836 "trsvcid": "$NVMF_PORT", 00:25:03.836 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:03.836 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:03.836 "hdgst": ${hdgst:-false}, 00:25:03.836 "ddgst": ${ddgst:-false} 00:25:03.836 }, 00:25:03.836 "method": "bdev_nvme_attach_controller" 00:25:03.836 } 00:25:03.836 EOF 00:25:03.836 )") 00:25:03.836 03:57:22 -- nvmf/common.sh@542 -- # cat 00:25:03.836 03:57:22 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:03.836 03:57:22 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:03.836 { 00:25:03.836 "params": { 00:25:03.836 "name": "Nvme$subsystem", 00:25:03.836 "trtype": "$TEST_TRANSPORT", 00:25:03.836 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:03.836 "adrfam": "ipv4", 00:25:03.836 "trsvcid": "$NVMF_PORT", 00:25:03.836 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:03.836 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:03.836 "hdgst": ${hdgst:-false}, 00:25:03.836 "ddgst": ${ddgst:-false} 00:25:03.836 }, 00:25:03.836 "method": "bdev_nvme_attach_controller" 00:25:03.836 } 00:25:03.836 EOF 00:25:03.836 )") 00:25:03.836 03:57:22 -- nvmf/common.sh@542 -- # cat 00:25:03.836 03:57:22 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:03.836 03:57:22 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:03.836 { 00:25:03.836 "params": { 00:25:03.836 "name": "Nvme$subsystem", 00:25:03.836 "trtype": "$TEST_TRANSPORT", 00:25:03.836 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:03.836 "adrfam": "ipv4", 00:25:03.836 "trsvcid": "$NVMF_PORT", 00:25:03.836 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:03.836 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:03.836 "hdgst": ${hdgst:-false}, 00:25:03.836 "ddgst": ${ddgst:-false} 00:25:03.836 }, 00:25:03.836 "method": "bdev_nvme_attach_controller" 00:25:03.836 } 00:25:03.836 EOF 00:25:03.836 )") 00:25:03.836 03:57:22 -- nvmf/common.sh@542 -- # cat 00:25:03.836 03:57:22 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:03.836 03:57:22 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:03.836 { 00:25:03.836 "params": { 00:25:03.836 "name": "Nvme$subsystem", 00:25:03.836 "trtype": "$TEST_TRANSPORT", 00:25:03.836 "traddr": 
"$NVMF_FIRST_TARGET_IP", 00:25:03.836 "adrfam": "ipv4", 00:25:03.836 "trsvcid": "$NVMF_PORT", 00:25:03.836 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:03.836 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:03.836 "hdgst": ${hdgst:-false}, 00:25:03.836 "ddgst": ${ddgst:-false} 00:25:03.836 }, 00:25:03.836 "method": "bdev_nvme_attach_controller" 00:25:03.836 } 00:25:03.836 EOF 00:25:03.836 )") 00:25:03.836 03:57:22 -- nvmf/common.sh@542 -- # cat 00:25:03.836 03:57:22 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:03.836 03:57:22 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:03.836 { 00:25:03.836 "params": { 00:25:03.836 "name": "Nvme$subsystem", 00:25:03.836 "trtype": "$TEST_TRANSPORT", 00:25:03.836 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:03.836 "adrfam": "ipv4", 00:25:03.836 "trsvcid": "$NVMF_PORT", 00:25:03.836 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:03.836 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:03.836 "hdgst": ${hdgst:-false}, 00:25:03.836 "ddgst": ${ddgst:-false} 00:25:03.836 }, 00:25:03.836 "method": "bdev_nvme_attach_controller" 00:25:03.836 } 00:25:03.836 EOF 00:25:03.836 )") 00:25:03.836 03:57:22 -- nvmf/common.sh@542 -- # cat 00:25:03.836 03:57:22 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:03.836 03:57:22 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:03.836 { 00:25:03.836 "params": { 00:25:03.836 "name": "Nvme$subsystem", 00:25:03.836 "trtype": "$TEST_TRANSPORT", 00:25:03.836 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:03.836 "adrfam": "ipv4", 00:25:03.836 "trsvcid": "$NVMF_PORT", 00:25:03.836 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:03.836 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:03.836 "hdgst": ${hdgst:-false}, 00:25:03.836 "ddgst": ${ddgst:-false} 00:25:03.836 }, 00:25:03.836 "method": "bdev_nvme_attach_controller" 00:25:03.836 } 00:25:03.836 EOF 00:25:03.836 )") 00:25:03.836 03:57:22 -- nvmf/common.sh@542 -- # cat 00:25:03.836 03:57:22 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:03.836 03:57:22 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:03.836 { 00:25:03.836 "params": { 00:25:03.836 "name": "Nvme$subsystem", 00:25:03.836 "trtype": "$TEST_TRANSPORT", 00:25:03.836 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:03.836 "adrfam": "ipv4", 00:25:03.836 "trsvcid": "$NVMF_PORT", 00:25:03.836 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:03.836 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:03.836 "hdgst": ${hdgst:-false}, 00:25:03.837 "ddgst": ${ddgst:-false} 00:25:03.837 }, 00:25:03.837 "method": "bdev_nvme_attach_controller" 00:25:03.837 } 00:25:03.837 EOF 00:25:03.837 )") 00:25:03.837 03:57:22 -- nvmf/common.sh@542 -- # cat 00:25:03.837 03:57:22 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:03.837 03:57:22 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:03.837 { 00:25:03.837 "params": { 00:25:03.837 "name": "Nvme$subsystem", 00:25:03.837 "trtype": "$TEST_TRANSPORT", 00:25:03.837 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:03.837 "adrfam": "ipv4", 00:25:03.837 "trsvcid": "$NVMF_PORT", 00:25:03.837 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:03.837 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:03.837 "hdgst": ${hdgst:-false}, 00:25:03.837 "ddgst": ${ddgst:-false} 00:25:03.837 }, 00:25:03.837 "method": "bdev_nvme_attach_controller" 00:25:03.837 } 00:25:03.837 EOF 00:25:03.837 )") 00:25:03.837 03:57:22 -- nvmf/common.sh@542 -- # cat 00:25:03.837 03:57:22 -- nvmf/common.sh@522 -- # for 
subsystem in "${@:-1}" 00:25:03.837 03:57:22 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:03.837 { 00:25:03.837 "params": { 00:25:03.837 "name": "Nvme$subsystem", 00:25:03.837 "trtype": "$TEST_TRANSPORT", 00:25:03.837 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:03.837 "adrfam": "ipv4", 00:25:03.837 "trsvcid": "$NVMF_PORT", 00:25:03.837 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:03.837 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:03.837 "hdgst": ${hdgst:-false}, 00:25:03.837 "ddgst": ${ddgst:-false} 00:25:03.837 }, 00:25:03.837 "method": "bdev_nvme_attach_controller" 00:25:03.837 } 00:25:03.837 EOF 00:25:03.837 )") 00:25:03.837 03:57:22 -- nvmf/common.sh@542 -- # cat 00:25:03.837 03:57:22 -- nvmf/common.sh@544 -- # jq . 00:25:03.837 03:57:22 -- nvmf/common.sh@545 -- # IFS=, 00:25:03.837 03:57:22 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:25:03.837 "params": { 00:25:03.837 "name": "Nvme1", 00:25:03.837 "trtype": "tcp", 00:25:03.837 "traddr": "10.0.0.2", 00:25:03.837 "adrfam": "ipv4", 00:25:03.837 "trsvcid": "4420", 00:25:03.837 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:03.837 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:03.837 "hdgst": false, 00:25:03.837 "ddgst": false 00:25:03.837 }, 00:25:03.837 "method": "bdev_nvme_attach_controller" 00:25:03.837 },{ 00:25:03.837 "params": { 00:25:03.837 "name": "Nvme2", 00:25:03.837 "trtype": "tcp", 00:25:03.837 "traddr": "10.0.0.2", 00:25:03.837 "adrfam": "ipv4", 00:25:03.837 "trsvcid": "4420", 00:25:03.837 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:25:03.837 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:25:03.837 "hdgst": false, 00:25:03.837 "ddgst": false 00:25:03.837 }, 00:25:03.837 "method": "bdev_nvme_attach_controller" 00:25:03.837 },{ 00:25:03.837 "params": { 00:25:03.837 "name": "Nvme3", 00:25:03.837 "trtype": "tcp", 00:25:03.837 "traddr": "10.0.0.2", 00:25:03.837 "adrfam": "ipv4", 00:25:03.837 "trsvcid": "4420", 00:25:03.837 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:25:03.837 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:25:03.837 "hdgst": false, 00:25:03.837 "ddgst": false 00:25:03.837 }, 00:25:03.837 "method": "bdev_nvme_attach_controller" 00:25:03.837 },{ 00:25:03.837 "params": { 00:25:03.837 "name": "Nvme4", 00:25:03.837 "trtype": "tcp", 00:25:03.837 "traddr": "10.0.0.2", 00:25:03.837 "adrfam": "ipv4", 00:25:03.837 "trsvcid": "4420", 00:25:03.837 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:25:03.837 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:25:03.837 "hdgst": false, 00:25:03.837 "ddgst": false 00:25:03.837 }, 00:25:03.837 "method": "bdev_nvme_attach_controller" 00:25:03.837 },{ 00:25:03.837 "params": { 00:25:03.837 "name": "Nvme5", 00:25:03.837 "trtype": "tcp", 00:25:03.837 "traddr": "10.0.0.2", 00:25:03.837 "adrfam": "ipv4", 00:25:03.837 "trsvcid": "4420", 00:25:03.837 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:25:03.837 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:25:03.837 "hdgst": false, 00:25:03.837 "ddgst": false 00:25:03.837 }, 00:25:03.837 "method": "bdev_nvme_attach_controller" 00:25:03.837 },{ 00:25:03.837 "params": { 00:25:03.837 "name": "Nvme6", 00:25:03.837 "trtype": "tcp", 00:25:03.837 "traddr": "10.0.0.2", 00:25:03.837 "adrfam": "ipv4", 00:25:03.837 "trsvcid": "4420", 00:25:03.837 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:25:03.837 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:25:03.837 "hdgst": false, 00:25:03.837 "ddgst": false 00:25:03.837 }, 00:25:03.837 "method": "bdev_nvme_attach_controller" 00:25:03.837 },{ 00:25:03.837 "params": { 00:25:03.837 "name": "Nvme7", 00:25:03.837 "trtype": 
"tcp", 00:25:03.837 "traddr": "10.0.0.2", 00:25:03.837 "adrfam": "ipv4", 00:25:03.837 "trsvcid": "4420", 00:25:03.837 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:25:03.837 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:25:03.837 "hdgst": false, 00:25:03.837 "ddgst": false 00:25:03.837 }, 00:25:03.837 "method": "bdev_nvme_attach_controller" 00:25:03.837 },{ 00:25:03.837 "params": { 00:25:03.837 "name": "Nvme8", 00:25:03.837 "trtype": "tcp", 00:25:03.837 "traddr": "10.0.0.2", 00:25:03.837 "adrfam": "ipv4", 00:25:03.837 "trsvcid": "4420", 00:25:03.837 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:25:03.837 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:25:03.837 "hdgst": false, 00:25:03.837 "ddgst": false 00:25:03.837 }, 00:25:03.837 "method": "bdev_nvme_attach_controller" 00:25:03.837 },{ 00:25:03.837 "params": { 00:25:03.837 "name": "Nvme9", 00:25:03.837 "trtype": "tcp", 00:25:03.837 "traddr": "10.0.0.2", 00:25:03.837 "adrfam": "ipv4", 00:25:03.837 "trsvcid": "4420", 00:25:03.837 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:25:03.837 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:25:03.837 "hdgst": false, 00:25:03.837 "ddgst": false 00:25:03.837 }, 00:25:03.837 "method": "bdev_nvme_attach_controller" 00:25:03.837 },{ 00:25:03.837 "params": { 00:25:03.837 "name": "Nvme10", 00:25:03.837 "trtype": "tcp", 00:25:03.837 "traddr": "10.0.0.2", 00:25:03.837 "adrfam": "ipv4", 00:25:03.837 "trsvcid": "4420", 00:25:03.837 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:25:03.837 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:25:03.837 "hdgst": false, 00:25:03.837 "ddgst": false 00:25:03.837 }, 00:25:03.837 "method": "bdev_nvme_attach_controller" 00:25:03.837 }' 00:25:03.837 [2024-07-14 03:57:22.590576] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:25:03.837 [2024-07-14 03:57:22.590661] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2456925 ] 00:25:03.837 EAL: No free 2048 kB hugepages reported on node 1 00:25:03.837 [2024-07-14 03:57:22.654364] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:03.837 [2024-07-14 03:57:22.738685] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:05.733 Running I/O for 10 seconds... 
00:25:05.733 03:57:24 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:05.733 03:57:24 -- common/autotest_common.sh@852 -- # return 0 00:25:05.733 03:57:24 -- target/shutdown.sh@126 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:25:05.733 03:57:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:05.733 03:57:24 -- common/autotest_common.sh@10 -- # set +x 00:25:05.733 03:57:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:05.733 03:57:24 -- target/shutdown.sh@129 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:05.733 03:57:24 -- target/shutdown.sh@131 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:25:05.733 03:57:24 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:25:05.733 03:57:24 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:25:05.733 03:57:24 -- target/shutdown.sh@57 -- # local ret=1 00:25:05.733 03:57:24 -- target/shutdown.sh@58 -- # local i 00:25:05.733 03:57:24 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:25:05.733 03:57:24 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:25:05.733 03:57:24 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:25:05.733 03:57:24 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:25:05.733 03:57:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:05.733 03:57:24 -- common/autotest_common.sh@10 -- # set +x 00:25:05.733 03:57:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:05.733 03:57:24 -- target/shutdown.sh@60 -- # read_io_count=42 00:25:05.733 03:57:24 -- target/shutdown.sh@63 -- # '[' 42 -ge 100 ']' 00:25:05.733 03:57:24 -- target/shutdown.sh@67 -- # sleep 0.25 00:25:06.010 03:57:24 -- target/shutdown.sh@59 -- # (( i-- )) 00:25:06.010 03:57:24 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:25:06.010 03:57:24 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:25:06.010 03:57:24 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:25:06.010 03:57:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:06.010 03:57:24 -- common/autotest_common.sh@10 -- # set +x 00:25:06.010 03:57:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:06.010 03:57:24 -- target/shutdown.sh@60 -- # read_io_count=129 00:25:06.010 03:57:24 -- target/shutdown.sh@63 -- # '[' 129 -ge 100 ']' 00:25:06.010 03:57:24 -- target/shutdown.sh@64 -- # ret=0 00:25:06.010 03:57:24 -- target/shutdown.sh@65 -- # break 00:25:06.010 03:57:24 -- target/shutdown.sh@69 -- # return 0 00:25:06.010 03:57:24 -- target/shutdown.sh@134 -- # killprocess 2456613 00:25:06.010 03:57:24 -- common/autotest_common.sh@926 -- # '[' -z 2456613 ']' 00:25:06.010 03:57:24 -- common/autotest_common.sh@930 -- # kill -0 2456613 00:25:06.010 03:57:24 -- common/autotest_common.sh@931 -- # uname 00:25:06.010 03:57:24 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:06.010 03:57:24 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2456613 00:25:06.010 03:57:24 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:25:06.010 03:57:24 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:25:06.010 03:57:24 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2456613' 00:25:06.010 killing process with pid 2456613 00:25:06.010 03:57:24 -- common/autotest_common.sh@945 -- # kill 2456613 00:25:06.010 03:57:24 -- common/autotest_common.sh@950 -- # wait 2456613 00:25:06.010 [2024-07-14 
03:57:24.753320] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb61ff0 is same with the state(5) to be set 00:25:06.010 [2024-07-14 03:57:24.753401] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb61ff0 is same with the state(5) to be set 00:25:06.010 [2024-07-14 03:57:24.753427] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb61ff0 is same with the state(5) to be set 00:25:06.010 [2024-07-14 03:57:24.753439] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb61ff0 is same with the state(5) to be set 00:25:06.010 [2024-07-14 03:57:24.753452] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb61ff0 is same with the state(5) to be set 00:25:06.010 [2024-07-14 03:57:24.753463] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb61ff0 is same with the state(5) to be set 00:25:06.010 [2024-07-14 03:57:24.753475] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb61ff0 is same with the state(5) to be set 00:25:06.010 [2024-07-14 03:57:24.753487] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb61ff0 is same with the state(5) to be set 00:25:06.010 [2024-07-14 03:57:24.753499] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb61ff0 is same with the state(5) to be set 00:25:06.010 [2024-07-14 03:57:24.753510] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb61ff0 is same with the state(5) to be set 00:25:06.010 [2024-07-14 03:57:24.753522] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb61ff0 is same with the state(5) to be set 00:25:06.010 [2024-07-14 03:57:24.753534] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb61ff0 is same with the state(5) to be set 00:25:06.010 [2024-07-14 03:57:24.753545] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb61ff0 is same with the state(5) to be set 00:25:06.011 [2024-07-14 03:57:24.753557] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb61ff0 is same with the state(5) to be set 00:25:06.011 [2024-07-14 03:57:24.753570] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb61ff0 is same with the state(5) to be set 00:25:06.011 [2024-07-14 03:57:24.753582] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb61ff0 is same with the state(5) to be set 00:25:06.011 [2024-07-14 03:57:24.753594] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb61ff0 is same with the state(5) to be set 00:25:06.011 [2024-07-14 03:57:24.753606] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb61ff0 is same with the state(5) to be set 00:25:06.011 [2024-07-14 03:57:24.753618] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb61ff0 is same with the state(5) to be set 00:25:06.011 [2024-07-14 03:57:24.753639] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb61ff0 is same with the state(5) to be set 00:25:06.011 [2024-07-14 03:57:24.753653] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb61ff0 is same with the state(5) to be set 00:25:06.011 [2024-07-14 03:57:24.753665] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb61ff0 is same with the state(5) to 
be set 00:25:06.011 [2024-07-14 03:57:24.753677] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb61ff0 is same with the state(5) to be set 00:25:06.011 [2024-07-14 03:57:24.753689] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb61ff0 is same with the state(5) to be set 00:25:06.011 [2024-07-14 03:57:24.753700] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb61ff0 is same with the state(5) to be set 00:25:06.011 [2024-07-14 03:57:24.753712] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb61ff0 is same with the state(5) to be set 00:25:06.011 [2024-07-14 03:57:24.753724] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb61ff0 is same with the state(5) to be set 00:25:06.011 [2024-07-14 03:57:24.753736] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb61ff0 is same with the state(5) to be set 00:25:06.011 [2024-07-14 03:57:24.753748] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb61ff0 is same with the state(5) to be set 00:25:06.011 [2024-07-14 03:57:24.753760] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb61ff0 is same with the state(5) to be set 00:25:06.011 [2024-07-14 03:57:24.753771] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb61ff0 is same with the state(5) to be set 00:25:06.011 [2024-07-14 03:57:24.753783] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb61ff0 is same with the state(5) to be set 00:25:06.011 [2024-07-14 03:57:24.753794] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb61ff0 is same with the state(5) to be set 00:25:06.011 [2024-07-14 03:57:24.753806] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb61ff0 is same with the state(5) to be set 00:25:06.011 [2024-07-14 03:57:24.753818] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb61ff0 is same with the state(5) to be set 00:25:06.011 [2024-07-14 03:57:24.753830] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb61ff0 is same with the state(5) to be set 00:25:06.011 [2024-07-14 03:57:24.753842] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb61ff0 is same with the state(5) to be set 00:25:06.011 [2024-07-14 03:57:24.753861] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb61ff0 is same with the state(5) to be set 00:25:06.011 [2024-07-14 03:57:24.753899] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb61ff0 is same with the state(5) to be set 00:25:06.011 [2024-07-14 03:57:24.753913] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb61ff0 is same with the state(5) to be set 00:25:06.011 [2024-07-14 03:57:24.753925] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb61ff0 is same with the state(5) to be set 00:25:06.011 [2024-07-14 03:57:24.753937] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb61ff0 is same with the state(5) to be set 00:25:06.011 [2024-07-14 03:57:24.753950] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb61ff0 is same with the state(5) to be set 00:25:06.011 [2024-07-14 03:57:24.753962] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0xb61ff0 is same with the state(5) to be set 00:25:06.011 [2024-07-14 03:57:24.753974] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb61ff0 is same with the state(5) to be set 00:25:06.011 [2024-07-14 03:57:24.753986] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb61ff0 is same with the state(5) to be set 00:25:06.011 [2024-07-14 03:57:24.753998] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb61ff0 is same with the state(5) to be set 00:25:06.011 [2024-07-14 03:57:24.754014] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb61ff0 is same with the state(5) to be set 00:25:06.011 [2024-07-14 03:57:24.754027] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb61ff0 is same with the state(5) to be set 00:25:06.011 [2024-07-14 03:57:24.754039] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb61ff0 is same with the state(5) to be set 00:25:06.011 [2024-07-14 03:57:24.754051] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb61ff0 is same with the state(5) to be set 00:25:06.011 [2024-07-14 03:57:24.754064] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb61ff0 is same with the state(5) to be set 00:25:06.011 [2024-07-14 03:57:24.754076] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb61ff0 is same with the state(5) to be set 00:25:06.011 [2024-07-14 03:57:24.754088] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb61ff0 is same with the state(5) to be set 00:25:06.011 [2024-07-14 03:57:24.754100] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb61ff0 is same with the state(5) to be set 00:25:06.011 [2024-07-14 03:57:24.754112] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb61ff0 is same with the state(5) to be set 00:25:06.011 [2024-07-14 03:57:24.754124] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb61ff0 is same with the state(5) to be set 00:25:06.011 [2024-07-14 03:57:24.754136] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb61ff0 is same with the state(5) to be set 00:25:06.011 [2024-07-14 03:57:24.754160] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb61ff0 is same with the state(5) to be set 00:25:06.011 [2024-07-14 03:57:24.754172] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb61ff0 is same with the state(5) to be set 00:25:06.011 [2024-07-14 03:57:24.754198] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb61ff0 is same with the state(5) to be set 00:25:06.011 [2024-07-14 03:57:24.754211] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb61ff0 is same with the state(5) to be set 00:25:06.011 [2024-07-14 03:57:24.754223] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb61ff0 is same with the state(5) to be set 00:25:06.011 [2024-07-14 03:57:24.755239] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb64980 is same with the state(5) to be set 00:25:06.011 [2024-07-14 03:57:24.755272] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb64980 is same with the state(5) to be set 00:25:06.011 [2024-07-14 03:57:24.755287] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb64980 is same with the state(5) to be set 00:25:06.011 [2024-07-14 03:57:24.755300] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb64980 is same with the state(5) to be set 00:25:06.011 [2024-07-14 03:57:24.755312] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb64980 is same with the state(5) to be set 00:25:06.011 [2024-07-14 03:57:24.755325] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb64980 is same with the state(5) to be set 00:25:06.011 [2024-07-14 03:57:24.755337] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb64980 is same with the state(5) to be set 00:25:06.011 [2024-07-14 03:57:24.755349] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb64980 is same with the state(5) to be set 00:25:06.011 [2024-07-14 03:57:24.755361] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb64980 is same with the state(5) to be set 00:25:06.011 [2024-07-14 03:57:24.755374] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb64980 is same with the state(5) to be set 00:25:06.011 [2024-07-14 03:57:24.755386] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb64980 is same with the state(5) to be set 00:25:06.011 [2024-07-14 03:57:24.755404] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb64980 is same with the state(5) to be set 00:25:06.011 [2024-07-14 03:57:24.755433] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb64980 is same with the state(5) to be set 00:25:06.011 [2024-07-14 03:57:24.755446] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb64980 is same with the state(5) to be set 00:25:06.011 [2024-07-14 03:57:24.755457] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb64980 is same with the state(5) to be set 00:25:06.011 [2024-07-14 03:57:24.755470] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb64980 is same with the state(5) to be set 00:25:06.011 [2024-07-14 03:57:24.755482] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb64980 is same with the state(5) to be set 00:25:06.011 [2024-07-14 03:57:24.755493] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb64980 is same with the state(5) to be set 00:25:06.011 [2024-07-14 03:57:24.755505] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb64980 is same with the state(5) to be set 00:25:06.011 [2024-07-14 03:57:24.755517] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb64980 is same with the state(5) to be set 00:25:06.011 [2024-07-14 03:57:24.755529] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb64980 is same with the state(5) to be set 00:25:06.011 [2024-07-14 03:57:24.755541] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb64980 is same with the state(5) to be set 00:25:06.011 [2024-07-14 03:57:24.755553] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb64980 is same with the state(5) to be set 00:25:06.011 [2024-07-14 03:57:24.755580] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb64980 is same with the state(5) to be set 
00:25:06.011 [2024-07-14 03:57:24.755593] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb64980 is same with the state(5) to be set 00:25:06.011 [2024-07-14 03:57:24.755605] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb64980 is same with the state(5) to be set 00:25:06.011 [2024-07-14 03:57:24.755617] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb64980 is same with the state(5) to be set 00:25:06.011 [2024-07-14 03:57:24.755629] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb64980 is same with the state(5) to be set 00:25:06.011 [2024-07-14 03:57:24.755642] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb64980 is same with the state(5) to be set 00:25:06.011 [2024-07-14 03:57:24.755654] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb64980 is same with the state(5) to be set 00:25:06.011 [2024-07-14 03:57:24.755666] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb64980 is same with the state(5) to be set 00:25:06.011 [2024-07-14 03:57:24.755678] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb64980 is same with the state(5) to be set 00:25:06.011 [2024-07-14 03:57:24.755691] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb64980 is same with the state(5) to be set 00:25:06.011 [2024-07-14 03:57:24.755704] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb64980 is same with the state(5) to be set 00:25:06.011 [2024-07-14 03:57:24.755716] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb64980 is same with the state(5) to be set 00:25:06.011 [2024-07-14 03:57:24.755729] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb64980 is same with the state(5) to be set 00:25:06.011 [2024-07-14 03:57:24.755742] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb64980 is same with the state(5) to be set 00:25:06.011 [2024-07-14 03:57:24.755755] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb64980 is same with the state(5) to be set 00:25:06.011 [2024-07-14 03:57:24.755773] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb64980 is same with the state(5) to be set 00:25:06.011 [2024-07-14 03:57:24.755786] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb64980 is same with the state(5) to be set 00:25:06.011 [2024-07-14 03:57:24.755801] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb64980 is same with the state(5) to be set 00:25:06.012 [2024-07-14 03:57:24.755813] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb64980 is same with the state(5) to be set 00:25:06.012 [2024-07-14 03:57:24.755826] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb64980 is same with the state(5) to be set 00:25:06.012 [2024-07-14 03:57:24.755838] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb64980 is same with the state(5) to be set 00:25:06.012 [2024-07-14 03:57:24.755860] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb64980 is same with the state(5) to be set 00:25:06.012 [2024-07-14 03:57:24.755881] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb64980 is 
same with the state(5) to be set 00:25:06.012 [2024-07-14 03:57:24.755894] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb64980 is same with the state(5) to be set 00:25:06.012 [2024-07-14 03:57:24.755907] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb64980 is same with the state(5) to be set 00:25:06.012 [2024-07-14 03:57:24.755920] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb64980 is same with the state(5) to be set 00:25:06.012 [2024-07-14 03:57:24.755932] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb64980 is same with the state(5) to be set 00:25:06.012 [2024-07-14 03:57:24.755944] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb64980 is same with the state(5) to be set 00:25:06.012 [2024-07-14 03:57:24.755956] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb64980 is same with the state(5) to be set 00:25:06.012 [2024-07-14 03:57:24.755969] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb64980 is same with the state(5) to be set 00:25:06.012 [2024-07-14 03:57:24.755981] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb64980 is same with the state(5) to be set 00:25:06.012 [2024-07-14 03:57:24.755993] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb64980 is same with the state(5) to be set 00:25:06.012 [2024-07-14 03:57:24.756005] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb64980 is same with the state(5) to be set 00:25:06.012 [2024-07-14 03:57:24.756018] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb64980 is same with the state(5) to be set 00:25:06.012 [2024-07-14 03:57:24.756030] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb64980 is same with the state(5) to be set 00:25:06.012 [2024-07-14 03:57:24.756042] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb64980 is same with the state(5) to be set 00:25:06.012 [2024-07-14 03:57:24.756055] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb64980 is same with the state(5) to be set 00:25:06.012 [2024-07-14 03:57:24.756067] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb64980 is same with the state(5) to be set 00:25:06.012 [2024-07-14 03:57:24.756079] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb64980 is same with the state(5) to be set 00:25:06.012 [2024-07-14 03:57:24.756091] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb64980 is same with the state(5) to be set 00:25:06.012 [2024-07-14 03:57:24.760408] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb62480 is same with the state(5) to be set 00:25:06.012 [2024-07-14 03:57:24.761271] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb62930 is same with the state(5) to be set 00:25:06.012 [2024-07-14 03:57:24.761306] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb62930 is same with the state(5) to be set 00:25:06.012 [2024-07-14 03:57:24.761337] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb62930 is same with the state(5) to be set 00:25:06.012 [2024-07-14 03:57:24.761351] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0xb62930 is same with the state(5) to be set 00:25:06.012 [2024-07-14 03:57:24.761364] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb62930 is same with the state(5) to be set 00:25:06.012 [2024-07-14 03:57:24.761376] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb62930 is same with the state(5) to be set 00:25:06.012 [2024-07-14 03:57:24.761388] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb62930 is same with the state(5) to be set 00:25:06.012 [2024-07-14 03:57:24.761400] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb62930 is same with the state(5) to be set 00:25:06.012 [2024-07-14 03:57:24.761413] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb62930 is same with the state(5) to be set 00:25:06.012 [2024-07-14 03:57:24.761425] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb62930 is same with the state(5) to be set 00:25:06.012 [2024-07-14 03:57:24.761437] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb62930 is same with the state(5) to be set 00:25:06.012 [2024-07-14 03:57:24.761449] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb62930 is same with the state(5) to be set 00:25:06.012 [2024-07-14 03:57:24.761461] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb62930 is same with the state(5) to be set 00:25:06.012 [2024-07-14 03:57:24.761473] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb62930 is same with the state(5) to be set 00:25:06.012 [2024-07-14 03:57:24.761485] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb62930 is same with the state(5) to be set 00:25:06.012 [2024-07-14 03:57:24.761497] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb62930 is same with the state(5) to be set 00:25:06.012 [2024-07-14 03:57:24.761509] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb62930 is same with the state(5) to be set 00:25:06.012 [2024-07-14 03:57:24.761522] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb62930 is same with the state(5) to be set 00:25:06.012 [2024-07-14 03:57:24.761533] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb62930 is same with the state(5) to be set 00:25:06.012 [2024-07-14 03:57:24.761546] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb62930 is same with the state(5) to be set 00:25:06.012 [2024-07-14 03:57:24.761558] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb62930 is same with the state(5) to be set 00:25:06.012 [2024-07-14 03:57:24.761569] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb62930 is same with the state(5) to be set 00:25:06.012 [2024-07-14 03:57:24.761581] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb62930 is same with the state(5) to be set 00:25:06.012 [2024-07-14 03:57:24.761594] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb62930 is same with the state(5) to be set 00:25:06.012 [2024-07-14 03:57:24.761606] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb62930 is same with the state(5) to be set 00:25:06.012 [2024-07-14 03:57:24.761618] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb62930 is same with the state(5) to be set 00:25:06.012 [2024-07-14 03:57:24.761630] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb62930 is same with the state(5) to be set 00:25:06.012 [2024-07-14 03:57:24.761642] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb62930 is same with the state(5) to be set 00:25:06.012 [2024-07-14 03:57:24.761670] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb62930 is same with the state(5) to be set 00:25:06.012 [2024-07-14 03:57:24.761686] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb62930 is same with the state(5) to be set 00:25:06.012 [2024-07-14 03:57:24.761699] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb62930 is same with the state(5) to be set 00:25:06.012 [2024-07-14 03:57:24.761711] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb62930 is same with the state(5) to be set 00:25:06.012 [2024-07-14 03:57:24.761723] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb62930 is same with the state(5) to be set 00:25:06.012 [2024-07-14 03:57:24.761735] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb62930 is same with the state(5) to be set 00:25:06.012 [2024-07-14 03:57:24.761747] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb62930 is same with the state(5) to be set 00:25:06.012 [2024-07-14 03:57:24.761759] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb62930 is same with the state(5) to be set 00:25:06.012 [2024-07-14 03:57:24.761771] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb62930 is same with the state(5) to be set 00:25:06.012 [2024-07-14 03:57:24.761783] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb62930 is same with the state(5) to be set 00:25:06.012 [2024-07-14 03:57:24.761795] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb62930 is same with the state(5) to be set 00:25:06.012 [2024-07-14 03:57:24.761807] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb62930 is same with the state(5) to be set 00:25:06.012 [2024-07-14 03:57:24.761819] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb62930 is same with the state(5) to be set 00:25:06.012 [2024-07-14 03:57:24.761831] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb62930 is same with the state(5) to be set 00:25:06.012 [2024-07-14 03:57:24.761843] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb62930 is same with the state(5) to be set 00:25:06.012 [2024-07-14 03:57:24.761860] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb62930 is same with the state(5) to be set 00:25:06.012 [2024-07-14 03:57:24.761896] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb62930 is same with the state(5) to be set 00:25:06.012 [2024-07-14 03:57:24.761910] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb62930 is same with the state(5) to be set 00:25:06.012 [2024-07-14 03:57:24.761922] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb62930 is same with the state(5) to be set 
00:25:06.012 [2024-07-14 03:57:24.761936] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb62930 is same with the state(5) to be set 00:25:06.012 [2024-07-14 03:57:24.761949] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb62930 is same with the state(5) to be set 00:25:06.012 [2024-07-14 03:57:24.761961] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb62930 is same with the state(5) to be set 00:25:06.012 [2024-07-14 03:57:24.761974] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb62930 is same with the state(5) to be set 00:25:06.012 [2024-07-14 03:57:24.761986] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb62930 is same with the state(5) to be set 00:25:06.012 [2024-07-14 03:57:24.761998] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb62930 is same with the state(5) to be set 00:25:06.012 [2024-07-14 03:57:24.762010] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb62930 is same with the state(5) to be set 00:25:06.012 [2024-07-14 03:57:24.762023] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb62930 is same with the state(5) to be set 00:25:06.012 [2024-07-14 03:57:24.762035] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb62930 is same with the state(5) to be set 00:25:06.012 [2024-07-14 03:57:24.762057] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb62930 is same with the state(5) to be set 00:25:06.012 [2024-07-14 03:57:24.762070] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb62930 is same with the state(5) to be set 00:25:06.012 [2024-07-14 03:57:24.762082] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb62930 is same with the state(5) to be set 00:25:06.012 [2024-07-14 03:57:24.762096] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb62930 is same with the state(5) to be set 00:25:06.012 [2024-07-14 03:57:24.762108] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb62930 is same with the state(5) to be set 00:25:06.012 [2024-07-14 03:57:24.762120] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb62930 is same with the state(5) to be set 00:25:06.012 [2024-07-14 03:57:24.762132] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb62930 is same with the state(5) to be set 00:25:06.012 [2024-07-14 03:57:24.763101] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb62dc0 is same with the state(5) to be set 00:25:06.012 [2024-07-14 03:57:24.763135] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb62dc0 is same with the state(5) to be set 00:25:06.012 [2024-07-14 03:57:24.763158] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb62dc0 is same with the state(5) to be set 00:25:06.012 [2024-07-14 03:57:24.763171] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb62dc0 is same with the state(5) to be set 00:25:06.012 [2024-07-14 03:57:24.763183] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb62dc0 is same with the state(5) to be set 00:25:06.012 [2024-07-14 03:57:24.763195] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb62dc0 is 
same with the state(5) to be set 00:25:06.013 [2024-07-14 03:57:24.763207] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb62dc0 is same with the state(5) to be set 00:25:06.013 [2024-07-14 03:57:24.763219] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb62dc0 is same with the state(5) to be set 00:25:06.013 [2024-07-14 03:57:24.763231] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb62dc0 is same with the state(5) to be set 00:25:06.013 [2024-07-14 03:57:24.763243] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb62dc0 is same with the state(5) to be set 00:25:06.013 [2024-07-14 03:57:24.763255] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb62dc0 is same with the state(5) to be set 00:25:06.013 [2024-07-14 03:57:24.763267] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb62dc0 is same with the state(5) to be set 00:25:06.013 [2024-07-14 03:57:24.763279] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb62dc0 is same with the state(5) to be set 00:25:06.013 [2024-07-14 03:57:24.763292] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb62dc0 is same with the state(5) to be set 00:25:06.013 [2024-07-14 03:57:24.763304] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb62dc0 is same with the state(5) to be set 00:25:06.013 [2024-07-14 03:57:24.763317] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb62dc0 is same with the state(5) to be set 00:25:06.013 [2024-07-14 03:57:24.763329] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb62dc0 is same with the state(5) to be set 00:25:06.013 [2024-07-14 03:57:24.763342] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb62dc0 is same with the state(5) to be set 00:25:06.013 [2024-07-14 03:57:24.763354] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb62dc0 is same with the state(5) to be set 00:25:06.013 [2024-07-14 03:57:24.763366] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb62dc0 is same with the state(5) to be set 00:25:06.013 [2024-07-14 03:57:24.763378] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb62dc0 is same with the state(5) to be set 00:25:06.013 [2024-07-14 03:57:24.763396] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb62dc0 is same with the state(5) to be set 00:25:06.013 [2024-07-14 03:57:24.763409] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb62dc0 is same with the state(5) to be set 00:25:06.013 [2024-07-14 03:57:24.763422] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb62dc0 is same with the state(5) to be set 00:25:06.013 [2024-07-14 03:57:24.763435] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb62dc0 is same with the state(5) to be set 00:25:06.013 [2024-07-14 03:57:24.763447] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb62dc0 is same with the state(5) to be set 00:25:06.013 [2024-07-14 03:57:24.763459] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb62dc0 is same with the state(5) to be set 00:25:06.013 [2024-07-14 03:57:24.763471] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0xb62dc0 is same with the state(5) to be set 00:25:06.013 [2024-07-14 03:57:24.763483] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb62dc0 is same with the state(5) to be set 00:25:06.013 [2024-07-14 03:57:24.763495] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb62dc0 is same with the state(5) to be set 00:25:06.013 [2024-07-14 03:57:24.763508] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb62dc0 is same with the state(5) to be set 00:25:06.013 [2024-07-14 03:57:24.763520] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb62dc0 is same with the state(5) to be set 00:25:06.013 [2024-07-14 03:57:24.763533] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb62dc0 is same with the state(5) to be set 00:25:06.013 [2024-07-14 03:57:24.763545] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb62dc0 is same with the state(5) to be set 00:25:06.013 [2024-07-14 03:57:24.763557] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb62dc0 is same with the state(5) to be set 00:25:06.013 [2024-07-14 03:57:24.763569] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb62dc0 is same with the state(5) to be set 00:25:06.013 [2024-07-14 03:57:24.763582] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb62dc0 is same with the state(5) to be set 00:25:06.013 [2024-07-14 03:57:24.763594] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb62dc0 is same with the state(5) to be set 00:25:06.013 [2024-07-14 03:57:24.763606] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb62dc0 is same with the state(5) to be set 00:25:06.013 [2024-07-14 03:57:24.763619] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb62dc0 is same with the state(5) to be set 00:25:06.013 [2024-07-14 03:57:24.763632] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb62dc0 is same with the state(5) to be set 00:25:06.013 [2024-07-14 03:57:24.763644] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb62dc0 is same with the state(5) to be set 00:25:06.013 [2024-07-14 03:57:24.763656] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb62dc0 is same with the state(5) to be set 00:25:06.013 [2024-07-14 03:57:24.763669] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb62dc0 is same with the state(5) to be set 00:25:06.013 [2024-07-14 03:57:24.763681] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb62dc0 is same with the state(5) to be set 00:25:06.013 [2024-07-14 03:57:24.763693] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb62dc0 is same with the state(5) to be set 00:25:06.013 [2024-07-14 03:57:24.763706] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb62dc0 is same with the state(5) to be set 00:25:06.013 [2024-07-14 03:57:24.763703] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:06.013 [2024-07-14 03:57:24.763719] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb62dc0 is same with the state(5) to be set 00:25:06.013 [2024-07-14 03:57:24.763742]
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb62dc0 is same with the state(5) to be set 00:25:06.013 [2024-07-14 03:57:24.763750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.013 [2024-07-14 03:57:24.763754] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb62dc0 is same with the state(5) to be set 00:25:06.013 [2024-07-14 03:57:24.763769] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb62dc0 is same with the state(5) to be set 00:25:06.013 [2024-07-14 03:57:24.763771] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:06.013 [2024-07-14 03:57:24.763781] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb62dc0 is same with the state(5) to be set 00:25:06.013 [2024-07-14 03:57:24.763787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.013 [2024-07-14 03:57:24.763794] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb62dc0 is same with the state(5) to be set 00:25:06.013 [2024-07-14 03:57:24.763802] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:06.013 [2024-07-14 03:57:24.763807] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb62dc0 is same with the state(5) to be set 00:25:06.013 [2024-07-14 03:57:24.763817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.013 [2024-07-14 03:57:24.763820] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb62dc0 is same with the state(5) to be set 00:25:06.013 [2024-07-14 03:57:24.763833] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb62dc0 is same with the state(5) to be set 00:25:06.013 [2024-07-14 03:57:24.763833] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:06.013 [2024-07-14 03:57:24.763860] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb62dc0 is same with the state(5) to be set 00:25:06.013 [2024-07-14 03:57:24.763884] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb62dc0 is same with the state(5) to be set 00:25:06.013 [2024-07-14 03:57:24.763860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.013 [2024-07-14 03:57:24.763898] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb62dc0 is same with the state(5) to be set 00:25:06.013 [2024-07-14 03:57:24.763903] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce8ba0 is same with the state(5) to be set 00:25:06.013 [2024-07-14 03:57:24.763911] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb62dc0 is same with the state(5) to be set 00:25:06.013 [2024-07-14 03:57:24.763924] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb62dc0 is same with the state(5) to be set 00:25:06.013 [2024-07-14 03:57:24.763936] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The
recv state of tqpair=0xb62dc0 is same with the state(5) to be set 00:25:06.013 [2024-07-14 03:57:24.763949] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb62dc0 is same with the state(5) to be set 00:25:06.013 [2024-07-14 03:57:24.763959] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:06.013 [2024-07-14 03:57:24.763979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.013 [2024-07-14 03:57:24.763999] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:06.013 [2024-07-14 03:57:24.764014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.013 [2024-07-14 03:57:24.764029] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:06.013 [2024-07-14 03:57:24.764043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.013 [2024-07-14 03:57:24.764057] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:06.013 [2024-07-14 03:57:24.764071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.013 [2024-07-14 03:57:24.764084] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5e40 is same with the state(5) to be set 00:25:06.013 [2024-07-14 03:57:24.764140] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:06.013 [2024-07-14 03:57:24.764168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.013 [2024-07-14 03:57:24.764183] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:06.013 [2024-07-14 03:57:24.764197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.013 [2024-07-14 03:57:24.764211] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:06.013 [2024-07-14 03:57:24.764232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.013 [2024-07-14 03:57:24.764246] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:06.013 [2024-07-14 03:57:24.764260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.013 [2024-07-14 03:57:24.764274] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c42eb0 is same with the state(5) to be set 00:25:06.013 [2024-07-14 03:57:24.764322] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:06.013 [2024-07-14 
03:57:24.764342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.013 [2024-07-14 03:57:24.764358] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:06.013 [2024-07-14 03:57:24.764371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.013 [2024-07-14 03:57:24.764386] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:06.013 [2024-07-14 03:57:24.764401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.014 [2024-07-14 03:57:24.764416] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:06.014 [2024-07-14 03:57:24.764430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.014 [2024-07-14 03:57:24.764443] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf3530 is same with the state(5) to be set 00:25:06.014 [2024-07-14 03:57:24.764497] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:06.014 [2024-07-14 03:57:24.764526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.014 [2024-07-14 03:57:24.764543] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:06.014 [2024-07-14 03:57:24.764559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.014 [2024-07-14 03:57:24.764575] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:06.014 [2024-07-14 03:57:24.764590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.014 [2024-07-14 03:57:24.764604] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:06.014 [2024-07-14 03:57:24.764618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.014 [2024-07-14 03:57:24.764632] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d120a0 is same with the state(5) to be set 00:25:06.014 [2024-07-14 03:57:24.765558] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb63270 is same with the state(5) to be set 00:25:06.014 [2024-07-14 03:57:24.765587] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb63270 is same with the state(5) to be set 00:25:06.014 [2024-07-14 03:57:24.765617] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb63270 is same with the state(5) to be set 00:25:06.014 [2024-07-14 03:57:24.765630] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb63270 is same with the state(5) 
to be set 00:25:06.014 [2024-07-14 03:57:24.765642] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb63270 is same with the state(5) to be set 00:25:06.014 [2024-07-14 03:57:24.765654] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb63270 is same with the state(5) to be set 00:25:06.014 [2024-07-14 03:57:24.765666] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb63270 is same with the state(5) to be set 00:25:06.014 [2024-07-14 03:57:24.765678] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb63270 is same with the state(5) to be set 00:25:06.014 [2024-07-14 03:57:24.765690] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb63270 is same with the state(5) to be set 00:25:06.014 [2024-07-14 03:57:24.765703] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb63270 is same with the state(5) to be set 00:25:06.014 [2024-07-14 03:57:24.765715] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb63270 is same with the state(5) to be set 00:25:06.014 [2024-07-14 03:57:24.765727] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb63270 is same with the state(5) to be set 00:25:06.014 [2024-07-14 03:57:24.765739] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb63270 is same with the state(5) to be set 00:25:06.014 [2024-07-14 03:57:24.765751] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb63270 is same with the state(5) to be set 00:25:06.014 [2024-07-14 03:57:24.766300] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb63720 is same with the state(5) to be set 00:25:06.014 [2024-07-14 03:57:24.767025] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb63bb0 is same with the state(5) to be set 00:25:06.014 [2024-07-14 03:57:24.767578] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb64040 is same with the state(5) to be set 00:25:06.014 [2024-07-14 03:57:24.767883] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb644d0 is same with the state(5) to be set 00:25:06.014 [2024-07-14 03:57:24.767911] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb644d0 is same with the state(5) to be set 00:25:06.014 [2024-07-14 03:57:24.767931] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb644d0 is same with the state(5) to be set 00:25:06.014 [2024-07-14 03:57:24.767944] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb644d0 is same with the state(5) to be set 00:25:06.014 [2024-07-14 03:57:24.767958] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb644d0 is same with the state(5) to be set 00:25:06.014 [2024-07-14 03:57:24.767970] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb644d0 is same with the state(5) to be set 00:25:06.014 [2024-07-14 03:57:24.767983] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb644d0 is same with the state(5) to be set 00:25:06.014 [2024-07-14 03:57:24.767996] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb644d0 is same with the state(5) to be set 00:25:06.014 [2024-07-14 03:57:24.768008] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0xb644d0 is same with the state(5) to be set 00:25:06.014 [2024-07-14 03:57:24.768020] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb644d0 is same with the state(5) to be set 00:25:06.014 [2024-07-14 03:57:24.768032] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb644d0 is same with the state(5) to be set 00:25:06.014 [2024-07-14 03:57:24.768045] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb644d0 is same with the state(5) to be set 00:25:06.014 [2024-07-14 03:57:24.768057] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb644d0 is same with the state(5) to be set 00:25:06.014 [2024-07-14 03:57:24.768070] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb644d0 is same with the state(5) to be set 00:25:06.014 [2024-07-14 03:57:24.768082] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb644d0 is same with the state(5) to be set 00:25:06.014 [2024-07-14 03:57:24.768094] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb644d0 is same with the state(5) to be set 00:25:06.014 [2024-07-14 03:57:24.768107] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb644d0 is same with the state(5) to be set 00:25:06.014 [2024-07-14 03:57:24.768119] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb644d0 is same with the state(5) to be set 00:25:06.014 [2024-07-14 03:57:24.768131] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb644d0 is same with the state(5) to be set 00:25:06.014 [2024-07-14 03:57:24.768143] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb644d0 is same with the state(5) to be set 00:25:06.014 [2024-07-14 03:57:24.768167] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb644d0 is same with the state(5) to be set 00:25:06.014 [2024-07-14 03:57:24.768179] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb644d0 is same with the state(5) to be set 00:25:06.014 [2024-07-14 03:57:24.768191] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb644d0 is same with the state(5) to be set 00:25:06.014 [2024-07-14 03:57:24.768203] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb644d0 is same with the state(5) to be set 00:25:06.014 [2024-07-14 03:57:24.768215] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb644d0 is same with the state(5) to be set 00:25:06.014 [2024-07-14 03:57:24.768228] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb644d0 is same with the state(5) to be set 00:25:06.014 [2024-07-14 03:57:24.768240] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb644d0 is same with the state(5) to be set 00:25:06.014 [2024-07-14 03:57:24.768252] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb644d0 is same with the state(5) to be set 00:25:06.014 [2024-07-14 03:57:24.768263] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb644d0 is same with the state(5) to be set 00:25:06.014 [2024-07-14 03:57:24.768279] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb644d0 is same with the state(5) to be set 00:25:06.014 [2024-07-14 03:57:24.768291] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb644d0 is same with the state(5) to be set 00:25:06.014 [2024-07-14 03:57:24.768303] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb644d0 is same with the state(5) to be set 00:25:06.014 [2024-07-14 03:57:24.768315] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb644d0 is same with the state(5) to be set 00:25:06.014 [2024-07-14 03:57:24.768327] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb644d0 is same with the state(5) to be set 00:25:06.014 [2024-07-14 03:57:24.768339] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb644d0 is same with the state(5) to be set 00:25:06.014 [2024-07-14 03:57:24.768351] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb644d0 is same with the state(5) to be set 00:25:06.014 [2024-07-14 03:57:24.768364] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb644d0 is same with the state(5) to be set 00:25:06.014 [2024-07-14 03:57:24.768376] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb644d0 is same with the state(5) to be set 00:25:06.014 [2024-07-14 03:57:24.768403] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb644d0 is same with the state(5) to be set 00:25:06.014 [2024-07-14 03:57:24.768416] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb644d0 is same with the state(5) to be set 00:25:06.014 [2024-07-14 03:57:24.768427] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb644d0 is same with the state(5) to be set 00:25:06.015 [2024-07-14 03:57:24.768440] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb644d0 is same with the state(5) to be set 00:25:06.015 [2024-07-14 03:57:24.768452] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb644d0 is same with the state(5) to be set 00:25:06.015 [2024-07-14 03:57:24.768464] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb644d0 is same with the state(5) to be set 00:25:06.015 [2024-07-14 03:57:24.768475] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb644d0 is same with the state(5) to be set 00:25:06.015 [2024-07-14 03:57:24.768488] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb644d0 is same with the state(5) to be set 00:25:06.015 [2024-07-14 03:57:24.768499] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb644d0 is same with the state(5) to be set 00:25:06.015 [2024-07-14 03:57:24.768511] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb644d0 is same with the state(5) to be set 00:25:06.015 [2024-07-14 03:57:24.768523] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb644d0 is same with the state(5) to be set 00:25:06.015 [2024-07-14 03:57:24.768535] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb644d0 is same with the state(5) to be set 00:25:06.015 [2024-07-14 03:57:24.768547] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb644d0 is same with the state(5) to be set 00:25:06.015 [2024-07-14 03:57:24.768558] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb644d0 is same with the state(5) to be set 
00:25:06.015 [2024-07-14 03:57:24.768570] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb644d0 is same with the state(5) to be set 00:25:06.015 [2024-07-14 03:57:24.768582] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb644d0 is same with the state(5) to be set 00:25:06.015 [2024-07-14 03:57:24.768594] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb644d0 is same with the state(5) to be set 00:25:06.015 [2024-07-14 03:57:24.768605] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb644d0 is same with the state(5) to be set 00:25:06.015 [2024-07-14 03:57:24.768620] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb644d0 is same with the state(5) to be set 00:25:06.015 [2024-07-14 03:57:24.768632] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb644d0 is same with the state(5) to be set 00:25:06.015 [2024-07-14 03:57:24.768644] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb644d0 is same with the state(5) to be set 00:25:06.015 [2024-07-14 03:57:24.768656] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb644d0 is same with the state(5) to be set 00:25:06.015 [2024-07-14 03:57:24.768669] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb644d0 is same with the state(5) to be set 00:25:06.015 [2024-07-14 03:57:24.768681] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb644d0 is same with the state(5) to be set 00:25:06.015 [2024-07-14 03:57:24.768693] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb644d0 is same with the state(5) to be set 00:25:06.015 [2024-07-14 03:57:24.781342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.015 [2024-07-14 03:57:24.781420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.015 [2024-07-14 03:57:24.781466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.015 [2024-07-14 03:57:24.781484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.015 [2024-07-14 03:57:24.781503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.015 [2024-07-14 03:57:24.781518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.015 [2024-07-14 03:57:24.781535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.015 [2024-07-14 03:57:24.781551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.015 [2024-07-14 03:57:24.781568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.015 [2024-07-14 03:57:24.781583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:25:06.015 [2024-07-14 03:57:24.781600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.015 [2024-07-14 03:57:24.781616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.015 [2024-07-14 03:57:24.781633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.015 [2024-07-14 03:57:24.781648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.015 [2024-07-14 03:57:24.781664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.015 [2024-07-14 03:57:24.781680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.015 [2024-07-14 03:57:24.781697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.015 [2024-07-14 03:57:24.781713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.015 [2024-07-14 03:57:24.781730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.015 [2024-07-14 03:57:24.781757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.015 [2024-07-14 03:57:24.781775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.015 [2024-07-14 03:57:24.781791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.015 [2024-07-14 03:57:24.781808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.015 [2024-07-14 03:57:24.781823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.015 [2024-07-14 03:57:24.781840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.015 [2024-07-14 03:57:24.781875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.015 [2024-07-14 03:57:24.781894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.015 [2024-07-14 03:57:24.781910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.015 [2024-07-14 03:57:24.781928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.015 [2024-07-14 03:57:24.781943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.015 
[2024-07-14 03:57:24.781960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.015 [2024-07-14 03:57:24.781976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.015 [2024-07-14 03:57:24.781993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.015 [2024-07-14 03:57:24.782008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.015 [2024-07-14 03:57:24.782024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.015 [2024-07-14 03:57:24.782040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.015 [2024-07-14 03:57:24.782057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.015 [2024-07-14 03:57:24.782072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.015 [2024-07-14 03:57:24.782088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.015 [2024-07-14 03:57:24.782104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.015 [2024-07-14 03:57:24.782121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.015 [2024-07-14 03:57:24.782137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.015 [2024-07-14 03:57:24.782158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.015 [2024-07-14 03:57:24.782174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.015 [2024-07-14 03:57:24.782195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.015 [2024-07-14 03:57:24.782211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.015 [2024-07-14 03:57:24.782228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.015 [2024-07-14 03:57:24.782244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.015 [2024-07-14 03:57:24.782260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.015 [2024-07-14 03:57:24.782276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.015 [2024-07-14 
03:57:24.782292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.015 [2024-07-14 03:57:24.782308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.015 [2024-07-14 03:57:24.782324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.015 [2024-07-14 03:57:24.782340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.015 [2024-07-14 03:57:24.782357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.015 [2024-07-14 03:57:24.782372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.015 [2024-07-14 03:57:24.782389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.015 [2024-07-14 03:57:24.782405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.015 [2024-07-14 03:57:24.782422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.016 [2024-07-14 03:57:24.782437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.016 [2024-07-14 03:57:24.782454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.016 [2024-07-14 03:57:24.782469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.016 [2024-07-14 03:57:24.782486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.016 [2024-07-14 03:57:24.782502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.016 [2024-07-14 03:57:24.782519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.016 [2024-07-14 03:57:24.782535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.016 [2024-07-14 03:57:24.782551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.016 [2024-07-14 03:57:24.782567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.016 [2024-07-14 03:57:24.782584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.016 [2024-07-14 03:57:24.782602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.016 [2024-07-14 03:57:24.782620] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.016 [2024-07-14 03:57:24.782635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.016 [2024-07-14 03:57:24.782652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.016 [2024-07-14 03:57:24.782667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.016 [2024-07-14 03:57:24.782684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.016 [2024-07-14 03:57:24.782699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.016 [2024-07-14 03:57:24.782716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.016 [2024-07-14 03:57:24.782732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.016 [2024-07-14 03:57:24.782749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.016 [2024-07-14 03:57:24.782764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.016 [2024-07-14 03:57:24.782781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.016 [2024-07-14 03:57:24.782796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.016 [2024-07-14 03:57:24.782812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.016 [2024-07-14 03:57:24.782827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.016 [2024-07-14 03:57:24.782844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.016 [2024-07-14 03:57:24.782861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.016 [2024-07-14 03:57:24.782885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.016 [2024-07-14 03:57:24.782900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.016 [2024-07-14 03:57:24.782917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.016 [2024-07-14 03:57:24.782932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.016 [2024-07-14 03:57:24.782949] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:33792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.016 [2024-07-14 03:57:24.782964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.016 [2024-07-14 03:57:24.782980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.016 [2024-07-14 03:57:24.782995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.016 [2024-07-14 03:57:24.783015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:34048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.016 [2024-07-14 03:57:24.783030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.016 [2024-07-14 03:57:24.783047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.016 [2024-07-14 03:57:24.783063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.016 [2024-07-14 03:57:24.783079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.016 [2024-07-14 03:57:24.783095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.016 [2024-07-14 03:57:24.783111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.016 [2024-07-14 03:57:24.783126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.016 [2024-07-14 03:57:24.783143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:34176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.016 [2024-07-14 03:57:24.783158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.016 [2024-07-14 03:57:24.783175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.016 [2024-07-14 03:57:24.783190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.016 [2024-07-14 03:57:24.783206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.016 [2024-07-14 03:57:24.783221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.016 [2024-07-14 03:57:24.783238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.016 [2024-07-14 03:57:24.783253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.016 [2024-07-14 03:57:24.783272] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.016 [2024-07-14 03:57:24.783287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.016 [2024-07-14 03:57:24.783304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:34432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.016 [2024-07-14 03:57:24.783319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.016 [2024-07-14 03:57:24.783337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.016 [2024-07-14 03:57:24.783352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.016 [2024-07-14 03:57:24.783368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.016 [2024-07-14 03:57:24.783383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.016 [2024-07-14 03:57:24.783400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.016 [2024-07-14 03:57:24.783419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.016 [2024-07-14 03:57:24.783436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.016 [2024-07-14 03:57:24.783451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.016 [2024-07-14 03:57:24.783468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.016 [2024-07-14 03:57:24.783483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.016 [2024-07-14 03:57:24.783499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.016 [2024-07-14 03:57:24.783514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.016 [2024-07-14 03:57:24.783531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.016 [2024-07-14 03:57:24.783546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.016 [2024-07-14 03:57:24.783561] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d903f0 is same with the state(5) to be set 00:25:06.016 [2024-07-14 03:57:24.783649] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1d903f0 was disconnected and freed. reset controller. 
00:25:06.016 [2024-07-14 03:57:24.791442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.016 [2024-07-14 03:57:24.791511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.016 [2024-07-14 03:57:24.791551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.016 [2024-07-14 03:57:24.791568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.016 [2024-07-14 03:57:24.791586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.016 [2024-07-14 03:57:24.791602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.016 [2024-07-14 03:57:24.791619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.016 [2024-07-14 03:57:24.791635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.016 [2024-07-14 03:57:24.791652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.016 [2024-07-14 03:57:24.791668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.016 [2024-07-14 03:57:24.791684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.016 [2024-07-14 03:57:24.791700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.016 [2024-07-14 03:57:24.791718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.016 [2024-07-14 03:57:24.791733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.017 [2024-07-14 03:57:24.791762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.017 [2024-07-14 03:57:24.791779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.017 [2024-07-14 03:57:24.791796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.017 [2024-07-14 03:57:24.791812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.017 [2024-07-14 03:57:24.791828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.017 [2024-07-14 03:57:24.791844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.017 [2024-07-14 
03:57:24.791861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.017 [2024-07-14 03:57:24.791888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.017 [2024-07-14 03:57:24.791906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.017 [2024-07-14 03:57:24.791922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.017 [2024-07-14 03:57:24.791939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.017 [2024-07-14 03:57:24.791954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.017 [2024-07-14 03:57:24.791971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.017 [2024-07-14 03:57:24.791986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.017 [2024-07-14 03:57:24.792003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.017 [2024-07-14 03:57:24.792019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.017 [2024-07-14 03:57:24.792036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.017 [2024-07-14 03:57:24.792051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.017 [2024-07-14 03:57:24.792068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.017 [2024-07-14 03:57:24.792083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.017 [2024-07-14 03:57:24.792100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.017 [2024-07-14 03:57:24.792115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.017 [2024-07-14 03:57:24.792132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.017 [2024-07-14 03:57:24.792147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.017 [2024-07-14 03:57:24.792164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.017 [2024-07-14 03:57:24.792183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.017 [2024-07-14 03:57:24.792201] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.017 [2024-07-14 03:57:24.792216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.017 [2024-07-14 03:57:24.792234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.017 [2024-07-14 03:57:24.792249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.017 [2024-07-14 03:57:24.792266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.017 [2024-07-14 03:57:24.792281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.017 [2024-07-14 03:57:24.792298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.017 [2024-07-14 03:57:24.792314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.017 [2024-07-14 03:57:24.792331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.017 [2024-07-14 03:57:24.792346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.017 [2024-07-14 03:57:24.792363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.017 [2024-07-14 03:57:24.792378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.017 [2024-07-14 03:57:24.792395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.017 [2024-07-14 03:57:24.792410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.017 [2024-07-14 03:57:24.792426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.017 [2024-07-14 03:57:24.792442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.017 [2024-07-14 03:57:24.792459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.017 [2024-07-14 03:57:24.792475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.017 [2024-07-14 03:57:24.792491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.017 [2024-07-14 03:57:24.792507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.017 [2024-07-14 03:57:24.792523] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.017 [2024-07-14 03:57:24.792539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.017 [2024-07-14 03:57:24.792555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.017 [2024-07-14 03:57:24.792571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.017 [2024-07-14 03:57:24.792592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.017 [2024-07-14 03:57:24.792608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.017 [2024-07-14 03:57:24.792625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.017 [2024-07-14 03:57:24.792640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.017 [2024-07-14 03:57:24.792657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.017 [2024-07-14 03:57:24.792673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.017 [2024-07-14 03:57:24.792690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.017 [2024-07-14 03:57:24.792705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.017 [2024-07-14 03:57:24.792723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.017 [2024-07-14 03:57:24.792738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.017 [2024-07-14 03:57:24.792756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.017 [2024-07-14 03:57:24.792771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.017 [2024-07-14 03:57:24.792788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.017 [2024-07-14 03:57:24.792804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.017 [2024-07-14 03:57:24.792820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.017 [2024-07-14 03:57:24.792836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.017 [2024-07-14 03:57:24.792853] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.017 [2024-07-14 03:57:24.792876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.017 [2024-07-14 03:57:24.792896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.017 [2024-07-14 03:57:24.792911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.017 [2024-07-14 03:57:24.792929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.017 [2024-07-14 03:57:24.792944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.017 [2024-07-14 03:57:24.792960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.017 [2024-07-14 03:57:24.792975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.017 [2024-07-14 03:57:24.792992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.017 [2024-07-14 03:57:24.793011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.017 [2024-07-14 03:57:24.793029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.017 [2024-07-14 03:57:24.793045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.017 [2024-07-14 03:57:24.793062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.017 [2024-07-14 03:57:24.793078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.017 [2024-07-14 03:57:24.793094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.017 [2024-07-14 03:57:24.793109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.017 [2024-07-14 03:57:24.793125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.017 [2024-07-14 03:57:24.793140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.017 [2024-07-14 03:57:24.793156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.017 [2024-07-14 03:57:24.793172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.018 [2024-07-14 03:57:24.793189] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.018 [2024-07-14 03:57:24.793205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.018 [2024-07-14 03:57:24.793221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.018 [2024-07-14 03:57:24.793236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.018 [2024-07-14 03:57:24.793253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.018 [2024-07-14 03:57:24.793268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.018 [2024-07-14 03:57:24.793284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.018 [2024-07-14 03:57:24.793299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.018 [2024-07-14 03:57:24.793316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.018 [2024-07-14 03:57:24.793331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.018 [2024-07-14 03:57:24.793347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.018 [2024-07-14 03:57:24.793363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.018 [2024-07-14 03:57:24.793381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:33792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.018 [2024-07-14 03:57:24.793396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.018 [2024-07-14 03:57:24.793417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.018 [2024-07-14 03:57:24.793432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.018 [2024-07-14 03:57:24.793449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:34048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.018 [2024-07-14 03:57:24.793464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.018 [2024-07-14 03:57:24.793482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:34176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.018 [2024-07-14 03:57:24.793497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.018 [2024-07-14 03:57:24.793514] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.018 [2024-07-14 03:57:24.793530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.018 [2024-07-14 03:57:24.793546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:34432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.018 [2024-07-14 03:57:24.793562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.018 [2024-07-14 03:57:24.793578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.018 [2024-07-14 03:57:24.793594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.018 [2024-07-14 03:57:24.793610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.018 [2024-07-14 03:57:24.793625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.018 [2024-07-14 03:57:24.793640] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d92fb0 is same with the state(5) to be set 00:25:06.018 [2024-07-14 03:57:24.793727] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1d92fb0 was disconnected and freed. reset controller. 00:25:06.018 [2024-07-14 03:57:24.801589] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:06.018 [2024-07-14 03:57:24.801664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.018 [2024-07-14 03:57:24.801690] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:06.018 [2024-07-14 03:57:24.801705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.018 [2024-07-14 03:57:24.801720] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:06.018 [2024-07-14 03:57:24.801735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.018 [2024-07-14 03:57:24.801750] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:06.018 [2024-07-14 03:57:24.801764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.018 [2024-07-14 03:57:24.801778] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e2b850 is same with the state(5) to be set 00:25:06.018 [2024-07-14 03:57:24.801830] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:06.018 [2024-07-14 03:57:24.801861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.018 [2024-07-14 03:57:24.801889] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:06.018 [2024-07-14 03:57:24.801904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.018 [2024-07-14 03:57:24.801919] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:06.018 [2024-07-14 03:57:24.801934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.018 [2024-07-14 03:57:24.801949] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:06.018 [2024-07-14 03:57:24.801963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.018 [2024-07-14 03:57:24.801977] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4950 is same with the state(5) to be set 00:25:06.018 [2024-07-14 03:57:24.802014] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce8ba0 (9): Bad file descriptor 00:25:06.018 [2024-07-14 03:57:24.802048] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5e40 (9): Bad file descriptor 00:25:06.018 [2024-07-14 03:57:24.802099] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:06.018 [2024-07-14 03:57:24.802120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.018 [2024-07-14 03:57:24.802136] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:06.018 [2024-07-14 03:57:24.802150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.018 [2024-07-14 03:57:24.802165] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:06.018 [2024-07-14 03:57:24.802180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.018 [2024-07-14 03:57:24.802195] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:06.018 [2024-07-14 03:57:24.802209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.018 [2024-07-14 03:57:24.802223] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e2b420 is same with the state(5) to be set 00:25:06.018 [2024-07-14 03:57:24.802255] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c42eb0 (9): Bad file descriptor 00:25:06.018 [2024-07-14 03:57:24.802289] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf3530 (9): Bad file descriptor 00:25:06.018 [2024-07-14 03:57:24.802344] nvme_qpair.c: 223:nvme_admin_qpair_print_command: 
*NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:06.018 [2024-07-14 03:57:24.802364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.018 [2024-07-14 03:57:24.802381] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:06.018 [2024-07-14 03:57:24.802396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.018 [2024-07-14 03:57:24.802411] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:06.018 [2024-07-14 03:57:24.802430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.018 [2024-07-14 03:57:24.802446] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:06.018 [2024-07-14 03:57:24.802460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.018 [2024-07-14 03:57:24.802474] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e641f0 is same with the state(5) to be set 00:25:06.018 [2024-07-14 03:57:24.802502] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d120a0 (9): Bad file descriptor 00:25:06.018 [2024-07-14 03:57:24.802551] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:06.018 [2024-07-14 03:57:24.802572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.018 [2024-07-14 03:57:24.802588] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:06.018 [2024-07-14 03:57:24.802602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.018 [2024-07-14 03:57:24.802628] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:06.018 [2024-07-14 03:57:24.802642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.018 [2024-07-14 03:57:24.802657] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:06.018 [2024-07-14 03:57:24.802671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.018 [2024-07-14 03:57:24.802685] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d079f0 is same with the state(5) to be set 00:25:06.018 [2024-07-14 03:57:24.802914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.018 [2024-07-14 03:57:24.802940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.018 
[2024-07-14 03:57:24.802970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.018 [2024-07-14 03:57:24.802987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.018 [2024-07-14 03:57:24.803005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.018 [2024-07-14 03:57:24.803020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.018 [2024-07-14 03:57:24.803037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.019 [2024-07-14 03:57:24.803052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.019 [2024-07-14 03:57:24.803069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.019 [2024-07-14 03:57:24.803084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.019 [2024-07-14 03:57:24.803101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.019 [2024-07-14 03:57:24.803121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.019 [2024-07-14 03:57:24.803139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.019 [2024-07-14 03:57:24.803154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.019 [2024-07-14 03:57:24.803171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.019 [2024-07-14 03:57:24.803185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.019 [2024-07-14 03:57:24.803202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.019 [2024-07-14 03:57:24.803217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.019 [2024-07-14 03:57:24.803234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.019 [2024-07-14 03:57:24.803249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.019 [2024-07-14 03:57:24.803266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.019 [2024-07-14 03:57:24.803281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.019 [2024-07-14 03:57:24.803298] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.019 [2024-07-14 03:57:24.803313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.019 [2024-07-14 03:57:24.803330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.019 [2024-07-14 03:57:24.803345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.019 [2024-07-14 03:57:24.803361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.019 [2024-07-14 03:57:24.803376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.019 [2024-07-14 03:57:24.803393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.019 [2024-07-14 03:57:24.803408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.019 [2024-07-14 03:57:24.803424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.019 [2024-07-14 03:57:24.803439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.019 [2024-07-14 03:57:24.803456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.019 [2024-07-14 03:57:24.803471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.019 [2024-07-14 03:57:24.803487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.019 [2024-07-14 03:57:24.803502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.019 [2024-07-14 03:57:24.803527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.019 [2024-07-14 03:57:24.803543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.019 [2024-07-14 03:57:24.803560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.019 [2024-07-14 03:57:24.803575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.019 [2024-07-14 03:57:24.803591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.019 [2024-07-14 03:57:24.803606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.019 [2024-07-14 03:57:24.803623] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.019 [2024-07-14 03:57:24.803638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.020 [2024-07-14 03:57:24.803655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.020 [2024-07-14 03:57:24.803670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.020 [2024-07-14 03:57:24.803687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.020 [2024-07-14 03:57:24.803703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.020 [2024-07-14 03:57:24.803719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.020 [2024-07-14 03:57:24.803735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.020 [2024-07-14 03:57:24.803752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.020 [2024-07-14 03:57:24.803767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.020 [2024-07-14 03:57:24.803783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.020 [2024-07-14 03:57:24.803798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.020 [2024-07-14 03:57:24.803814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.020 [2024-07-14 03:57:24.803829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.020 [2024-07-14 03:57:24.803846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.020 [2024-07-14 03:57:24.803861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.020 [2024-07-14 03:57:24.803885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.020 [2024-07-14 03:57:24.803901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.020 [2024-07-14 03:57:24.803919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.020 [2024-07-14 03:57:24.803938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.020 [2024-07-14 03:57:24.803956] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.020 [2024-07-14 03:57:24.803971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.020 [2024-07-14 03:57:24.803988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.020 [2024-07-14 03:57:24.804003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.020 [2024-07-14 03:57:24.804020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.020 [2024-07-14 03:57:24.804035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.020 [2024-07-14 03:57:24.804052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.020 [2024-07-14 03:57:24.804068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.020 [2024-07-14 03:57:24.804085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.020 [2024-07-14 03:57:24.804100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.020 [2024-07-14 03:57:24.804117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.020 [2024-07-14 03:57:24.804132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.020 [2024-07-14 03:57:24.804148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.020 [2024-07-14 03:57:24.804163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.020 [2024-07-14 03:57:24.804180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.020 [2024-07-14 03:57:24.804195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.020 [2024-07-14 03:57:24.804212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.020 [2024-07-14 03:57:24.804227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.020 [2024-07-14 03:57:24.804244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.020 [2024-07-14 03:57:24.804258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.020 [2024-07-14 03:57:24.804275] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.020 [2024-07-14 03:57:24.804289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.020 [2024-07-14 03:57:24.804306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.020 [2024-07-14 03:57:24.804321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.020 [2024-07-14 03:57:24.804341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.020 [2024-07-14 03:57:24.804356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.020 [2024-07-14 03:57:24.804373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.020 [2024-07-14 03:57:24.804387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.020 [2024-07-14 03:57:24.804403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.020 [2024-07-14 03:57:24.804419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.020 [2024-07-14 03:57:24.804435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.020 [2024-07-14 03:57:24.804450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.020 [2024-07-14 03:57:24.804467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.020 [2024-07-14 03:57:24.804482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.020 [2024-07-14 03:57:24.804499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.020 [2024-07-14 03:57:24.804513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.020 [2024-07-14 03:57:24.804530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.020 [2024-07-14 03:57:24.804545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.020 [2024-07-14 03:57:24.804561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.020 [2024-07-14 03:57:24.804576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.020 [2024-07-14 03:57:24.804593] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.020 [2024-07-14 03:57:24.804608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.020 [2024-07-14 03:57:24.804625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.020 [2024-07-14 03:57:24.804640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.020 [2024-07-14 03:57:24.804656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.020 [2024-07-14 03:57:24.804671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.020 [2024-07-14 03:57:24.804688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.020 [2024-07-14 03:57:24.804702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.020 [2024-07-14 03:57:24.804719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.020 [2024-07-14 03:57:24.804738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.020 [2024-07-14 03:57:24.804754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:33792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.020 [2024-07-14 03:57:24.804769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.020 [2024-07-14 03:57:24.804786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.020 [2024-07-14 03:57:24.804801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.020 [2024-07-14 03:57:24.804817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:34048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.020 [2024-07-14 03:57:24.804832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.020 [2024-07-14 03:57:24.804848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:34176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.020 [2024-07-14 03:57:24.804863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.020 [2024-07-14 03:57:24.804886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.020 [2024-07-14 03:57:24.804901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.020 [2024-07-14 03:57:24.804919] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:34432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.020 [2024-07-14 03:57:24.804934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.020 [2024-07-14 03:57:24.804951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.020 [2024-07-14 03:57:24.804965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.020 [2024-07-14 03:57:24.804981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.020 [2024-07-14 03:57:24.804996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.020 [2024-07-14 03:57:24.805090] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1cd5730 was disconnected and freed. reset controller. 00:25:06.020 [2024-07-14 03:57:24.805155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.021 [2024-07-14 03:57:24.805176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.021 [2024-07-14 03:57:24.805198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.021 [2024-07-14 03:57:24.805215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.021 [2024-07-14 03:57:24.805233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.021 [2024-07-14 03:57:24.805248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.021 [2024-07-14 03:57:24.805265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.021 [2024-07-14 03:57:24.805285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.021 [2024-07-14 03:57:24.805303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.021 [2024-07-14 03:57:24.805317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.021 [2024-07-14 03:57:24.805335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.021 [2024-07-14 03:57:24.805350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.021 [2024-07-14 03:57:24.805366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.021 [2024-07-14 03:57:24.805381] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.021 [2024-07-14 03:57:24.805398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.021 [2024-07-14 03:57:24.805413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.021 [2024-07-14 03:57:24.805430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.021 [2024-07-14 03:57:24.805445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.021 [2024-07-14 03:57:24.805462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.021 [2024-07-14 03:57:24.805476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.021 [2024-07-14 03:57:24.805493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.021 [2024-07-14 03:57:24.805509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.021 [2024-07-14 03:57:24.805525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.021 [2024-07-14 03:57:24.805540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.021 [2024-07-14 03:57:24.805557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.021 [2024-07-14 03:57:24.805572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.021 [2024-07-14 03:57:24.805588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.021 [2024-07-14 03:57:24.805603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.021 [2024-07-14 03:57:24.805621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.021 [2024-07-14 03:57:24.805636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.021 [2024-07-14 03:57:24.805652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.021 [2024-07-14 03:57:24.805667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.021 [2024-07-14 03:57:24.805687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.021 [2024-07-14 03:57:24.805703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.021 [2024-07-14 03:57:24.805719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.021 [2024-07-14 03:57:24.805734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.021 [2024-07-14 03:57:24.805751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.021 [2024-07-14 03:57:24.805766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.021 [2024-07-14 03:57:24.805783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.021 [2024-07-14 03:57:24.805799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.021 [2024-07-14 03:57:24.805815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.021 [2024-07-14 03:57:24.805830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.021 [2024-07-14 03:57:24.805846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.021 [2024-07-14 03:57:24.805861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.021 [2024-07-14 03:57:24.805885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.021 [2024-07-14 03:57:24.805901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.021 [2024-07-14 03:57:24.805917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.021 [2024-07-14 03:57:24.805932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.021 [2024-07-14 03:57:24.805949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.021 [2024-07-14 03:57:24.805964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.021 [2024-07-14 03:57:24.805981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.021 [2024-07-14 03:57:24.805996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.021 [2024-07-14 03:57:24.806012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.021 [2024-07-14 03:57:24.806027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.021 [2024-07-14 03:57:24.806043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.021 [2024-07-14 03:57:24.806059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.021 [2024-07-14 03:57:24.806075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.021 [2024-07-14 03:57:24.806094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.021 [2024-07-14 03:57:24.806112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.021 [2024-07-14 03:57:24.806127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.021 [2024-07-14 03:57:24.806143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.021 [2024-07-14 03:57:24.806159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.021 [2024-07-14 03:57:24.806175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.021 [2024-07-14 03:57:24.806190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.021 [2024-07-14 03:57:24.806206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.021 [2024-07-14 03:57:24.806221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.021 [2024-07-14 03:57:24.806238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.021 [2024-07-14 03:57:24.806254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.021 [2024-07-14 03:57:24.806271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.021 [2024-07-14 03:57:24.806285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.021 [2024-07-14 03:57:24.806302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.021 [2024-07-14 03:57:24.806317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.021 [2024-07-14 03:57:24.806334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.021 [2024-07-14 03:57:24.806349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:25:06.021 [2024-07-14 03:57:24.806365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.021 [2024-07-14 03:57:24.806380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.021 [2024-07-14 03:57:24.806397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.021 [2024-07-14 03:57:24.806412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.021 [2024-07-14 03:57:24.806429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.021 [2024-07-14 03:57:24.806444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.021 [2024-07-14 03:57:24.806460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.021 [2024-07-14 03:57:24.806475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.021 [2024-07-14 03:57:24.806495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.021 [2024-07-14 03:57:24.806510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.021 [2024-07-14 03:57:24.806527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.021 [2024-07-14 03:57:24.806542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.022 [2024-07-14 03:57:24.806558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.022 [2024-07-14 03:57:24.806573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.022 [2024-07-14 03:57:24.806590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.022 [2024-07-14 03:57:24.806604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.022 [2024-07-14 03:57:24.806621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.022 [2024-07-14 03:57:24.806636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.022 [2024-07-14 03:57:24.806652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.022 [2024-07-14 03:57:24.806667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:25:06.022 [2024-07-14 03:57:24.806684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.022 [2024-07-14 03:57:24.806698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.022 [2024-07-14 03:57:24.806715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.022 [2024-07-14 03:57:24.806730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.022 [2024-07-14 03:57:24.806746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.022 [2024-07-14 03:57:24.806761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.022 [2024-07-14 03:57:24.806777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.022 [2024-07-14 03:57:24.806792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.022 [2024-07-14 03:57:24.806809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.022 [2024-07-14 03:57:24.806824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.022 [2024-07-14 03:57:24.806840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.022 [2024-07-14 03:57:24.806855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.022 [2024-07-14 03:57:24.806878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.022 [2024-07-14 03:57:24.806898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.022 [2024-07-14 03:57:24.806915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.022 [2024-07-14 03:57:24.806930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.022 [2024-07-14 03:57:24.806946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.022 [2024-07-14 03:57:24.806962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.022 [2024-07-14 03:57:24.806979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:33792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.022 [2024-07-14 03:57:24.806993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.022 
[2024-07-14 03:57:24.807010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.022 [2024-07-14 03:57:24.807024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.022 [2024-07-14 03:57:24.807041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:34048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.022 [2024-07-14 03:57:24.807056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.022 [2024-07-14 03:57:24.807072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:34176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.022 [2024-07-14 03:57:24.807087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.022 [2024-07-14 03:57:24.807103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.022 [2024-07-14 03:57:24.807118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.022 [2024-07-14 03:57:24.807134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:34432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.022 [2024-07-14 03:57:24.807149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.022 [2024-07-14 03:57:24.807166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.022 [2024-07-14 03:57:24.807180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.022 [2024-07-14 03:57:24.807196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.022 [2024-07-14 03:57:24.807212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.022 [2024-07-14 03:57:24.807311] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1e17af0 was disconnected and freed. reset controller. 
00:25:06.022 [2024-07-14 03:57:24.807372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.022 [2024-07-14 03:57:24.807392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.022 [2024-07-14 03:57:24.807414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.022 [2024-07-14 03:57:24.807436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.022 [2024-07-14 03:57:24.807454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.022 [2024-07-14 03:57:24.807469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.022 [2024-07-14 03:57:24.807486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.022 [2024-07-14 03:57:24.807501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.022 [2024-07-14 03:57:24.807518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.022 [2024-07-14 03:57:24.807533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.022 [2024-07-14 03:57:24.807549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.022 [2024-07-14 03:57:24.807565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.022 [2024-07-14 03:57:24.807582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.022 [2024-07-14 03:57:24.807597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.022 [2024-07-14 03:57:24.807613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.022 [2024-07-14 03:57:24.807628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.022 [2024-07-14 03:57:24.807645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.022 [2024-07-14 03:57:24.807660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.022 [2024-07-14 03:57:24.807676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.022 [2024-07-14 03:57:24.807691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.022 [2024-07-14 
03:57:24.807707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.022 [2024-07-14 03:57:24.807722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.022 [2024-07-14 03:57:24.807739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.022 [2024-07-14 03:57:24.807755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.022 [2024-07-14 03:57:24.807772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.022 [2024-07-14 03:57:24.807787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.022 [2024-07-14 03:57:24.807803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.022 [2024-07-14 03:57:24.807818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.022 [2024-07-14 03:57:24.807835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.022 [2024-07-14 03:57:24.807854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.022 [2024-07-14 03:57:24.807877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.022 [2024-07-14 03:57:24.807894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.022 [2024-07-14 03:57:24.807911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.022 [2024-07-14 03:57:24.807927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.022 [2024-07-14 03:57:24.807944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.022 [2024-07-14 03:57:24.807959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.022 [2024-07-14 03:57:24.807975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.022 [2024-07-14 03:57:24.807990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.022 [2024-07-14 03:57:24.808007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.022 [2024-07-14 03:57:24.808022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.022 [2024-07-14 03:57:24.808038] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.022 [2024-07-14 03:57:24.808053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.023 [2024-07-14 03:57:24.808069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.023 [2024-07-14 03:57:24.808085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.023 [2024-07-14 03:57:24.808101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.023 [2024-07-14 03:57:24.808116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.023 [2024-07-14 03:57:24.808132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.023 [2024-07-14 03:57:24.808147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.023 [2024-07-14 03:57:24.808163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.023 [2024-07-14 03:57:24.808178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.023 [2024-07-14 03:57:24.808194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.023 [2024-07-14 03:57:24.808209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.023 [2024-07-14 03:57:24.808225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.023 [2024-07-14 03:57:24.808240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.023 [2024-07-14 03:57:24.808260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.023 [2024-07-14 03:57:24.808275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.023 [2024-07-14 03:57:24.808291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.023 [2024-07-14 03:57:24.808306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.023 [2024-07-14 03:57:24.808323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.023 [2024-07-14 03:57:24.808338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.023 [2024-07-14 03:57:24.808354] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.023 [2024-07-14 03:57:24.808369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.023 [2024-07-14 03:57:24.808386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.023 [2024-07-14 03:57:24.808401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.023 [2024-07-14 03:57:24.808417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.023 [2024-07-14 03:57:24.808432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.023 [2024-07-14 03:57:24.808448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.023 [2024-07-14 03:57:24.808463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.023 [2024-07-14 03:57:24.808480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.023 [2024-07-14 03:57:24.808495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.023 [2024-07-14 03:57:24.808512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.023 [2024-07-14 03:57:24.808527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.023 [2024-07-14 03:57:24.808543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.023 [2024-07-14 03:57:24.808558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.023 [2024-07-14 03:57:24.808575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.023 [2024-07-14 03:57:24.808590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.023 [2024-07-14 03:57:24.808606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.023 [2024-07-14 03:57:24.808622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.023 [2024-07-14 03:57:24.808637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.023 [2024-07-14 03:57:24.808656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.023 [2024-07-14 03:57:24.808673] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.023 [2024-07-14 03:57:24.808688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.023 [2024-07-14 03:57:24.808705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.023 [2024-07-14 03:57:24.808720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.023 [2024-07-14 03:57:24.808736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.023 [2024-07-14 03:57:24.808751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.023 [2024-07-14 03:57:24.808768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.023 [2024-07-14 03:57:24.808782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.023 [2024-07-14 03:57:24.808799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.023 [2024-07-14 03:57:24.808814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.023 [2024-07-14 03:57:24.808830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.023 [2024-07-14 03:57:24.808845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.023 [2024-07-14 03:57:24.808861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.023 [2024-07-14 03:57:24.808884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.023 [2024-07-14 03:57:24.808907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.023 [2024-07-14 03:57:24.808924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.023 [2024-07-14 03:57:24.808941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.023 [2024-07-14 03:57:24.808956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.023 [2024-07-14 03:57:24.808972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.023 [2024-07-14 03:57:24.808987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.023 [2024-07-14 03:57:24.809003] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.023 [2024-07-14 03:57:24.809018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.023 [2024-07-14 03:57:24.809035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.023 [2024-07-14 03:57:24.809050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.023 [2024-07-14 03:57:24.809069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.023 [2024-07-14 03:57:24.809085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.023 [2024-07-14 03:57:24.809102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.023 [2024-07-14 03:57:24.809118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.023 [2024-07-14 03:57:24.809134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.023 [2024-07-14 03:57:24.809150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.023 [2024-07-14 03:57:24.809166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.023 [2024-07-14 03:57:24.809181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.023 [2024-07-14 03:57:24.809197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:33792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.023 [2024-07-14 03:57:24.809212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.023 [2024-07-14 03:57:24.809229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.024 [2024-07-14 03:57:24.809243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.024 [2024-07-14 03:57:24.809260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:34048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.024 [2024-07-14 03:57:24.809275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.024 [2024-07-14 03:57:24.809291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:34176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.024 [2024-07-14 03:57:24.809306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.024 [2024-07-14 03:57:24.809322] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.024 [2024-07-14 03:57:24.809337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.024 [2024-07-14 03:57:24.809353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:34432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.024 [2024-07-14 03:57:24.809368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.024 [2024-07-14 03:57:24.809384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.024 [2024-07-14 03:57:24.809399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.024 [2024-07-14 03:57:24.809416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.024 [2024-07-14 03:57:24.809431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.024 [2024-07-14 03:57:24.809524] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1e19030 was disconnected and freed. reset controller. 00:25:06.024 [2024-07-14 03:57:24.810732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.024 [2024-07-14 03:57:24.810757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.024 [2024-07-14 03:57:24.810782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.024 [2024-07-14 03:57:24.810799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.024 [2024-07-14 03:57:24.810816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.024 [2024-07-14 03:57:24.810832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.024 [2024-07-14 03:57:24.810849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.024 [2024-07-14 03:57:24.810871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.024 [2024-07-14 03:57:24.810889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.024 [2024-07-14 03:57:24.810905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.024 [2024-07-14 03:57:24.810922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.024 [2024-07-14 03:57:24.810938] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.024 [2024-07-14 03:57:24.810955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.024 [2024-07-14 03:57:24.810970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.024 [2024-07-14 03:57:24.810987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.024 [2024-07-14 03:57:24.811002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.024 [2024-07-14 03:57:24.811019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.024 [2024-07-14 03:57:24.811034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.024 [2024-07-14 03:57:24.811051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.024 [2024-07-14 03:57:24.811066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.024 [2024-07-14 03:57:24.811083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.024 [2024-07-14 03:57:24.811098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.024 [2024-07-14 03:57:24.811115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.024 [2024-07-14 03:57:24.811130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.024 [2024-07-14 03:57:24.811147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.024 [2024-07-14 03:57:24.811167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.024 [2024-07-14 03:57:24.811185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.024 [2024-07-14 03:57:24.811200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.024 [2024-07-14 03:57:24.811217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.024 [2024-07-14 03:57:24.811232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.024 [2024-07-14 03:57:24.811249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.024 [2024-07-14 03:57:24.811265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.024 [2024-07-14 03:57:24.811282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.024 [2024-07-14 03:57:24.811297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.024 [2024-07-14 03:57:24.811314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.024 [2024-07-14 03:57:24.811329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.024 [2024-07-14 03:57:24.811346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.024 [2024-07-14 03:57:24.811361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.024 [2024-07-14 03:57:24.811378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.024 [2024-07-14 03:57:24.811394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.024 [2024-07-14 03:57:24.811410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.024 [2024-07-14 03:57:24.811425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.024 [2024-07-14 03:57:24.811443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.024 [2024-07-14 03:57:24.811459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.024 [2024-07-14 03:57:24.811475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.024 [2024-07-14 03:57:24.811490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.024 [2024-07-14 03:57:24.811507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.024 [2024-07-14 03:57:24.811522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.024 [2024-07-14 03:57:24.811539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.024 [2024-07-14 03:57:24.811554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.024 [2024-07-14 03:57:24.811574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.024 [2024-07-14 03:57:24.811590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.024 [2024-07-14 03:57:24.811607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.024 [2024-07-14 03:57:24.811622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.024 [2024-07-14 03:57:24.811639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.024 [2024-07-14 03:57:24.811654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.024 [2024-07-14 03:57:24.811671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.024 [2024-07-14 03:57:24.811686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.024 [2024-07-14 03:57:24.811703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.024 [2024-07-14 03:57:24.811718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.024 [2024-07-14 03:57:24.811735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.024 [2024-07-14 03:57:24.811750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.024 [2024-07-14 03:57:24.811766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.024 [2024-07-14 03:57:24.811782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.024 [2024-07-14 03:57:24.811799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.024 [2024-07-14 03:57:24.811814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.024 [2024-07-14 03:57:24.811831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.024 [2024-07-14 03:57:24.811846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.024 [2024-07-14 03:57:24.811863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.024 [2024-07-14 03:57:24.811887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.024 [2024-07-14 03:57:24.811904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.025 [2024-07-14 03:57:24.811919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:25:06.025 [2024-07-14 03:57:24.811936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.025 [2024-07-14 03:57:24.811951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.025 [2024-07-14 03:57:24.811967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.025 [2024-07-14 03:57:24.811986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.025 [2024-07-14 03:57:24.812004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.025 [2024-07-14 03:57:24.812020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.025 [2024-07-14 03:57:24.812036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.025 [2024-07-14 03:57:24.812052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.025 [2024-07-14 03:57:24.812068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.025 [2024-07-14 03:57:24.812083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.025 [2024-07-14 03:57:24.812101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.025 [2024-07-14 03:57:24.812117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.025 [2024-07-14 03:57:24.812133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.025 [2024-07-14 03:57:24.812148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.025 [2024-07-14 03:57:24.812165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.025 [2024-07-14 03:57:24.812180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.025 [2024-07-14 03:57:24.812197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.025 [2024-07-14 03:57:24.812212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.025 [2024-07-14 03:57:24.812229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.025 [2024-07-14 03:57:24.812244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:25:06.025 [2024-07-14 03:57:24.812261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.025 [2024-07-14 03:57:24.812276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.025 [2024-07-14 03:57:24.812294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.025 [2024-07-14 03:57:24.812309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.025 [2024-07-14 03:57:24.812326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.025 [2024-07-14 03:57:24.812341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.025 [2024-07-14 03:57:24.812357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.025 [2024-07-14 03:57:24.812373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.025 [2024-07-14 03:57:24.812393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.025 [2024-07-14 03:57:24.812409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.025 [2024-07-14 03:57:24.812426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.025 [2024-07-14 03:57:24.812442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.025 [2024-07-14 03:57:24.812458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.025 [2024-07-14 03:57:24.812473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.025 [2024-07-14 03:57:24.812490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.025 [2024-07-14 03:57:24.812505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.025 [2024-07-14 03:57:24.812522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.025 [2024-07-14 03:57:24.812538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.025 [2024-07-14 03:57:24.812555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.025 [2024-07-14 03:57:24.812570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.025 
[2024-07-14 03:57:24.812587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:33792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.025 [2024-07-14 03:57:24.812602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.025 [2024-07-14 03:57:24.812619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.025 [2024-07-14 03:57:24.812634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.025 [2024-07-14 03:57:24.812650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:34048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.025 [2024-07-14 03:57:24.812665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.025 [2024-07-14 03:57:24.812681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:34176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.025 [2024-07-14 03:57:24.812696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.025 [2024-07-14 03:57:24.812713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.025 [2024-07-14 03:57:24.812728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.025 [2024-07-14 03:57:24.812745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:34432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.025 [2024-07-14 03:57:24.812759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.025 [2024-07-14 03:57:24.812776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.025 [2024-07-14 03:57:24.812795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.025 [2024-07-14 03:57:24.812813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.025 [2024-07-14 03:57:24.812828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.025 [2024-07-14 03:57:24.812844] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d919d0 is same with the state(5) to be set 00:25:06.025 [2024-07-14 03:57:24.812925] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1d919d0 was disconnected and freed. reset controller. 
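The block above ends with bdev_nvme_disconnected_qpair_cb freeing qpair 0x1d919d0 and scheduling a controller reset: each I/O still queued on that submission queue is printed once by nvme_io_qpair_print_command and immediately followed by its ABORTED - SQ DELETION (00/08) completion. A minimal sketch for tallying that churn from a saved copy of this console output (the file name nvmf-tcp-phy-autotest.log is an assumption, not something produced by this job):
# hypothetical saved copy of this console log
grep -o 'ABORTED - SQ DELETION' nvmf-tcp-phy-autotest.log | wc -l
# split the aborted I/O by opcode as printed by nvme_io_qpair_print_command
grep 'nvme_io_qpair_print_command' nvmf-tcp-phy-autotest.log | grep -o ' READ \| WRITE ' | sort | uniq -c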
00:25:06.025 [2024-07-14 03:57:24.814123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.025 [2024-07-14 03:57:24.814146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.025 [2024-07-14 03:57:24.814168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.025 [2024-07-14 03:57:24.814185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.025 [2024-07-14 03:57:24.814202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.025 [2024-07-14 03:57:24.814217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.025 [2024-07-14 03:57:24.814234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.025 [2024-07-14 03:57:24.814249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.025 [2024-07-14 03:57:24.814266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.025 [2024-07-14 03:57:24.814281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.025 [2024-07-14 03:57:24.814297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.025 [2024-07-14 03:57:24.814312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.025 [2024-07-14 03:57:24.814329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.025 [2024-07-14 03:57:24.814344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.025 [2024-07-14 03:57:24.814361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.025 [2024-07-14 03:57:24.814375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.025 [2024-07-14 03:57:24.814392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.025 [2024-07-14 03:57:24.814407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.025 [2024-07-14 03:57:24.814424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.025 [2024-07-14 03:57:24.814440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.025 [2024-07-14 
03:57:24.814462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.025 [2024-07-14 03:57:24.814477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.025 [2024-07-14 03:57:24.814494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.025 [2024-07-14 03:57:24.814510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.025 [2024-07-14 03:57:24.814526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.025 [2024-07-14 03:57:24.814541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.025 [2024-07-14 03:57:24.814558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.026 [2024-07-14 03:57:24.814572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.026 [2024-07-14 03:57:24.814590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.026 [2024-07-14 03:57:24.814605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.026 [2024-07-14 03:57:24.814622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.026 [2024-07-14 03:57:24.814637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.026 [2024-07-14 03:57:24.814654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.026 [2024-07-14 03:57:24.814669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.026 [2024-07-14 03:57:24.814686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.026 [2024-07-14 03:57:24.814701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.026 [2024-07-14 03:57:24.814718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.026 [2024-07-14 03:57:24.814733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.026 [2024-07-14 03:57:24.814749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.026 [2024-07-14 03:57:24.814764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.026 [2024-07-14 03:57:24.814781] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.026 [2024-07-14 03:57:24.814796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.026 [2024-07-14 03:57:24.814812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.026 [2024-07-14 03:57:24.814827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.026 [2024-07-14 03:57:24.814845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.026 [2024-07-14 03:57:24.814864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.026 [2024-07-14 03:57:24.814888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.026 [2024-07-14 03:57:24.814904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.026 [2024-07-14 03:57:24.814920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.026 [2024-07-14 03:57:24.814935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.026 [2024-07-14 03:57:24.814952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.026 [2024-07-14 03:57:24.814967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.026 [2024-07-14 03:57:24.814983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.026 [2024-07-14 03:57:24.814997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.026 [2024-07-14 03:57:24.815014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.026 [2024-07-14 03:57:24.815029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.026 [2024-07-14 03:57:24.815046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.026 [2024-07-14 03:57:24.815061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.026 [2024-07-14 03:57:24.815077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.026 [2024-07-14 03:57:24.815092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.026 [2024-07-14 03:57:24.815109] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.026 [2024-07-14 03:57:24.815124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.026 [2024-07-14 03:57:24.815141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.026 [2024-07-14 03:57:24.815156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.026 [2024-07-14 03:57:24.815173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.026 [2024-07-14 03:57:24.815188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.026 [2024-07-14 03:57:24.815204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.026 [2024-07-14 03:57:24.815219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.026 [2024-07-14 03:57:24.815236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.026 [2024-07-14 03:57:24.815250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.026 [2024-07-14 03:57:24.815271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.026 [2024-07-14 03:57:24.815287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.026 [2024-07-14 03:57:24.815303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.026 [2024-07-14 03:57:24.815318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.026 [2024-07-14 03:57:24.815334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.026 [2024-07-14 03:57:24.815349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.026 [2024-07-14 03:57:24.815366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.026 [2024-07-14 03:57:24.815380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.026 [2024-07-14 03:57:24.815404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.026 [2024-07-14 03:57:24.815419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.026 [2024-07-14 03:57:24.815437] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.026 [2024-07-14 03:57:24.815452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.026 [2024-07-14 03:57:24.815468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.026 [2024-07-14 03:57:24.815483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.026 [2024-07-14 03:57:24.815499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.026 [2024-07-14 03:57:24.815514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.026 [2024-07-14 03:57:24.815530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.026 [2024-07-14 03:57:24.815545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.026 [2024-07-14 03:57:24.815562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.026 [2024-07-14 03:57:24.815576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.026 [2024-07-14 03:57:24.815593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.026 [2024-07-14 03:57:24.815607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.026 [2024-07-14 03:57:24.815624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.026 [2024-07-14 03:57:24.815639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.026 [2024-07-14 03:57:24.815656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.026 [2024-07-14 03:57:24.815674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.026 [2024-07-14 03:57:24.815691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.026 [2024-07-14 03:57:24.815716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.026 [2024-07-14 03:57:24.815732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.026 [2024-07-14 03:57:24.815747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.026 [2024-07-14 03:57:24.815763] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.026 [2024-07-14 03:57:24.815778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.026 [2024-07-14 03:57:24.815795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.026 [2024-07-14 03:57:24.815809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.026 [2024-07-14 03:57:24.815826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.026 [2024-07-14 03:57:24.815841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.026 [2024-07-14 03:57:24.815857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.026 [2024-07-14 03:57:24.815878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.026 [2024-07-14 03:57:24.815896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.026 [2024-07-14 03:57:24.815911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.026 [2024-07-14 03:57:24.815927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.026 [2024-07-14 03:57:24.815942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.026 [2024-07-14 03:57:24.815958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:33792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.027 [2024-07-14 03:57:24.815973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.027 [2024-07-14 03:57:24.815989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.027 [2024-07-14 03:57:24.816004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.027 [2024-07-14 03:57:24.816020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:34048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.027 [2024-07-14 03:57:24.816035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.027 [2024-07-14 03:57:24.816051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:34176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.027 [2024-07-14 03:57:24.816065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.027 [2024-07-14 03:57:24.816086] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.027 [2024-07-14 03:57:24.816101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.027 [2024-07-14 03:57:24.816118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:34432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.027 [2024-07-14 03:57:24.816133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.027 [2024-07-14 03:57:24.816149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.027 [2024-07-14 03:57:24.816164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.027 [2024-07-14 03:57:24.816180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.027 [2024-07-14 03:57:24.816195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.027 [2024-07-14 03:57:24.816282] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1d94590 was disconnected and freed. reset controller. 00:25:06.027 [2024-07-14 03:57:24.816508] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:25:06.027 [2024-07-14 03:57:24.816554] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d079f0 (9): Bad file descriptor 00:25:06.027 [2024-07-14 03:57:24.816619] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e2b850 (9): Bad file descriptor 00:25:06.027 [2024-07-14 03:57:24.816647] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4950 (9): Bad file descriptor 00:25:06.027 [2024-07-14 03:57:24.816671] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:25:06.027 [2024-07-14 03:57:24.816701] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e2b420 (9): Bad file descriptor 00:25:06.027 [2024-07-14 03:57:24.816731] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:25:06.027 [2024-07-14 03:57:24.816754] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e641f0 (9): Bad file descriptor 00:25:06.027 [2024-07-14 03:57:24.816777] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
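At this point the second qpair (0x1d94590) has also been disconnected and freed, and the reset path starts hitting sockets that are already gone: nvme_tcp_qpair_process_completions reports "Failed to flush tqpair=... (9): Bad file descriptor" for several queue pairs, and bdev_nvme_failover_ctrlr_unsafe skips failover because one is already in progress. A hedged one-liner to see which tqpair pointers reached that state, again assuming a saved copy of this log named nvmf-tcp-phy-autotest.log:
# count 'Failed to flush' occurrences per tqpair pointer
grep -o 'Failed to flush tqpair=0x[0-9a-f]*' nvmf-tcp-phy-autotest.log | sort | uniq -c | sort -rn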
00:25:06.027 [2024-07-14 03:57:24.823364] nvme_tcp.c:1159:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:25:06.027 [2024-07-14 03:57:24.823463] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:25:06.027 [2024-07-14 03:57:24.823612] nvme_tcp.c:1159:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:25:06.027 [2024-07-14 03:57:24.824326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.027 [2024-07-14 03:57:24.824356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.027 [2024-07-14 03:57:24.824390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.027 [2024-07-14 03:57:24.824407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.027 [2024-07-14 03:57:24.824426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.027 [2024-07-14 03:57:24.824442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.027 [2024-07-14 03:57:24.824468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.027 [2024-07-14 03:57:24.824485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.027 [2024-07-14 03:57:24.824501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.027 [2024-07-14 03:57:24.824517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.027 [2024-07-14 03:57:24.824533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.027 [2024-07-14 03:57:24.824549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.027 [2024-07-14 03:57:24.824565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.027 [2024-07-14 03:57:24.824581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.027 [2024-07-14 03:57:24.824597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.027 [2024-07-14 03:57:24.824613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.027 [2024-07-14 03:57:24.824629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.027 [2024-07-14 03:57:24.824645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:25:06.027 [2024-07-14 03:57:24.824662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.027 [2024-07-14 03:57:24.824677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.027 [2024-07-14 03:57:24.824693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.027 [2024-07-14 03:57:24.824709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.027 [2024-07-14 03:57:24.824726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.027 [2024-07-14 03:57:24.824741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.027 [2024-07-14 03:57:24.824758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.027 [2024-07-14 03:57:24.824773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.027 [2024-07-14 03:57:24.824790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.027 [2024-07-14 03:57:24.824805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.027 [2024-07-14 03:57:24.824821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.027 [2024-07-14 03:57:24.824837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.027 [2024-07-14 03:57:24.824854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.027 [2024-07-14 03:57:24.824878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.027 [2024-07-14 03:57:24.824900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.027 [2024-07-14 03:57:24.824917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.027 [2024-07-14 03:57:24.824933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.027 [2024-07-14 03:57:24.824949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.027 [2024-07-14 03:57:24.824966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.027 [2024-07-14 03:57:24.824981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.027 [2024-07-14 
03:57:24.824998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.027 [2024-07-14 03:57:24.825013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.027 [2024-07-14 03:57:24.825029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.027 [2024-07-14 03:57:24.825045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.027 [2024-07-14 03:57:24.825061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.027 [2024-07-14 03:57:24.825077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.027 [2024-07-14 03:57:24.825093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.027 [2024-07-14 03:57:24.825108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.027 [2024-07-14 03:57:24.825124] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2d4dbb0 is same with the state(5) to be set 00:25:06.028 [2024-07-14 03:57:24.825619] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2d4dbb0 was disconnected and freed. reset controller. 00:25:06.028 [2024-07-14 03:57:24.825682] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:25:06.028 [2024-07-14 03:57:24.825709] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:25:06.028 [2024-07-14 03:57:24.825728] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:25:06.028 [2024-07-14 03:57:24.825956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.028 [2024-07-14 03:57:24.826127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.028 [2024-07-14 03:57:24.826154] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d079f0 with addr=10.0.0.2, port=4420 00:25:06.028 [2024-07-14 03:57:24.826171] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d079f0 is same with the state(5) to be set 00:25:06.028 [2024-07-14 03:57:24.826324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.028 [2024-07-14 03:57:24.826470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.028 [2024-07-14 03:57:24.826496] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e2b850 with addr=10.0.0.2, port=4420 00:25:06.028 [2024-07-14 03:57:24.826513] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e2b850 is same with the state(5) to be set 00:25:06.028 [2024-07-14 03:57:24.826574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.028 [2024-07-14 03:57:24.826597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.028 [2024-07-14 03:57:24.826620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.028 [2024-07-14 03:57:24.826636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.028 [2024-07-14 03:57:24.826653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.028 [2024-07-14 03:57:24.826669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.028 [2024-07-14 03:57:24.826685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.028 [2024-07-14 03:57:24.826700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.028 [2024-07-14 03:57:24.826717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.028 [2024-07-14 03:57:24.826732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.028 [2024-07-14 03:57:24.826749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.028 [2024-07-14 03:57:24.826764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.028 [2024-07-14 03:57:24.826780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.028 [2024-07-14 03:57:24.826796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.028 [2024-07-14 03:57:24.826812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.028 [2024-07-14 03:57:24.826827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.028 [2024-07-14 03:57:24.826843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.028 [2024-07-14 03:57:24.826858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.028 [2024-07-14 03:57:24.826881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.028 [2024-07-14 03:57:24.826897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.028 [2024-07-14 03:57:24.826914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.028 [2024-07-14 03:57:24.826929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.028 [2024-07-14 03:57:24.826946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.028 [2024-07-14 03:57:24.826961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.028 [2024-07-14 03:57:24.826977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.028 [2024-07-14 03:57:24.826997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.028 [2024-07-14 03:57:24.827014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.028 [2024-07-14 03:57:24.827029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.028 [2024-07-14 03:57:24.827046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.028 [2024-07-14 03:57:24.827061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.028 [2024-07-14 03:57:24.827078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.028 [2024-07-14 03:57:24.827092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.028 [2024-07-14 03:57:24.827109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.028 [2024-07-14 03:57:24.827123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.028 [2024-07-14 03:57:24.827140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.028 [2024-07-14 03:57:24.827155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.028 [2024-07-14 03:57:24.827171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.028 [2024-07-14 03:57:24.827185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.028 [2024-07-14 03:57:24.827202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.028 [2024-07-14 03:57:24.827217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.028 [2024-07-14 03:57:24.827233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.028 [2024-07-14 03:57:24.827249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.028 [2024-07-14 03:57:24.827265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.028 [2024-07-14 03:57:24.827280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.028 [2024-07-14 03:57:24.827297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.028 [2024-07-14 03:57:24.827312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.028 [2024-07-14 03:57:24.827329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.028 [2024-07-14 03:57:24.827344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.028 [2024-07-14 03:57:24.827360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.028 [2024-07-14 03:57:24.827375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.028 [2024-07-14 03:57:24.827395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.028 [2024-07-14 03:57:24.827410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.028 [2024-07-14 03:57:24.827427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.028 [2024-07-14 03:57:24.827443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.028 [2024-07-14 03:57:24.827459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.028 [2024-07-14 03:57:24.827474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.028 [2024-07-14 03:57:24.827490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.028 [2024-07-14 03:57:24.827505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.028 [2024-07-14 03:57:24.827522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.028 [2024-07-14 03:57:24.827537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.028 [2024-07-14 03:57:24.827553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.028 [2024-07-14 03:57:24.827568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:25:06.028 [2024-07-14 03:57:24.827584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.028 [2024-07-14 03:57:24.827599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.028 [2024-07-14 03:57:24.827616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.028 [2024-07-14 03:57:24.827631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.028 [2024-07-14 03:57:24.827647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.028 [2024-07-14 03:57:24.827662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.028 [2024-07-14 03:57:24.827680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.028 [2024-07-14 03:57:24.827695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.028 [2024-07-14 03:57:24.827711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.028 [2024-07-14 03:57:24.827727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.028 [2024-07-14 03:57:24.827743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.028 [2024-07-14 03:57:24.827758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.028 [2024-07-14 03:57:24.827774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.028 [2024-07-14 03:57:24.827793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.029 [2024-07-14 03:57:24.827811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.029 [2024-07-14 03:57:24.827826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.029 [2024-07-14 03:57:24.827844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.029 [2024-07-14 03:57:24.827859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.029 [2024-07-14 03:57:24.827886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.029 [2024-07-14 03:57:24.827917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
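The reconnect attempts for the freshly reset controllers are also visible above: posix_sock_create logs "connect() failed, errno = 111" (ECONNREFUSED on Linux) while nvme_tcp_qpair_connect_sock retries the listener at 10.0.0.2:4420 for tqpairs 0x1d079f0 and 0x1e2b850. A small sketch to group the refused reconnects by tqpair and target, under the same assumption about the saved log file name:
# summarize which sockets refused the reconnect attempts
grep -o 'sock connection error of tqpair=0x[0-9a-f]* with addr=[0-9.]*, port=[0-9]*' nvmf-tcp-phy-autotest.log | sort | uniq -c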
00:25:06.029 [2024-07-14 03:57:24.827937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:06.029 [2024-07-14 03:57:24.827953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... similar NOTICE pairs elided: every remaining queued READ/WRITE command on sqid:1 was printed and completed the same way, ABORTED - SQ DELETION (00/08), while the submission queue was being deleted ...]
00:25:06.029 [2024-07-14 03:57:24.828674] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce03a0 is same with the state(5) to be set
00:25:06.029 [2024-07-14 03:57:24.832137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:06.029 [2024-07-14 03:57:24.832172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... similar NOTICE pairs elided for a second queue: the same ABORTED - SQ DELETION (00/08) report repeats for each outstanding command ...]
00:25:06.030 [2024-07-14 03:57:24.833970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:06.030 [2024-07-14 03:57:24.833985] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.030 [2024-07-14 03:57:24.834001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.030 [2024-07-14 03:57:24.834017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.030 [2024-07-14 03:57:24.834033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.030 [2024-07-14 03:57:24.834048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.031 [2024-07-14 03:57:24.834064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.031 [2024-07-14 03:57:24.834079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.031 [2024-07-14 03:57:24.834096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.031 [2024-07-14 03:57:24.834111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.031 [2024-07-14 03:57:24.834130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.031 [2024-07-14 03:57:24.834146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.031 [2024-07-14 03:57:24.834162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.031 [2024-07-14 03:57:24.834177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.031 [2024-07-14 03:57:24.834193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.031 [2024-07-14 03:57:24.834208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.031 [2024-07-14 03:57:24.834223] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2ef0700 is same with the state(5) to be set 00:25:06.031 [2024-07-14 03:57:24.836371] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:25:06.031 [2024-07-14 03:57:24.836406] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:25:06.031 [2024-07-14 03:57:24.836443] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:06.031 [2024-07-14 03:57:24.836785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.031 [2024-07-14 03:57:24.836970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.031 [2024-07-14 03:57:24.836999] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf3530 
with addr=10.0.0.2, port=4420 00:25:06.031 [2024-07-14 03:57:24.837017] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf3530 is same with the state(5) to be set 00:25:06.031 [2024-07-14 03:57:24.837170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.031 [2024-07-14 03:57:24.837342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.031 [2024-07-14 03:57:24.837367] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d120a0 with addr=10.0.0.2, port=4420 00:25:06.031 [2024-07-14 03:57:24.837383] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d120a0 is same with the state(5) to be set 00:25:06.031 [2024-07-14 03:57:24.837545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.031 [2024-07-14 03:57:24.837698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.031 [2024-07-14 03:57:24.837723] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce8ba0 with addr=10.0.0.2, port=4420 00:25:06.031 [2024-07-14 03:57:24.837740] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce8ba0 is same with the state(5) to be set 00:25:06.031 [2024-07-14 03:57:24.837767] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d079f0 (9): Bad file descriptor 00:25:06.031 [2024-07-14 03:57:24.837789] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e2b850 (9): Bad file descriptor 00:25:06.031 [2024-07-14 03:57:24.837819] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:25:06.031 [2024-07-14 03:57:24.837850] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:25:06.031 [2024-07-14 03:57:24.837902] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:25:06.031 [2024-07-14 03:57:24.837926] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:25:06.031 [2024-07-14 03:57:24.837947] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce8ba0 (9): Bad file descriptor
00:25:06.031 [2024-07-14 03:57:24.837979] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d120a0 (9): Bad file descriptor
00:25:06.031 [2024-07-14 03:57:24.838004] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf3530 (9): Bad file descriptor
00:25:06.031 task offset: 29184 on job bdev=Nvme5n1 fails
00:25:06.031
00:25:06.031 Latency(us)
00:25:06.031 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:06.031 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:06.031 Job: Nvme1n1 ended in about 0.62 seconds with error
00:25:06.031 Verification LBA range: start 0x0 length 0x400
00:25:06.031 Nvme1n1 : 0.62 263.46 16.47 102.81 0.00 173406.81 100973.99 166995.44
00:25:06.031 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:06.031 Job: Nvme2n1 ended in about 0.61 seconds with error
00:25:06.031 Verification LBA range: start 0x0 length 0x400
00:25:06.031 Nvme2n1 : 0.61 340.60 21.29 104.80 0.00 140776.75 36311.80 161558.38
00:25:06.031 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:06.031 Job: Nvme3n1 ended in about 0.61 seconds with error
00:25:06.031 Verification LBA range: start 0x0 length 0x400
00:25:06.031 Nvme3n1 : 0.61 339.99 21.25 104.61 0.00 139266.14 69905.07 116508.44
00:25:06.031 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:06.031 Job: Nvme4n1 ended in about 0.61 seconds with error
00:25:06.031 Verification LBA range: start 0x0 length 0x400
00:25:06.031 Nvme4n1 : 0.61 339.38 21.21 104.43 0.00 137719.83 63302.92 114178.28
00:25:06.031 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:06.031 Job: Nvme5n1 ended in about 0.60 seconds with error
00:25:06.031 Verification LBA range: start 0x0 length 0x400
00:25:06.031 Nvme5n1 : 0.60 344.71 21.54 106.06 0.00 133754.84 46409.20 114178.28
00:25:06.031 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:06.031 Job: Nvme6n1 ended in about 0.61 seconds with error
00:25:06.031 Verification LBA range: start 0x0 length 0x400
00:25:06.031 Nvme6n1 : 0.61 338.75 21.17 104.23 0.00 134483.73 29127.11 120392.06
00:25:06.031 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:06.031 Job: Nvme7n1 ended in about 0.61 seconds with error
00:25:06.031 Verification LBA range: start 0x0 length 0x400
00:25:06.031 Nvme7n1 : 0.61 342.80 21.43 105.48 0.00 130974.91 40195.41 116508.44
00:25:06.031 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:06.031 Job: Nvme8n1 ended in about 0.62 seconds with error
00:25:06.031 Verification LBA range: start 0x0 length 0x400
00:25:06.031 Nvme8n1 : 0.62 338.14 21.13 104.04 0.00 131237.09 20194.80 132819.63
00:25:06.031 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:06.031 Job: Nvme9n1 ended in about 0.62 seconds with error
00:25:06.031 Verification LBA range: start 0x0 length 0x400
00:25:06.031 Nvme9n1 : 0.62 328.12 20.51 36.81 0.00 155659.71 8155.59 134373.07
00:25:06.031 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:06.031 Job: Nvme10n1 ended in about 0.63 seconds with error
00:25:06.031 Verification LBA range: start 0x0 length 0x400
00:25:06.031 Nvme10n1 : 0.63
261.15 16.32 101.91 0.00 155929.69 98255.45 121945.51 00:25:06.031 =================================================================================================================== 00:25:06.031 Total : 3237.10 202.32 975.19 0.00 142385.30 8155.59 166995.44 00:25:06.031 [2024-07-14 03:57:24.865451] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:25:06.031 [2024-07-14 03:57:24.865544] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:25:06.031 [2024-07-14 03:57:24.865578] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:25:06.031 [2024-07-14 03:57:24.865945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.031 [2024-07-14 03:57:24.866133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.031 [2024-07-14 03:57:24.866161] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e641f0 with addr=10.0.0.2, port=4420 00:25:06.031 [2024-07-14 03:57:24.866182] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e641f0 is same with the state(5) to be set 00:25:06.031 [2024-07-14 03:57:24.866326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.031 [2024-07-14 03:57:24.866479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.031 [2024-07-14 03:57:24.866504] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e2b420 with addr=10.0.0.2, port=4420 00:25:06.031 [2024-07-14 03:57:24.866521] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e2b420 is same with the state(5) to be set 00:25:06.031 [2024-07-14 03:57:24.866678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.031 [2024-07-14 03:57:24.866827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.031 [2024-07-14 03:57:24.866853] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5e40 with addr=10.0.0.2, port=4420 00:25:06.031 [2024-07-14 03:57:24.866875] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5e40 is same with the state(5) to be set 00:25:06.031 [2024-07-14 03:57:24.866900] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:25:06.031 [2024-07-14 03:57:24.866915] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:25:06.031 [2024-07-14 03:57:24.866933] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:25:06.031 [2024-07-14 03:57:24.866960] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:25:06.031 [2024-07-14 03:57:24.866976] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:25:06.031 [2024-07-14 03:57:24.866990] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:25:06.031 [2024-07-14 03:57:24.867899] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:06.031 [2024-07-14 03:57:24.867926] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
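For quick triage, the per-device IOPS and Fail/s columns can be pulled out of a bdevperf summary like the one above with a one-liner. This is a rough sketch only: it assumes the row layout shown above (elapsed-time stamp, device name, ':', runtime, IOPS, MiB/s, Fail/s, ...), and the log file name is a placeholder.

awk '$2 ~ /^Nvme[0-9]+n1$/ && $3 == ":" { printf "%-9s IOPS=%-8s Fail/s=%s\n", $2, $5, $7 }' nvmf-tcp-phy-autotest.log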
00:25:06.031 [2024-07-14 03:57:24.868112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.031 [2024-07-14 03:57:24.868271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.031 [2024-07-14 03:57:24.868297] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c42eb0 with addr=10.0.0.2, port=4420 00:25:06.031 [2024-07-14 03:57:24.868315] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c42eb0 is same with the state(5) to be set 00:25:06.031 [2024-07-14 03:57:24.868461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.031 [2024-07-14 03:57:24.868616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.031 [2024-07-14 03:57:24.868642] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cf4950 with addr=10.0.0.2, port=4420 00:25:06.031 [2024-07-14 03:57:24.868659] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf4950 is same with the state(5) to be set 00:25:06.031 [2024-07-14 03:57:24.868686] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e641f0 (9): Bad file descriptor 00:25:06.031 [2024-07-14 03:57:24.868709] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e2b420 (9): Bad file descriptor 00:25:06.031 [2024-07-14 03:57:24.868728] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5e40 (9): Bad file descriptor 00:25:06.031 [2024-07-14 03:57:24.868751] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:25:06.032 [2024-07-14 03:57:24.868766] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:25:06.032 [2024-07-14 03:57:24.868780] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:25:06.032 [2024-07-14 03:57:24.868800] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:25:06.032 [2024-07-14 03:57:24.868816] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:25:06.032 [2024-07-14 03:57:24.868830] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:25:06.032 [2024-07-14 03:57:24.868853] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:25:06.032 [2024-07-14 03:57:24.868894] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:25:06.032 [2024-07-14 03:57:24.868911] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:25:06.032 [2024-07-14 03:57:24.868975] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:25:06.032 [2024-07-14 03:57:24.868999] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:25:06.032 [2024-07-14 03:57:24.869020] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:25:06.032 [2024-07-14 03:57:24.869039] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:25:06.032 [2024-07-14 03:57:24.869058] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:25:06.032 [2024-07-14 03:57:24.869077] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:25:06.032 [2024-07-14 03:57:24.869147] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:06.032 [2024-07-14 03:57:24.869168] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:06.032 [2024-07-14 03:57:24.869182] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:06.032 [2024-07-14 03:57:24.869207] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c42eb0 (9): Bad file descriptor 00:25:06.032 [2024-07-14 03:57:24.869228] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf4950 (9): Bad file descriptor 00:25:06.032 [2024-07-14 03:57:24.869245] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:25:06.032 [2024-07-14 03:57:24.869259] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:25:06.032 [2024-07-14 03:57:24.869274] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:25:06.032 [2024-07-14 03:57:24.869292] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:25:06.032 [2024-07-14 03:57:24.869307] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:25:06.032 [2024-07-14 03:57:24.869321] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:25:06.032 [2024-07-14 03:57:24.869339] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:06.032 [2024-07-14 03:57:24.869354] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:06.032 [2024-07-14 03:57:24.869368] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:06.032 [2024-07-14 03:57:24.869435] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:25:06.032 [2024-07-14 03:57:24.869464] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:25:06.032 [2024-07-14 03:57:24.869483] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:06.032 [2024-07-14 03:57:24.869498] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:06.032 [2024-07-14 03:57:24.869510] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:06.032 [2024-07-14 03:57:24.869538] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:25:06.032 [2024-07-14 03:57:24.869555] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:25:06.032 [2024-07-14 03:57:24.869570] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:25:06.032 [2024-07-14 03:57:24.869587] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:25:06.032 [2024-07-14 03:57:24.869601] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:25:06.032 [2024-07-14 03:57:24.869615] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:25:06.032 [2024-07-14 03:57:24.869655] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:06.032 [2024-07-14 03:57:24.869675] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:06.032 [2024-07-14 03:57:24.869863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.032 [2024-07-14 03:57:24.870026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.032 [2024-07-14 03:57:24.870052] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e2b850 with addr=10.0.0.2, port=4420 00:25:06.032 [2024-07-14 03:57:24.870069] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e2b850 is same with the state(5) to be set 00:25:06.032 [2024-07-14 03:57:24.870366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.032 [2024-07-14 03:57:24.870519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.032 [2024-07-14 03:57:24.870545] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d079f0 with addr=10.0.0.2, port=4420 00:25:06.032 [2024-07-14 03:57:24.870562] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d079f0 is same with the state(5) to be set 00:25:06.032 [2024-07-14 03:57:24.870606] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e2b850 (9): Bad file descriptor 00:25:06.032 [2024-07-14 03:57:24.870631] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d079f0 (9): Bad file descriptor 00:25:06.032 [2024-07-14 03:57:24.870669] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:25:06.032 [2024-07-14 03:57:24.870690] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:25:06.032 [2024-07-14 03:57:24.870705] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:25:06.032 [2024-07-14 03:57:24.870722] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:25:06.032 [2024-07-14 03:57:24.870737] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:25:06.032 [2024-07-14 03:57:24.870750] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 
00:25:06.032 [2024-07-14 03:57:24.870787] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:06.032 [2024-07-14 03:57:24.870805] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:06.599 03:57:25 -- target/shutdown.sh@135 -- # nvmfpid= 00:25:06.599 03:57:25 -- target/shutdown.sh@138 -- # sleep 1 00:25:07.537 03:57:26 -- target/shutdown.sh@141 -- # kill -9 2456925 00:25:07.537 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 141: kill: (2456925) - No such process 00:25:07.537 03:57:26 -- target/shutdown.sh@141 -- # true 00:25:07.537 03:57:26 -- target/shutdown.sh@143 -- # stoptarget 00:25:07.537 03:57:26 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:25:07.537 03:57:26 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:25:07.537 03:57:26 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:25:07.537 03:57:26 -- target/shutdown.sh@45 -- # nvmftestfini 00:25:07.537 03:57:26 -- nvmf/common.sh@476 -- # nvmfcleanup 00:25:07.537 03:57:26 -- nvmf/common.sh@116 -- # sync 00:25:07.537 03:57:26 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:25:07.537 03:57:26 -- nvmf/common.sh@119 -- # set +e 00:25:07.537 03:57:26 -- nvmf/common.sh@120 -- # for i in {1..20} 00:25:07.537 03:57:26 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:25:07.537 rmmod nvme_tcp 00:25:07.537 rmmod nvme_fabrics 00:25:07.537 rmmod nvme_keyring 00:25:07.537 03:57:26 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:25:07.537 03:57:26 -- nvmf/common.sh@123 -- # set -e 00:25:07.537 03:57:26 -- nvmf/common.sh@124 -- # return 0 00:25:07.537 03:57:26 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:25:07.537 03:57:26 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:25:07.537 03:57:26 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:25:07.537 03:57:26 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:25:07.537 03:57:26 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:07.537 03:57:26 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:25:07.537 03:57:26 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:07.537 03:57:26 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:07.537 03:57:26 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:09.439 03:57:28 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:25:09.439 00:25:09.439 real 0m7.576s 00:25:09.439 user 0m18.705s 00:25:09.439 sys 0m1.358s 00:25:09.439 03:57:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:09.439 03:57:28 -- common/autotest_common.sh@10 -- # set +x 00:25:09.439 ************************************ 00:25:09.439 END TEST nvmf_shutdown_tc3 00:25:09.439 ************************************ 00:25:09.439 03:57:28 -- target/shutdown.sh@150 -- # trap - SIGINT SIGTERM EXIT 00:25:09.439 00:25:09.439 real 0m28.559s 00:25:09.439 user 1m22.634s 00:25:09.439 sys 0m6.403s 00:25:09.439 03:57:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:09.439 03:57:28 -- common/autotest_common.sh@10 -- # set +x 00:25:09.439 ************************************ 00:25:09.439 END TEST nvmf_shutdown 00:25:09.439 ************************************ 00:25:09.697 03:57:28 -- nvmf/nvmf.sh@86 -- # timing_exit target 00:25:09.697 03:57:28 -- common/autotest_common.sh@718 -- # xtrace_disable 00:25:09.697 03:57:28 -- 
common/autotest_common.sh@10 -- # set +x 00:25:09.697 03:57:28 -- nvmf/nvmf.sh@88 -- # timing_enter host 00:25:09.697 03:57:28 -- common/autotest_common.sh@712 -- # xtrace_disable 00:25:09.697 03:57:28 -- common/autotest_common.sh@10 -- # set +x 00:25:09.697 03:57:28 -- nvmf/nvmf.sh@90 -- # [[ 0 -eq 0 ]] 00:25:09.697 03:57:28 -- nvmf/nvmf.sh@91 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:25:09.697 03:57:28 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:25:09.697 03:57:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:09.697 03:57:28 -- common/autotest_common.sh@10 -- # set +x 00:25:09.697 ************************************ 00:25:09.697 START TEST nvmf_multicontroller 00:25:09.697 ************************************ 00:25:09.697 03:57:28 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:25:09.697 * Looking for test storage... 00:25:09.697 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:09.697 03:57:28 -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:09.697 03:57:28 -- nvmf/common.sh@7 -- # uname -s 00:25:09.697 03:57:28 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:09.697 03:57:28 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:09.697 03:57:28 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:09.697 03:57:28 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:09.697 03:57:28 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:09.697 03:57:28 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:09.697 03:57:28 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:09.697 03:57:28 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:09.697 03:57:28 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:09.697 03:57:28 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:09.697 03:57:28 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:09.697 03:57:28 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:09.697 03:57:28 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:09.697 03:57:28 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:09.697 03:57:28 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:09.697 03:57:28 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:09.697 03:57:28 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:09.697 03:57:28 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:09.698 03:57:28 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:09.698 03:57:28 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:09.698 03:57:28 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:09.698 03:57:28 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:09.698 03:57:28 -- paths/export.sh@5 -- # export PATH 00:25:09.698 03:57:28 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:09.698 03:57:28 -- nvmf/common.sh@46 -- # : 0 00:25:09.698 03:57:28 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:25:09.698 03:57:28 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:25:09.698 03:57:28 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:25:09.698 03:57:28 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:09.698 03:57:28 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:09.698 03:57:28 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:25:09.698 03:57:28 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:25:09.698 03:57:28 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:25:09.698 03:57:28 -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:09.698 03:57:28 -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:09.698 03:57:28 -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:25:09.698 03:57:28 -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:25:09.698 03:57:28 -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:09.698 03:57:28 -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:25:09.698 03:57:28 -- host/multicontroller.sh@23 -- # nvmftestinit 00:25:09.698 03:57:28 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:25:09.698 03:57:28 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:09.698 03:57:28 -- nvmf/common.sh@436 -- # prepare_net_devs 00:25:09.698 03:57:28 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:25:09.698 03:57:28 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:25:09.698 03:57:28 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:09.698 03:57:28 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:09.698 03:57:28 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
00:25:09.698 03:57:28 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:25:09.698 03:57:28 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:25:09.698 03:57:28 -- nvmf/common.sh@284 -- # xtrace_disable 00:25:09.698 03:57:28 -- common/autotest_common.sh@10 -- # set +x 00:25:11.597 03:57:30 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:25:11.597 03:57:30 -- nvmf/common.sh@290 -- # pci_devs=() 00:25:11.597 03:57:30 -- nvmf/common.sh@290 -- # local -a pci_devs 00:25:11.597 03:57:30 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:25:11.597 03:57:30 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:25:11.597 03:57:30 -- nvmf/common.sh@292 -- # pci_drivers=() 00:25:11.597 03:57:30 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:25:11.597 03:57:30 -- nvmf/common.sh@294 -- # net_devs=() 00:25:11.597 03:57:30 -- nvmf/common.sh@294 -- # local -ga net_devs 00:25:11.597 03:57:30 -- nvmf/common.sh@295 -- # e810=() 00:25:11.597 03:57:30 -- nvmf/common.sh@295 -- # local -ga e810 00:25:11.597 03:57:30 -- nvmf/common.sh@296 -- # x722=() 00:25:11.597 03:57:30 -- nvmf/common.sh@296 -- # local -ga x722 00:25:11.597 03:57:30 -- nvmf/common.sh@297 -- # mlx=() 00:25:11.597 03:57:30 -- nvmf/common.sh@297 -- # local -ga mlx 00:25:11.597 03:57:30 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:11.597 03:57:30 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:11.597 03:57:30 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:11.597 03:57:30 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:11.597 03:57:30 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:11.597 03:57:30 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:11.597 03:57:30 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:11.597 03:57:30 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:11.597 03:57:30 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:11.597 03:57:30 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:11.597 03:57:30 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:11.597 03:57:30 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:25:11.597 03:57:30 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:25:11.597 03:57:30 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:25:11.597 03:57:30 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:25:11.597 03:57:30 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:25:11.597 03:57:30 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:25:11.597 03:57:30 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:11.597 03:57:30 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:11.597 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:11.597 03:57:30 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:11.597 03:57:30 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:11.597 03:57:30 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:11.597 03:57:30 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:11.597 03:57:30 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:11.597 03:57:30 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:11.597 03:57:30 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:11.597 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:11.597 03:57:30 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 
00:25:11.597 03:57:30 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:11.597 03:57:30 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:11.597 03:57:30 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:11.597 03:57:30 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:11.597 03:57:30 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:25:11.597 03:57:30 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:25:11.597 03:57:30 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:25:11.597 03:57:30 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:11.597 03:57:30 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:11.597 03:57:30 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:11.597 03:57:30 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:11.597 03:57:30 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:11.597 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:11.597 03:57:30 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:11.597 03:57:30 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:11.597 03:57:30 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:11.597 03:57:30 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:11.597 03:57:30 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:11.597 03:57:30 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:11.597 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:11.597 03:57:30 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:11.597 03:57:30 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:25:11.597 03:57:30 -- nvmf/common.sh@402 -- # is_hw=yes 00:25:11.597 03:57:30 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:25:11.597 03:57:30 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:25:11.597 03:57:30 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:25:11.597 03:57:30 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:11.597 03:57:30 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:11.597 03:57:30 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:11.597 03:57:30 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:25:11.597 03:57:30 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:11.597 03:57:30 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:11.597 03:57:30 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:25:11.597 03:57:30 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:11.597 03:57:30 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:11.597 03:57:30 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:25:11.597 03:57:30 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:25:11.597 03:57:30 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:25:11.597 03:57:30 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:11.597 03:57:30 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:11.597 03:57:30 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:11.597 03:57:30 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:25:11.597 03:57:30 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:11.856 03:57:30 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:11.856 03:57:30 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp 
--dport 4420 -j ACCEPT 00:25:11.856 03:57:30 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:25:11.856 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:11.856 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.189 ms 00:25:11.856 00:25:11.856 --- 10.0.0.2 ping statistics --- 00:25:11.856 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:11.856 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:25:11.856 03:57:30 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:11.856 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:11.856 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.137 ms 00:25:11.856 00:25:11.856 --- 10.0.0.1 ping statistics --- 00:25:11.856 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:11.856 rtt min/avg/max/mdev = 0.137/0.137/0.137/0.000 ms 00:25:11.856 03:57:30 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:11.856 03:57:30 -- nvmf/common.sh@410 -- # return 0 00:25:11.856 03:57:30 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:25:11.856 03:57:30 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:11.856 03:57:30 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:25:11.856 03:57:30 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:25:11.856 03:57:30 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:11.856 03:57:30 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:25:11.856 03:57:30 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:25:11.856 03:57:30 -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:25:11.856 03:57:30 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:25:11.856 03:57:30 -- common/autotest_common.sh@712 -- # xtrace_disable 00:25:11.856 03:57:30 -- common/autotest_common.sh@10 -- # set +x 00:25:11.856 03:57:30 -- nvmf/common.sh@469 -- # nvmfpid=2459345 00:25:11.856 03:57:30 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:25:11.857 03:57:30 -- nvmf/common.sh@470 -- # waitforlisten 2459345 00:25:11.857 03:57:30 -- common/autotest_common.sh@819 -- # '[' -z 2459345 ']' 00:25:11.857 03:57:30 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:11.857 03:57:30 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:11.857 03:57:30 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:11.857 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:11.857 03:57:30 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:11.857 03:57:30 -- common/autotest_common.sh@10 -- # set +x 00:25:11.857 [2024-07-14 03:57:30.658014] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:25:11.857 [2024-07-14 03:57:30.658090] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:11.857 EAL: No free 2048 kB hugepages reported on node 1 00:25:11.857 [2024-07-14 03:57:30.723633] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:25:12.116 [2024-07-14 03:57:30.812371] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:12.116 [2024-07-14 03:57:30.812500] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:25:12.116 [2024-07-14 03:57:30.812516] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:12.116 [2024-07-14 03:57:30.812528] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:12.116 [2024-07-14 03:57:30.812610] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:12.116 [2024-07-14 03:57:30.812673] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:12.116 [2024-07-14 03:57:30.812677] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:12.694 03:57:31 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:12.694 03:57:31 -- common/autotest_common.sh@852 -- # return 0 00:25:12.694 03:57:31 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:25:12.694 03:57:31 -- common/autotest_common.sh@718 -- # xtrace_disable 00:25:12.694 03:57:31 -- common/autotest_common.sh@10 -- # set +x 00:25:12.953 03:57:31 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:12.953 03:57:31 -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:12.953 03:57:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:12.953 03:57:31 -- common/autotest_common.sh@10 -- # set +x 00:25:12.953 [2024-07-14 03:57:31.638646] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:12.953 03:57:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:12.953 03:57:31 -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:12.953 03:57:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:12.953 03:57:31 -- common/autotest_common.sh@10 -- # set +x 00:25:12.953 Malloc0 00:25:12.953 03:57:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:12.953 03:57:31 -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:12.953 03:57:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:12.953 03:57:31 -- common/autotest_common.sh@10 -- # set +x 00:25:12.953 03:57:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:12.953 03:57:31 -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:12.953 03:57:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:12.953 03:57:31 -- common/autotest_common.sh@10 -- # set +x 00:25:12.953 03:57:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:12.953 03:57:31 -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:12.953 03:57:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:12.953 03:57:31 -- common/autotest_common.sh@10 -- # set +x 00:25:12.953 [2024-07-14 03:57:31.695489] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:12.953 03:57:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:12.953 03:57:31 -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:12.953 03:57:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:12.953 03:57:31 -- common/autotest_common.sh@10 -- # set +x 00:25:12.953 [2024-07-14 03:57:31.703407] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:12.953 03:57:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 
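Editor's note: the rpc_cmd calls traced above build the first target-side subsystem (transport, Malloc0 bdev, cnode1, namespace, listeners on 4420 and 4421). For readers reproducing this outside the harness, a minimal sketch of the equivalent standalone invocations is below. It assumes rpc_cmd is the test suite's thin wrapper around scripts/rpc.py talking to /var/tmp/spdk.sock inside the cvl_0_0_ns_spdk namespace; the arguments themselves are copied verbatim from the trace.

    # sketch: rebuild the cnode1 configuration by hand (assumptions noted above)
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192     # same transport options the test passes
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0        # 64 MB malloc bdev, 512-byte blocks
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421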
00:25:12.953 03:57:31 -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:25:12.953 03:57:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:12.953 03:57:31 -- common/autotest_common.sh@10 -- # set +x 00:25:12.953 Malloc1 00:25:12.953 03:57:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:12.953 03:57:31 -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:25:12.953 03:57:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:12.953 03:57:31 -- common/autotest_common.sh@10 -- # set +x 00:25:12.953 03:57:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:12.953 03:57:31 -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:25:12.953 03:57:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:12.953 03:57:31 -- common/autotest_common.sh@10 -- # set +x 00:25:12.953 03:57:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:12.953 03:57:31 -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:25:12.953 03:57:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:12.953 03:57:31 -- common/autotest_common.sh@10 -- # set +x 00:25:12.953 03:57:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:12.953 03:57:31 -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:25:12.953 03:57:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:12.953 03:57:31 -- common/autotest_common.sh@10 -- # set +x 00:25:12.953 03:57:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:12.953 03:57:31 -- host/multicontroller.sh@44 -- # bdevperf_pid=2459503 00:25:12.953 03:57:31 -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:25:12.953 03:57:31 -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:12.953 03:57:31 -- host/multicontroller.sh@47 -- # waitforlisten 2459503 /var/tmp/bdevperf.sock 00:25:12.953 03:57:31 -- common/autotest_common.sh@819 -- # '[' -z 2459503 ']' 00:25:12.953 03:57:31 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:12.953 03:57:31 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:12.953 03:57:31 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:12.953 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
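Editor's note: the bdevperf process launched above runs with -z, so it idles until it is configured over its own RPC socket (/var/tmp/bdevperf.sock) instead of reading a config file. A minimal sketch of driving that waiting instance by hand is below; the socket path, transport address, and controller name mirror the command line and attach call recorded in this log, and perform_tests is the same helper the harness itself invokes later at multicontroller.sh@95. The scripts/rpc.py entry point is assumed to stand in for the suite's rpc_cmd wrapper.

    # sketch: configure and kick off the waiting bdevperf instance over its RPC socket
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1   # exposes bdev NVMe0n1
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests   # run the queued qd=128, 4k write workload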
00:25:12.953 03:57:31 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:12.953 03:57:31 -- common/autotest_common.sh@10 -- # set +x 00:25:13.888 03:57:32 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:13.888 03:57:32 -- common/autotest_common.sh@852 -- # return 0 00:25:13.888 03:57:32 -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:25:13.888 03:57:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:13.888 03:57:32 -- common/autotest_common.sh@10 -- # set +x 00:25:14.147 NVMe0n1 00:25:14.147 03:57:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:14.147 03:57:32 -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:14.147 03:57:32 -- host/multicontroller.sh@54 -- # grep -c NVMe 00:25:14.147 03:57:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:14.147 03:57:32 -- common/autotest_common.sh@10 -- # set +x 00:25:14.147 03:57:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:14.147 1 00:25:14.147 03:57:32 -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:25:14.147 03:57:32 -- common/autotest_common.sh@640 -- # local es=0 00:25:14.147 03:57:32 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:25:14.147 03:57:32 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:25:14.147 03:57:32 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:14.147 03:57:32 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:25:14.147 03:57:32 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:14.147 03:57:32 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:25:14.147 03:57:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:14.147 03:57:32 -- common/autotest_common.sh@10 -- # set +x 00:25:14.147 request: 00:25:14.147 { 00:25:14.147 "name": "NVMe0", 00:25:14.147 "trtype": "tcp", 00:25:14.147 "traddr": "10.0.0.2", 00:25:14.147 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:25:14.147 "hostaddr": "10.0.0.2", 00:25:14.147 "hostsvcid": "60000", 00:25:14.147 "adrfam": "ipv4", 00:25:14.147 "trsvcid": "4420", 00:25:14.147 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:14.147 "method": "bdev_nvme_attach_controller", 00:25:14.147 "req_id": 1 00:25:14.147 } 00:25:14.147 Got JSON-RPC error response 00:25:14.147 response: 00:25:14.147 { 00:25:14.147 "code": -114, 00:25:14.147 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:25:14.147 } 00:25:14.147 03:57:32 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:25:14.147 03:57:32 -- common/autotest_common.sh@643 -- # es=1 00:25:14.147 03:57:32 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:25:14.147 03:57:32 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:25:14.147 03:57:32 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:25:14.147 03:57:32 -- host/multicontroller.sh@65 -- 
# NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:25:14.147 03:57:32 -- common/autotest_common.sh@640 -- # local es=0 00:25:14.147 03:57:32 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:25:14.147 03:57:32 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:25:14.147 03:57:32 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:14.147 03:57:32 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:25:14.147 03:57:32 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:14.147 03:57:32 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:25:14.147 03:57:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:14.147 03:57:32 -- common/autotest_common.sh@10 -- # set +x 00:25:14.147 request: 00:25:14.147 { 00:25:14.147 "name": "NVMe0", 00:25:14.147 "trtype": "tcp", 00:25:14.147 "traddr": "10.0.0.2", 00:25:14.147 "hostaddr": "10.0.0.2", 00:25:14.147 "hostsvcid": "60000", 00:25:14.147 "adrfam": "ipv4", 00:25:14.147 "trsvcid": "4420", 00:25:14.147 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:25:14.147 "method": "bdev_nvme_attach_controller", 00:25:14.147 "req_id": 1 00:25:14.147 } 00:25:14.147 Got JSON-RPC error response 00:25:14.147 response: 00:25:14.147 { 00:25:14.147 "code": -114, 00:25:14.147 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:25:14.147 } 00:25:14.147 03:57:32 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:25:14.147 03:57:32 -- common/autotest_common.sh@643 -- # es=1 00:25:14.147 03:57:32 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:25:14.147 03:57:32 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:25:14.147 03:57:32 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:25:14.147 03:57:32 -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:25:14.147 03:57:32 -- common/autotest_common.sh@640 -- # local es=0 00:25:14.147 03:57:32 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:25:14.147 03:57:32 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:25:14.147 03:57:32 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:14.147 03:57:32 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:25:14.147 03:57:32 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:14.147 03:57:32 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:25:14.148 03:57:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:14.148 03:57:32 -- common/autotest_common.sh@10 -- # set +x 00:25:14.148 request: 00:25:14.148 { 00:25:14.148 "name": "NVMe0", 00:25:14.148 "trtype": "tcp", 00:25:14.148 "traddr": "10.0.0.2", 00:25:14.148 "hostaddr": 
"10.0.0.2", 00:25:14.148 "hostsvcid": "60000", 00:25:14.148 "adrfam": "ipv4", 00:25:14.148 "trsvcid": "4420", 00:25:14.148 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:14.148 "multipath": "disable", 00:25:14.148 "method": "bdev_nvme_attach_controller", 00:25:14.148 "req_id": 1 00:25:14.148 } 00:25:14.148 Got JSON-RPC error response 00:25:14.148 response: 00:25:14.148 { 00:25:14.148 "code": -114, 00:25:14.148 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:25:14.148 } 00:25:14.148 03:57:32 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:25:14.148 03:57:32 -- common/autotest_common.sh@643 -- # es=1 00:25:14.148 03:57:32 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:25:14.148 03:57:32 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:25:14.148 03:57:32 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:25:14.148 03:57:32 -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:25:14.148 03:57:32 -- common/autotest_common.sh@640 -- # local es=0 00:25:14.148 03:57:32 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:25:14.148 03:57:32 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:25:14.148 03:57:32 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:14.148 03:57:32 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:25:14.148 03:57:32 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:14.148 03:57:32 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:25:14.148 03:57:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:14.148 03:57:32 -- common/autotest_common.sh@10 -- # set +x 00:25:14.148 request: 00:25:14.148 { 00:25:14.148 "name": "NVMe0", 00:25:14.148 "trtype": "tcp", 00:25:14.148 "traddr": "10.0.0.2", 00:25:14.148 "hostaddr": "10.0.0.2", 00:25:14.148 "hostsvcid": "60000", 00:25:14.148 "adrfam": "ipv4", 00:25:14.148 "trsvcid": "4420", 00:25:14.148 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:14.148 "multipath": "failover", 00:25:14.148 "method": "bdev_nvme_attach_controller", 00:25:14.148 "req_id": 1 00:25:14.148 } 00:25:14.148 Got JSON-RPC error response 00:25:14.148 response: 00:25:14.148 { 00:25:14.148 "code": -114, 00:25:14.148 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:25:14.148 } 00:25:14.148 03:57:32 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:25:14.148 03:57:32 -- common/autotest_common.sh@643 -- # es=1 00:25:14.148 03:57:32 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:25:14.148 03:57:32 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:25:14.148 03:57:32 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:25:14.148 03:57:32 -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:14.148 03:57:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:14.148 03:57:32 -- common/autotest_common.sh@10 -- # set +x 00:25:14.409 00:25:14.409 03:57:33 -- common/autotest_common.sh@579 -- # 
[[ 0 == 0 ]] 00:25:14.409 03:57:33 -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:14.409 03:57:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:14.409 03:57:33 -- common/autotest_common.sh@10 -- # set +x 00:25:14.409 03:57:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:14.409 03:57:33 -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:25:14.409 03:57:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:14.409 03:57:33 -- common/autotest_common.sh@10 -- # set +x 00:25:14.409 00:25:14.409 03:57:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:14.409 03:57:33 -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:14.409 03:57:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:14.409 03:57:33 -- host/multicontroller.sh@90 -- # grep -c NVMe 00:25:14.409 03:57:33 -- common/autotest_common.sh@10 -- # set +x 00:25:14.409 03:57:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:14.409 03:57:33 -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:25:14.409 03:57:33 -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:15.783 0 00:25:15.783 03:57:34 -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:25:15.783 03:57:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:15.783 03:57:34 -- common/autotest_common.sh@10 -- # set +x 00:25:15.783 03:57:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:15.783 03:57:34 -- host/multicontroller.sh@100 -- # killprocess 2459503 00:25:15.783 03:57:34 -- common/autotest_common.sh@926 -- # '[' -z 2459503 ']' 00:25:15.783 03:57:34 -- common/autotest_common.sh@930 -- # kill -0 2459503 00:25:15.783 03:57:34 -- common/autotest_common.sh@931 -- # uname 00:25:15.783 03:57:34 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:15.783 03:57:34 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2459503 00:25:15.783 03:57:34 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:25:15.783 03:57:34 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:25:15.783 03:57:34 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2459503' 00:25:15.783 killing process with pid 2459503 00:25:15.783 03:57:34 -- common/autotest_common.sh@945 -- # kill 2459503 00:25:15.783 03:57:34 -- common/autotest_common.sh@950 -- # wait 2459503 00:25:15.783 03:57:34 -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:15.783 03:57:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:15.783 03:57:34 -- common/autotest_common.sh@10 -- # set +x 00:25:15.783 03:57:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:15.783 03:57:34 -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:25:15.783 03:57:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:15.783 03:57:34 -- common/autotest_common.sh@10 -- # set +x 00:25:15.783 03:57:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:15.783 03:57:34 -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 
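Editor's note: taken together, the attach attempts above exercise bdev_nvme_attach_controller's duplicate handling in this run: every call that reuses the name NVMe0 for a path it already holds, or that changes hostnqn, subnqn, or the multipath mode (-x disable / -x failover) on that path, is refused with JSON-RPC error -114, while a genuinely new path or a new controller name is accepted. A condensed sketch of the two accepted calls, copied from the sequence above, is:

    # accepted: second listener port added as an additional path under the existing name
    rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    # accepted: same subsystem attached again under a brand-new controller name
    rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 \
        -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000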
00:25:15.783 03:57:34 -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:15.783 03:57:34 -- common/autotest_common.sh@1597 -- # read -r file 00:25:15.783 03:57:34 -- common/autotest_common.sh@1596 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:25:15.783 03:57:34 -- common/autotest_common.sh@1596 -- # sort -u 00:25:15.783 03:57:34 -- common/autotest_common.sh@1598 -- # cat 00:25:15.783 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:25:15.783 [2024-07-14 03:57:31.798478] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:25:15.783 [2024-07-14 03:57:31.798565] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2459503 ] 00:25:15.783 EAL: No free 2048 kB hugepages reported on node 1 00:25:15.783 [2024-07-14 03:57:31.859300] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:15.783 [2024-07-14 03:57:31.943729] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:15.783 [2024-07-14 03:57:33.254395] bdev.c:4553:bdev_name_add: *ERROR*: Bdev name a83b1f9d-085c-41ee-875e-055e7132163f already exists 00:25:15.783 [2024-07-14 03:57:33.254436] bdev.c:7603:bdev_register: *ERROR*: Unable to add uuid:a83b1f9d-085c-41ee-875e-055e7132163f alias for bdev NVMe1n1 00:25:15.783 [2024-07-14 03:57:33.254454] bdev_nvme.c:4236:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:25:15.783 Running I/O for 1 seconds... 00:25:15.783 00:25:15.783 Latency(us) 00:25:15.783 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:15.783 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:25:15.783 NVMe0n1 : 1.00 19449.91 75.98 0.00 0.00 6563.61 4490.43 13204.29 00:25:15.783 =================================================================================================================== 00:25:15.783 Total : 19449.91 75.98 0.00 0.00 6563.61 4490.43 13204.29 00:25:15.783 Received shutdown signal, test time was about 1.000000 seconds 00:25:15.783 00:25:15.783 Latency(us) 00:25:15.783 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:15.783 =================================================================================================================== 00:25:15.783 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:15.783 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:25:15.783 03:57:34 -- common/autotest_common.sh@1603 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:15.783 03:57:34 -- common/autotest_common.sh@1597 -- # read -r file 00:25:15.783 03:57:34 -- host/multicontroller.sh@108 -- # nvmftestfini 00:25:15.783 03:57:34 -- nvmf/common.sh@476 -- # nvmfcleanup 00:25:15.783 03:57:34 -- nvmf/common.sh@116 -- # sync 00:25:15.783 03:57:34 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:25:15.783 03:57:34 -- nvmf/common.sh@119 -- # set +e 00:25:15.783 03:57:34 -- nvmf/common.sh@120 -- # for i in {1..20} 00:25:15.783 03:57:34 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:25:15.783 rmmod nvme_tcp 00:25:15.783 rmmod nvme_fabrics 00:25:15.783 rmmod nvme_keyring 00:25:16.041 03:57:34 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:25:16.041 03:57:34 -- nvmf/common.sh@123 -- # set 
-e 00:25:16.041 03:57:34 -- nvmf/common.sh@124 -- # return 0 00:25:16.041 03:57:34 -- nvmf/common.sh@477 -- # '[' -n 2459345 ']' 00:25:16.041 03:57:34 -- nvmf/common.sh@478 -- # killprocess 2459345 00:25:16.041 03:57:34 -- common/autotest_common.sh@926 -- # '[' -z 2459345 ']' 00:25:16.041 03:57:34 -- common/autotest_common.sh@930 -- # kill -0 2459345 00:25:16.041 03:57:34 -- common/autotest_common.sh@931 -- # uname 00:25:16.041 03:57:34 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:16.041 03:57:34 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2459345 00:25:16.041 03:57:34 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:25:16.041 03:57:34 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:25:16.041 03:57:34 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2459345' 00:25:16.041 killing process with pid 2459345 00:25:16.041 03:57:34 -- common/autotest_common.sh@945 -- # kill 2459345 00:25:16.041 03:57:34 -- common/autotest_common.sh@950 -- # wait 2459345 00:25:16.300 03:57:35 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:25:16.300 03:57:35 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:25:16.300 03:57:35 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:25:16.300 03:57:35 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:16.300 03:57:35 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:25:16.300 03:57:35 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:16.300 03:57:35 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:16.300 03:57:35 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:18.203 03:57:37 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:25:18.203 00:25:18.203 real 0m8.649s 00:25:18.203 user 0m16.616s 00:25:18.203 sys 0m2.258s 00:25:18.203 03:57:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:18.203 03:57:37 -- common/autotest_common.sh@10 -- # set +x 00:25:18.203 ************************************ 00:25:18.203 END TEST nvmf_multicontroller 00:25:18.203 ************************************ 00:25:18.203 03:57:37 -- nvmf/nvmf.sh@92 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:25:18.203 03:57:37 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:25:18.203 03:57:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:18.203 03:57:37 -- common/autotest_common.sh@10 -- # set +x 00:25:18.203 ************************************ 00:25:18.203 START TEST nvmf_aer 00:25:18.203 ************************************ 00:25:18.203 03:57:37 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:25:18.203 * Looking for test storage... 
00:25:18.461 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:18.461 03:57:37 -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:18.461 03:57:37 -- nvmf/common.sh@7 -- # uname -s 00:25:18.461 03:57:37 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:18.461 03:57:37 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:18.461 03:57:37 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:18.461 03:57:37 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:18.461 03:57:37 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:18.461 03:57:37 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:18.461 03:57:37 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:18.461 03:57:37 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:18.461 03:57:37 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:18.461 03:57:37 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:18.461 03:57:37 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:18.461 03:57:37 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:18.461 03:57:37 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:18.461 03:57:37 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:18.461 03:57:37 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:18.461 03:57:37 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:18.462 03:57:37 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:18.462 03:57:37 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:18.462 03:57:37 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:18.462 03:57:37 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:18.462 03:57:37 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:18.462 03:57:37 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:18.462 03:57:37 -- paths/export.sh@5 -- # export PATH 00:25:18.462 03:57:37 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:18.462 03:57:37 -- nvmf/common.sh@46 -- # : 0 00:25:18.462 03:57:37 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:25:18.462 03:57:37 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:25:18.462 03:57:37 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:25:18.462 03:57:37 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:18.462 03:57:37 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:18.462 03:57:37 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:25:18.462 03:57:37 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:25:18.462 03:57:37 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:25:18.462 03:57:37 -- host/aer.sh@11 -- # nvmftestinit 00:25:18.462 03:57:37 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:25:18.462 03:57:37 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:18.462 03:57:37 -- nvmf/common.sh@436 -- # prepare_net_devs 00:25:18.462 03:57:37 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:25:18.462 03:57:37 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:25:18.462 03:57:37 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:18.462 03:57:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:18.462 03:57:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:18.462 03:57:37 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:25:18.462 03:57:37 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:25:18.462 03:57:37 -- nvmf/common.sh@284 -- # xtrace_disable 00:25:18.462 03:57:37 -- common/autotest_common.sh@10 -- # set +x 00:25:20.361 03:57:39 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:25:20.361 03:57:39 -- nvmf/common.sh@290 -- # pci_devs=() 00:25:20.361 03:57:39 -- nvmf/common.sh@290 -- # local -a pci_devs 00:25:20.361 03:57:39 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:25:20.361 03:57:39 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:25:20.361 03:57:39 -- nvmf/common.sh@292 -- # pci_drivers=() 00:25:20.361 03:57:39 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:25:20.361 03:57:39 -- nvmf/common.sh@294 -- # net_devs=() 00:25:20.361 03:57:39 -- nvmf/common.sh@294 -- # local -ga net_devs 00:25:20.361 03:57:39 -- nvmf/common.sh@295 -- # e810=() 00:25:20.361 03:57:39 -- nvmf/common.sh@295 -- # local -ga e810 00:25:20.361 03:57:39 -- nvmf/common.sh@296 -- # x722=() 00:25:20.361 
03:57:39 -- nvmf/common.sh@296 -- # local -ga x722 00:25:20.361 03:57:39 -- nvmf/common.sh@297 -- # mlx=() 00:25:20.361 03:57:39 -- nvmf/common.sh@297 -- # local -ga mlx 00:25:20.361 03:57:39 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:20.361 03:57:39 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:20.361 03:57:39 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:20.361 03:57:39 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:20.361 03:57:39 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:20.361 03:57:39 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:20.361 03:57:39 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:20.362 03:57:39 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:20.362 03:57:39 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:20.362 03:57:39 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:20.362 03:57:39 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:20.362 03:57:39 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:25:20.362 03:57:39 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:25:20.362 03:57:39 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:25:20.362 03:57:39 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:25:20.362 03:57:39 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:25:20.362 03:57:39 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:25:20.362 03:57:39 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:20.362 03:57:39 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:20.362 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:20.362 03:57:39 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:20.362 03:57:39 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:20.362 03:57:39 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:20.362 03:57:39 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:20.362 03:57:39 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:20.362 03:57:39 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:20.362 03:57:39 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:20.362 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:20.362 03:57:39 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:20.362 03:57:39 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:20.362 03:57:39 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:20.362 03:57:39 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:20.362 03:57:39 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:20.362 03:57:39 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:25:20.362 03:57:39 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:25:20.362 03:57:39 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:25:20.362 03:57:39 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:20.362 03:57:39 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:20.362 03:57:39 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:20.362 03:57:39 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:20.362 03:57:39 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:20.362 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:20.362 03:57:39 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:20.362 03:57:39 -- 
nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:20.362 03:57:39 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:20.362 03:57:39 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:20.362 03:57:39 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:20.362 03:57:39 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:20.362 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:20.362 03:57:39 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:20.362 03:57:39 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:25:20.362 03:57:39 -- nvmf/common.sh@402 -- # is_hw=yes 00:25:20.362 03:57:39 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:25:20.362 03:57:39 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:25:20.362 03:57:39 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:25:20.362 03:57:39 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:20.362 03:57:39 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:20.362 03:57:39 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:20.362 03:57:39 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:25:20.362 03:57:39 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:20.362 03:57:39 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:20.362 03:57:39 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:25:20.362 03:57:39 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:20.362 03:57:39 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:20.362 03:57:39 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:25:20.362 03:57:39 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:25:20.362 03:57:39 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:25:20.362 03:57:39 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:20.362 03:57:39 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:20.362 03:57:39 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:20.362 03:57:39 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:25:20.362 03:57:39 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:20.362 03:57:39 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:20.362 03:57:39 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:20.362 03:57:39 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:25:20.362 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:20.362 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.128 ms 00:25:20.362 00:25:20.362 --- 10.0.0.2 ping statistics --- 00:25:20.362 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:20.362 rtt min/avg/max/mdev = 0.128/0.128/0.128/0.000 ms 00:25:20.362 03:57:39 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:20.362 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:20.362 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.082 ms 00:25:20.362 00:25:20.362 --- 10.0.0.1 ping statistics --- 00:25:20.362 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:20.362 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:25:20.362 03:57:39 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:20.362 03:57:39 -- nvmf/common.sh@410 -- # return 0 00:25:20.362 03:57:39 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:25:20.362 03:57:39 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:20.362 03:57:39 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:25:20.362 03:57:39 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:25:20.362 03:57:39 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:20.362 03:57:39 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:25:20.362 03:57:39 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:25:20.362 03:57:39 -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:25:20.362 03:57:39 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:25:20.362 03:57:39 -- common/autotest_common.sh@712 -- # xtrace_disable 00:25:20.362 03:57:39 -- common/autotest_common.sh@10 -- # set +x 00:25:20.362 03:57:39 -- nvmf/common.sh@469 -- # nvmfpid=2461811 00:25:20.362 03:57:39 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:20.362 03:57:39 -- nvmf/common.sh@470 -- # waitforlisten 2461811 00:25:20.362 03:57:39 -- common/autotest_common.sh@819 -- # '[' -z 2461811 ']' 00:25:20.362 03:57:39 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:20.362 03:57:39 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:20.362 03:57:39 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:20.362 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:20.362 03:57:39 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:20.362 03:57:39 -- common/autotest_common.sh@10 -- # set +x 00:25:20.620 [2024-07-14 03:57:39.340936] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:25:20.620 [2024-07-14 03:57:39.341014] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:20.620 EAL: No free 2048 kB hugepages reported on node 1 00:25:20.620 [2024-07-14 03:57:39.410123] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:20.620 [2024-07-14 03:57:39.499791] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:20.620 [2024-07-14 03:57:39.499965] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:20.620 [2024-07-14 03:57:39.499988] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:20.620 [2024-07-14 03:57:39.500004] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:20.620 [2024-07-14 03:57:39.500089] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:20.620 [2024-07-14 03:57:39.500145] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:20.620 [2024-07-14 03:57:39.500247] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:20.620 [2024-07-14 03:57:39.500250] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:21.553 03:57:40 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:21.553 03:57:40 -- common/autotest_common.sh@852 -- # return 0 00:25:21.553 03:57:40 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:25:21.553 03:57:40 -- common/autotest_common.sh@718 -- # xtrace_disable 00:25:21.553 03:57:40 -- common/autotest_common.sh@10 -- # set +x 00:25:21.553 03:57:40 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:21.553 03:57:40 -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:21.553 03:57:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:21.553 03:57:40 -- common/autotest_common.sh@10 -- # set +x 00:25:21.553 [2024-07-14 03:57:40.337568] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:21.553 03:57:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:21.553 03:57:40 -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:25:21.553 03:57:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:21.553 03:57:40 -- common/autotest_common.sh@10 -- # set +x 00:25:21.553 Malloc0 00:25:21.553 03:57:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:21.553 03:57:40 -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:25:21.553 03:57:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:21.553 03:57:40 -- common/autotest_common.sh@10 -- # set +x 00:25:21.553 03:57:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:21.553 03:57:40 -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:21.553 03:57:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:21.553 03:57:40 -- common/autotest_common.sh@10 -- # set +x 00:25:21.553 03:57:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:21.553 03:57:40 -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:21.553 03:57:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:21.553 03:57:40 -- common/autotest_common.sh@10 -- # set +x 00:25:21.553 [2024-07-14 03:57:40.389090] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:21.553 03:57:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:21.553 03:57:40 -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:25:21.553 03:57:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:21.553 03:57:40 -- common/autotest_common.sh@10 -- # set +x 00:25:21.553 [2024-07-14 03:57:40.396822] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:25:21.553 [ 00:25:21.553 { 00:25:21.553 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:25:21.553 "subtype": "Discovery", 00:25:21.553 "listen_addresses": [], 00:25:21.553 "allow_any_host": true, 00:25:21.553 "hosts": [] 00:25:21.553 }, 00:25:21.553 { 00:25:21.553 "nqn": "nqn.2016-06.io.spdk:cnode1", 
00:25:21.553 "subtype": "NVMe", 00:25:21.553 "listen_addresses": [ 00:25:21.553 { 00:25:21.553 "transport": "TCP", 00:25:21.553 "trtype": "TCP", 00:25:21.553 "adrfam": "IPv4", 00:25:21.553 "traddr": "10.0.0.2", 00:25:21.553 "trsvcid": "4420" 00:25:21.553 } 00:25:21.553 ], 00:25:21.553 "allow_any_host": true, 00:25:21.553 "hosts": [], 00:25:21.553 "serial_number": "SPDK00000000000001", 00:25:21.553 "model_number": "SPDK bdev Controller", 00:25:21.553 "max_namespaces": 2, 00:25:21.553 "min_cntlid": 1, 00:25:21.553 "max_cntlid": 65519, 00:25:21.553 "namespaces": [ 00:25:21.553 { 00:25:21.553 "nsid": 1, 00:25:21.553 "bdev_name": "Malloc0", 00:25:21.553 "name": "Malloc0", 00:25:21.553 "nguid": "7DF25FB15D0C4907BF121DE76F02C7D5", 00:25:21.553 "uuid": "7df25fb1-5d0c-4907-bf12-1de76f02c7d5" 00:25:21.553 } 00:25:21.553 ] 00:25:21.553 } 00:25:21.553 ] 00:25:21.553 03:57:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:21.553 03:57:40 -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:25:21.553 03:57:40 -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:25:21.553 03:57:40 -- host/aer.sh@33 -- # aerpid=2461963 00:25:21.553 03:57:40 -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:25:21.553 03:57:40 -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:25:21.553 03:57:40 -- common/autotest_common.sh@1244 -- # local i=0 00:25:21.553 03:57:40 -- common/autotest_common.sh@1245 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:25:21.553 03:57:40 -- common/autotest_common.sh@1246 -- # '[' 0 -lt 200 ']' 00:25:21.553 03:57:40 -- common/autotest_common.sh@1247 -- # i=1 00:25:21.553 03:57:40 -- common/autotest_common.sh@1248 -- # sleep 0.1 00:25:21.553 EAL: No free 2048 kB hugepages reported on node 1 00:25:21.811 03:57:40 -- common/autotest_common.sh@1245 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:25:21.811 03:57:40 -- common/autotest_common.sh@1246 -- # '[' 1 -lt 200 ']' 00:25:21.811 03:57:40 -- common/autotest_common.sh@1247 -- # i=2 00:25:21.811 03:57:40 -- common/autotest_common.sh@1248 -- # sleep 0.1 00:25:21.811 03:57:40 -- common/autotest_common.sh@1245 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:25:21.811 03:57:40 -- common/autotest_common.sh@1251 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:25:21.811 03:57:40 -- common/autotest_common.sh@1255 -- # return 0 00:25:21.811 03:57:40 -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:25:21.811 03:57:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:21.811 03:57:40 -- common/autotest_common.sh@10 -- # set +x 00:25:21.811 Malloc1 00:25:21.811 03:57:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:21.811 03:57:40 -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:25:21.811 03:57:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:21.811 03:57:40 -- common/autotest_common.sh@10 -- # set +x 00:25:21.811 03:57:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:21.811 03:57:40 -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:25:21.811 03:57:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:21.811 03:57:40 -- common/autotest_common.sh@10 -- # set +x 00:25:21.811 Asynchronous Event Request test 00:25:21.811 Attaching to 10.0.0.2 00:25:21.811 Attached to 10.0.0.2 00:25:21.811 Registering asynchronous event callbacks... 
00:25:21.811 Starting namespace attribute notice tests for all controllers... 00:25:21.811 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:25:21.811 aer_cb - Changed Namespace 00:25:21.811 Cleaning up... 00:25:21.811 [ 00:25:21.811 { 00:25:21.811 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:25:21.811 "subtype": "Discovery", 00:25:21.811 "listen_addresses": [], 00:25:21.811 "allow_any_host": true, 00:25:21.811 "hosts": [] 00:25:21.811 }, 00:25:21.811 { 00:25:21.811 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:21.811 "subtype": "NVMe", 00:25:21.811 "listen_addresses": [ 00:25:21.811 { 00:25:21.811 "transport": "TCP", 00:25:21.811 "trtype": "TCP", 00:25:21.811 "adrfam": "IPv4", 00:25:21.811 "traddr": "10.0.0.2", 00:25:21.811 "trsvcid": "4420" 00:25:21.811 } 00:25:21.811 ], 00:25:21.811 "allow_any_host": true, 00:25:21.811 "hosts": [], 00:25:21.811 "serial_number": "SPDK00000000000001", 00:25:21.811 "model_number": "SPDK bdev Controller", 00:25:21.811 "max_namespaces": 2, 00:25:21.811 "min_cntlid": 1, 00:25:21.811 "max_cntlid": 65519, 00:25:21.811 "namespaces": [ 00:25:21.811 { 00:25:21.811 "nsid": 1, 00:25:21.811 "bdev_name": "Malloc0", 00:25:21.811 "name": "Malloc0", 00:25:21.811 "nguid": "7DF25FB15D0C4907BF121DE76F02C7D5", 00:25:21.812 "uuid": "7df25fb1-5d0c-4907-bf12-1de76f02c7d5" 00:25:21.812 }, 00:25:21.812 { 00:25:21.812 "nsid": 2, 00:25:21.812 "bdev_name": "Malloc1", 00:25:21.812 "name": "Malloc1", 00:25:21.812 "nguid": "B24D163ACBD64971BEE3E257E7CB9879", 00:25:21.812 "uuid": "b24d163a-cbd6-4971-bee3-e257e7cb9879" 00:25:21.812 } 00:25:21.812 ] 00:25:21.812 } 00:25:21.812 ] 00:25:21.812 03:57:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:21.812 03:57:40 -- host/aer.sh@43 -- # wait 2461963 00:25:21.812 03:57:40 -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:25:21.812 03:57:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:21.812 03:57:40 -- common/autotest_common.sh@10 -- # set +x 00:25:21.812 03:57:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:21.812 03:57:40 -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:25:21.812 03:57:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:21.812 03:57:40 -- common/autotest_common.sh@10 -- # set +x 00:25:21.812 03:57:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:21.812 03:57:40 -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:21.812 03:57:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:21.812 03:57:40 -- common/autotest_common.sh@10 -- # set +x 00:25:22.070 03:57:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:22.070 03:57:40 -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:25:22.070 03:57:40 -- host/aer.sh@51 -- # nvmftestfini 00:25:22.070 03:57:40 -- nvmf/common.sh@476 -- # nvmfcleanup 00:25:22.070 03:57:40 -- nvmf/common.sh@116 -- # sync 00:25:22.070 03:57:40 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:25:22.070 03:57:40 -- nvmf/common.sh@119 -- # set +e 00:25:22.070 03:57:40 -- nvmf/common.sh@120 -- # for i in {1..20} 00:25:22.070 03:57:40 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:25:22.070 rmmod nvme_tcp 00:25:22.070 rmmod nvme_fabrics 00:25:22.070 rmmod nvme_keyring 00:25:22.070 03:57:40 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:25:22.070 03:57:40 -- nvmf/common.sh@123 -- # set -e 00:25:22.070 03:57:40 -- nvmf/common.sh@124 -- # return 0 00:25:22.070 03:57:40 -- nvmf/common.sh@477 -- # '[' -n 2461811 ']' 00:25:22.070 03:57:40 
-- nvmf/common.sh@478 -- # killprocess 2461811 00:25:22.070 03:57:40 -- common/autotest_common.sh@926 -- # '[' -z 2461811 ']' 00:25:22.070 03:57:40 -- common/autotest_common.sh@930 -- # kill -0 2461811 00:25:22.070 03:57:40 -- common/autotest_common.sh@931 -- # uname 00:25:22.070 03:57:40 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:22.070 03:57:40 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2461811 00:25:22.070 03:57:40 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:25:22.070 03:57:40 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:25:22.070 03:57:40 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2461811' 00:25:22.070 killing process with pid 2461811 00:25:22.070 03:57:40 -- common/autotest_common.sh@945 -- # kill 2461811 00:25:22.070 [2024-07-14 03:57:40.852245] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:25:22.070 03:57:40 -- common/autotest_common.sh@950 -- # wait 2461811 00:25:22.329 03:57:41 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:25:22.329 03:57:41 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:25:22.329 03:57:41 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:25:22.329 03:57:41 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:22.329 03:57:41 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:25:22.329 03:57:41 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:22.329 03:57:41 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:22.329 03:57:41 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:24.232 03:57:43 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:25:24.232 00:25:24.232 real 0m6.030s 00:25:24.232 user 0m7.153s 00:25:24.232 sys 0m1.911s 00:25:24.232 03:57:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:24.232 03:57:43 -- common/autotest_common.sh@10 -- # set +x 00:25:24.232 ************************************ 00:25:24.232 END TEST nvmf_aer 00:25:24.232 ************************************ 00:25:24.232 03:57:43 -- nvmf/nvmf.sh@93 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:25:24.232 03:57:43 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:25:24.232 03:57:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:24.232 03:57:43 -- common/autotest_common.sh@10 -- # set +x 00:25:24.232 ************************************ 00:25:24.232 START TEST nvmf_async_init 00:25:24.232 ************************************ 00:25:24.232 03:57:43 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:25:24.491 * Looking for test storage... 
00:25:24.491 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:24.491 03:57:43 -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:24.491 03:57:43 -- nvmf/common.sh@7 -- # uname -s 00:25:24.491 03:57:43 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:24.491 03:57:43 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:24.491 03:57:43 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:24.491 03:57:43 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:24.491 03:57:43 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:24.491 03:57:43 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:24.491 03:57:43 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:24.491 03:57:43 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:24.491 03:57:43 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:24.491 03:57:43 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:24.491 03:57:43 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:24.491 03:57:43 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:24.491 03:57:43 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:24.491 03:57:43 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:24.491 03:57:43 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:24.491 03:57:43 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:24.491 03:57:43 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:24.491 03:57:43 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:24.491 03:57:43 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:24.491 03:57:43 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:24.491 03:57:43 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:24.491 03:57:43 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:24.491 03:57:43 -- paths/export.sh@5 -- # export PATH 00:25:24.491 03:57:43 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:24.491 03:57:43 -- nvmf/common.sh@46 -- # : 0 00:25:24.491 03:57:43 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:25:24.491 03:57:43 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:25:24.491 03:57:43 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:25:24.491 03:57:43 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:24.491 03:57:43 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:24.491 03:57:43 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:25:24.491 03:57:43 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:25:24.491 03:57:43 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:25:24.491 03:57:43 -- host/async_init.sh@13 -- # null_bdev_size=1024 00:25:24.491 03:57:43 -- host/async_init.sh@14 -- # null_block_size=512 00:25:24.491 03:57:43 -- host/async_init.sh@15 -- # null_bdev=null0 00:25:24.491 03:57:43 -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:25:24.491 03:57:43 -- host/async_init.sh@20 -- # uuidgen 00:25:24.491 03:57:43 -- host/async_init.sh@20 -- # tr -d - 00:25:24.491 03:57:43 -- host/async_init.sh@20 -- # nguid=a8822ca2b8a14c8fb117ba9ca5ec5a0b 00:25:24.491 03:57:43 -- host/async_init.sh@22 -- # nvmftestinit 00:25:24.491 03:57:43 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:25:24.491 03:57:43 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:24.491 03:57:43 -- nvmf/common.sh@436 -- # prepare_net_devs 00:25:24.491 03:57:43 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:25:24.491 03:57:43 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:25:24.491 03:57:43 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:24.491 03:57:43 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:24.491 03:57:43 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:24.491 03:57:43 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:25:24.491 03:57:43 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:25:24.491 03:57:43 -- nvmf/common.sh@284 -- # xtrace_disable 00:25:24.491 03:57:43 -- common/autotest_common.sh@10 -- # set +x 00:25:26.393 03:57:45 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:25:26.393 03:57:45 -- nvmf/common.sh@290 -- # pci_devs=() 00:25:26.393 03:57:45 -- nvmf/common.sh@290 -- # local -a pci_devs 00:25:26.393 03:57:45 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:25:26.393 03:57:45 -- 
nvmf/common.sh@291 -- # local -a pci_net_devs 00:25:26.393 03:57:45 -- nvmf/common.sh@292 -- # pci_drivers=() 00:25:26.393 03:57:45 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:25:26.393 03:57:45 -- nvmf/common.sh@294 -- # net_devs=() 00:25:26.393 03:57:45 -- nvmf/common.sh@294 -- # local -ga net_devs 00:25:26.393 03:57:45 -- nvmf/common.sh@295 -- # e810=() 00:25:26.393 03:57:45 -- nvmf/common.sh@295 -- # local -ga e810 00:25:26.393 03:57:45 -- nvmf/common.sh@296 -- # x722=() 00:25:26.393 03:57:45 -- nvmf/common.sh@296 -- # local -ga x722 00:25:26.393 03:57:45 -- nvmf/common.sh@297 -- # mlx=() 00:25:26.394 03:57:45 -- nvmf/common.sh@297 -- # local -ga mlx 00:25:26.394 03:57:45 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:26.394 03:57:45 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:26.394 03:57:45 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:26.394 03:57:45 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:26.394 03:57:45 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:26.394 03:57:45 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:26.394 03:57:45 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:26.394 03:57:45 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:26.394 03:57:45 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:26.394 03:57:45 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:26.394 03:57:45 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:26.394 03:57:45 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:25:26.394 03:57:45 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:25:26.394 03:57:45 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:25:26.394 03:57:45 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:25:26.394 03:57:45 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:25:26.394 03:57:45 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:25:26.394 03:57:45 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:26.394 03:57:45 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:26.394 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:26.394 03:57:45 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:26.394 03:57:45 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:26.394 03:57:45 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:26.394 03:57:45 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:26.394 03:57:45 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:26.394 03:57:45 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:26.394 03:57:45 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:26.394 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:26.394 03:57:45 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:26.394 03:57:45 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:26.394 03:57:45 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:26.394 03:57:45 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:26.394 03:57:45 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:26.394 03:57:45 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:25:26.394 03:57:45 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:25:26.394 03:57:45 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:25:26.394 03:57:45 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:26.394 
03:57:45 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:26.394 03:57:45 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:26.394 03:57:45 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:26.394 03:57:45 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:26.394 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:26.394 03:57:45 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:26.394 03:57:45 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:26.394 03:57:45 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:26.394 03:57:45 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:26.394 03:57:45 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:26.394 03:57:45 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:26.394 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:26.394 03:57:45 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:26.394 03:57:45 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:25:26.394 03:57:45 -- nvmf/common.sh@402 -- # is_hw=yes 00:25:26.394 03:57:45 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:25:26.394 03:57:45 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:25:26.394 03:57:45 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:25:26.394 03:57:45 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:26.394 03:57:45 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:26.394 03:57:45 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:26.394 03:57:45 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:25:26.394 03:57:45 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:26.394 03:57:45 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:26.394 03:57:45 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:25:26.394 03:57:45 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:26.394 03:57:45 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:26.394 03:57:45 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:25:26.394 03:57:45 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:25:26.394 03:57:45 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:25:26.394 03:57:45 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:26.394 03:57:45 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:26.394 03:57:45 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:26.394 03:57:45 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:25:26.394 03:57:45 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:26.394 03:57:45 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:26.394 03:57:45 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:26.394 03:57:45 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:25:26.394 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:26.394 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.261 ms 00:25:26.394 00:25:26.394 --- 10.0.0.2 ping statistics --- 00:25:26.394 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:26.394 rtt min/avg/max/mdev = 0.261/0.261/0.261/0.000 ms 00:25:26.394 03:57:45 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:26.394 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:26.394 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.137 ms 00:25:26.394 00:25:26.394 --- 10.0.0.1 ping statistics --- 00:25:26.394 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:26.394 rtt min/avg/max/mdev = 0.137/0.137/0.137/0.000 ms 00:25:26.394 03:57:45 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:26.394 03:57:45 -- nvmf/common.sh@410 -- # return 0 00:25:26.394 03:57:45 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:25:26.394 03:57:45 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:26.394 03:57:45 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:25:26.394 03:57:45 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:25:26.394 03:57:45 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:26.394 03:57:45 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:25:26.394 03:57:45 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:25:26.653 03:57:45 -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:25:26.653 03:57:45 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:25:26.653 03:57:45 -- common/autotest_common.sh@712 -- # xtrace_disable 00:25:26.653 03:57:45 -- common/autotest_common.sh@10 -- # set +x 00:25:26.653 03:57:45 -- nvmf/common.sh@469 -- # nvmfpid=2463969 00:25:26.653 03:57:45 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:25:26.653 03:57:45 -- nvmf/common.sh@470 -- # waitforlisten 2463969 00:25:26.653 03:57:45 -- common/autotest_common.sh@819 -- # '[' -z 2463969 ']' 00:25:26.653 03:57:45 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:26.653 03:57:45 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:26.653 03:57:45 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:26.653 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:26.653 03:57:45 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:26.653 03:57:45 -- common/autotest_common.sh@10 -- # set +x 00:25:26.653 [2024-07-14 03:57:45.389323] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:25:26.653 [2024-07-14 03:57:45.389399] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:26.653 EAL: No free 2048 kB hugepages reported on node 1 00:25:26.653 [2024-07-14 03:57:45.453593] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:26.653 [2024-07-14 03:57:45.536668] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:26.653 [2024-07-14 03:57:45.536806] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:26.653 [2024-07-14 03:57:45.536821] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:26.653 [2024-07-14 03:57:45.536832] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
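For reference, the target startup traced above reduces to roughly the following (a minimal sketch, not part of this log: the workspace path, network namespace, shm id 0 and core mask 0x1 are the values printed by this run, and the polling loop is only a simplified stand-in for the waitforlisten helper used by the test):

# launch the SPDK NVMe-oF target inside the test network namespace
sudo ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
nvmfpid=$!

# poll the default RPC socket (/var/tmp/spdk.sock) until the target answers, then start issuing RPCs
until /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done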
00:25:26.653 [2024-07-14 03:57:45.536881] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:27.588 03:57:46 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:27.588 03:57:46 -- common/autotest_common.sh@852 -- # return 0 00:25:27.588 03:57:46 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:25:27.588 03:57:46 -- common/autotest_common.sh@718 -- # xtrace_disable 00:25:27.588 03:57:46 -- common/autotest_common.sh@10 -- # set +x 00:25:27.588 03:57:46 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:27.588 03:57:46 -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:25:27.588 03:57:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:27.588 03:57:46 -- common/autotest_common.sh@10 -- # set +x 00:25:27.588 [2024-07-14 03:57:46.353900] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:27.588 03:57:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:27.588 03:57:46 -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:25:27.588 03:57:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:27.588 03:57:46 -- common/autotest_common.sh@10 -- # set +x 00:25:27.588 null0 00:25:27.588 03:57:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:27.588 03:57:46 -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:25:27.588 03:57:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:27.588 03:57:46 -- common/autotest_common.sh@10 -- # set +x 00:25:27.588 03:57:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:27.588 03:57:46 -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:25:27.588 03:57:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:27.588 03:57:46 -- common/autotest_common.sh@10 -- # set +x 00:25:27.588 03:57:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:27.588 03:57:46 -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g a8822ca2b8a14c8fb117ba9ca5ec5a0b 00:25:27.588 03:57:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:27.588 03:57:46 -- common/autotest_common.sh@10 -- # set +x 00:25:27.588 03:57:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:27.588 03:57:46 -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:27.588 03:57:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:27.589 03:57:46 -- common/autotest_common.sh@10 -- # set +x 00:25:27.589 [2024-07-14 03:57:46.394171] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:27.589 03:57:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:27.589 03:57:46 -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:25:27.589 03:57:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:27.589 03:57:46 -- common/autotest_common.sh@10 -- # set +x 00:25:27.847 nvme0n1 00:25:27.847 03:57:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:27.847 03:57:46 -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:25:27.847 03:57:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:27.847 03:57:46 -- common/autotest_common.sh@10 -- # set +x 00:25:27.847 [ 00:25:27.847 { 00:25:27.847 "name": "nvme0n1", 00:25:27.847 "aliases": [ 00:25:27.847 
"a8822ca2-b8a1-4c8f-b117-ba9ca5ec5a0b" 00:25:27.847 ], 00:25:27.847 "product_name": "NVMe disk", 00:25:27.847 "block_size": 512, 00:25:27.847 "num_blocks": 2097152, 00:25:27.847 "uuid": "a8822ca2-b8a1-4c8f-b117-ba9ca5ec5a0b", 00:25:27.848 "assigned_rate_limits": { 00:25:27.848 "rw_ios_per_sec": 0, 00:25:27.848 "rw_mbytes_per_sec": 0, 00:25:27.848 "r_mbytes_per_sec": 0, 00:25:27.848 "w_mbytes_per_sec": 0 00:25:27.848 }, 00:25:27.848 "claimed": false, 00:25:27.848 "zoned": false, 00:25:27.848 "supported_io_types": { 00:25:27.848 "read": true, 00:25:27.848 "write": true, 00:25:27.848 "unmap": false, 00:25:27.848 "write_zeroes": true, 00:25:27.848 "flush": true, 00:25:27.848 "reset": true, 00:25:27.848 "compare": true, 00:25:27.848 "compare_and_write": true, 00:25:27.848 "abort": true, 00:25:27.848 "nvme_admin": true, 00:25:27.848 "nvme_io": true 00:25:27.848 }, 00:25:27.848 "driver_specific": { 00:25:27.848 "nvme": [ 00:25:27.848 { 00:25:27.848 "trid": { 00:25:27.848 "trtype": "TCP", 00:25:27.848 "adrfam": "IPv4", 00:25:27.848 "traddr": "10.0.0.2", 00:25:27.848 "trsvcid": "4420", 00:25:27.848 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:25:27.848 }, 00:25:27.848 "ctrlr_data": { 00:25:27.848 "cntlid": 1, 00:25:27.848 "vendor_id": "0x8086", 00:25:27.848 "model_number": "SPDK bdev Controller", 00:25:27.848 "serial_number": "00000000000000000000", 00:25:27.848 "firmware_revision": "24.01.1", 00:25:27.848 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:27.848 "oacs": { 00:25:27.848 "security": 0, 00:25:27.848 "format": 0, 00:25:27.848 "firmware": 0, 00:25:27.848 "ns_manage": 0 00:25:27.848 }, 00:25:27.848 "multi_ctrlr": true, 00:25:27.848 "ana_reporting": false 00:25:27.848 }, 00:25:27.848 "vs": { 00:25:27.848 "nvme_version": "1.3" 00:25:27.848 }, 00:25:27.848 "ns_data": { 00:25:27.848 "id": 1, 00:25:27.848 "can_share": true 00:25:27.848 } 00:25:27.848 } 00:25:27.848 ], 00:25:27.848 "mp_policy": "active_passive" 00:25:27.848 } 00:25:27.848 } 00:25:27.848 ] 00:25:27.848 03:57:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:27.848 03:57:46 -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:25:27.848 03:57:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:27.848 03:57:46 -- common/autotest_common.sh@10 -- # set +x 00:25:27.848 [2024-07-14 03:57:46.642742] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:27.848 [2024-07-14 03:57:46.642833] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19760b0 (9): Bad file descriptor 00:25:27.848 [2024-07-14 03:57:46.775020] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:25:27.848 03:57:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:27.848 03:57:46 -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:25:27.848 03:57:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:27.848 03:57:46 -- common/autotest_common.sh@10 -- # set +x 00:25:27.848 [ 00:25:27.848 { 00:25:27.848 "name": "nvme0n1", 00:25:27.848 "aliases": [ 00:25:27.848 "a8822ca2-b8a1-4c8f-b117-ba9ca5ec5a0b" 00:25:27.848 ], 00:25:27.848 "product_name": "NVMe disk", 00:25:27.848 "block_size": 512, 00:25:27.848 "num_blocks": 2097152, 00:25:27.848 "uuid": "a8822ca2-b8a1-4c8f-b117-ba9ca5ec5a0b", 00:25:27.848 "assigned_rate_limits": { 00:25:27.848 "rw_ios_per_sec": 0, 00:25:27.848 "rw_mbytes_per_sec": 0, 00:25:27.848 "r_mbytes_per_sec": 0, 00:25:27.848 "w_mbytes_per_sec": 0 00:25:27.848 }, 00:25:27.848 "claimed": false, 00:25:27.848 "zoned": false, 00:25:27.848 "supported_io_types": { 00:25:27.848 "read": true, 00:25:27.848 "write": true, 00:25:27.848 "unmap": false, 00:25:27.848 "write_zeroes": true, 00:25:27.848 "flush": true, 00:25:27.848 "reset": true, 00:25:27.848 "compare": true, 00:25:27.848 "compare_and_write": true, 00:25:27.848 "abort": true, 00:25:27.848 "nvme_admin": true, 00:25:27.848 "nvme_io": true 00:25:27.848 }, 00:25:27.848 "driver_specific": { 00:25:27.848 "nvme": [ 00:25:27.848 { 00:25:27.848 "trid": { 00:25:27.848 "trtype": "TCP", 00:25:27.848 "adrfam": "IPv4", 00:25:27.848 "traddr": "10.0.0.2", 00:25:27.848 "trsvcid": "4420", 00:25:27.848 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:25:27.848 }, 00:25:27.848 "ctrlr_data": { 00:25:27.848 "cntlid": 2, 00:25:27.848 "vendor_id": "0x8086", 00:25:27.848 "model_number": "SPDK bdev Controller", 00:25:27.848 "serial_number": "00000000000000000000", 00:25:27.848 "firmware_revision": "24.01.1", 00:25:27.848 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:28.107 "oacs": { 00:25:28.107 "security": 0, 00:25:28.107 "format": 0, 00:25:28.107 "firmware": 0, 00:25:28.107 "ns_manage": 0 00:25:28.107 }, 00:25:28.107 "multi_ctrlr": true, 00:25:28.107 "ana_reporting": false 00:25:28.107 }, 00:25:28.107 "vs": { 00:25:28.107 "nvme_version": "1.3" 00:25:28.107 }, 00:25:28.107 "ns_data": { 00:25:28.107 "id": 1, 00:25:28.107 "can_share": true 00:25:28.107 } 00:25:28.107 } 00:25:28.107 ], 00:25:28.107 "mp_policy": "active_passive" 00:25:28.107 } 00:25:28.107 } 00:25:28.107 ] 00:25:28.107 03:57:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:28.107 03:57:46 -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:28.107 03:57:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:28.107 03:57:46 -- common/autotest_common.sh@10 -- # set +x 00:25:28.107 03:57:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:28.107 03:57:46 -- host/async_init.sh@53 -- # mktemp 00:25:28.107 03:57:46 -- host/async_init.sh@53 -- # key_path=/tmp/tmp.aVe7Y3GIYf 00:25:28.107 03:57:46 -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:25:28.107 03:57:46 -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.aVe7Y3GIYf 00:25:28.107 03:57:46 -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:25:28.107 03:57:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:28.107 03:57:46 -- common/autotest_common.sh@10 -- # set +x 00:25:28.107 03:57:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:28.107 03:57:46 -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:25:28.107 03:57:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:28.107 03:57:46 -- common/autotest_common.sh@10 -- # set +x 00:25:28.107 [2024-07-14 03:57:46.819349] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:28.107 [2024-07-14 03:57:46.819480] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:28.107 03:57:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:28.107 03:57:46 -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.aVe7Y3GIYf 00:25:28.107 03:57:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:28.107 03:57:46 -- common/autotest_common.sh@10 -- # set +x 00:25:28.107 03:57:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:28.107 03:57:46 -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.aVe7Y3GIYf 00:25:28.107 03:57:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:28.108 03:57:46 -- common/autotest_common.sh@10 -- # set +x 00:25:28.108 [2024-07-14 03:57:46.835379] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:28.108 nvme0n1 00:25:28.108 03:57:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:28.108 03:57:46 -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:25:28.108 03:57:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:28.108 03:57:46 -- common/autotest_common.sh@10 -- # set +x 00:25:28.108 [ 00:25:28.108 { 00:25:28.108 "name": "nvme0n1", 00:25:28.108 "aliases": [ 00:25:28.108 "a8822ca2-b8a1-4c8f-b117-ba9ca5ec5a0b" 00:25:28.108 ], 00:25:28.108 "product_name": "NVMe disk", 00:25:28.108 "block_size": 512, 00:25:28.108 "num_blocks": 2097152, 00:25:28.108 "uuid": "a8822ca2-b8a1-4c8f-b117-ba9ca5ec5a0b", 00:25:28.108 "assigned_rate_limits": { 00:25:28.108 "rw_ios_per_sec": 0, 00:25:28.108 "rw_mbytes_per_sec": 0, 00:25:28.108 "r_mbytes_per_sec": 0, 00:25:28.108 "w_mbytes_per_sec": 0 00:25:28.108 }, 00:25:28.108 "claimed": false, 00:25:28.108 "zoned": false, 00:25:28.108 "supported_io_types": { 00:25:28.108 "read": true, 00:25:28.108 "write": true, 00:25:28.108 "unmap": false, 00:25:28.108 "write_zeroes": true, 00:25:28.108 "flush": true, 00:25:28.108 "reset": true, 00:25:28.108 "compare": true, 00:25:28.108 "compare_and_write": true, 00:25:28.108 "abort": true, 00:25:28.108 "nvme_admin": true, 00:25:28.108 "nvme_io": true 00:25:28.108 }, 00:25:28.108 "driver_specific": { 00:25:28.108 "nvme": [ 00:25:28.108 { 00:25:28.108 "trid": { 00:25:28.108 "trtype": "TCP", 00:25:28.108 "adrfam": "IPv4", 00:25:28.108 "traddr": "10.0.0.2", 00:25:28.108 "trsvcid": "4421", 00:25:28.108 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:25:28.108 }, 00:25:28.108 "ctrlr_data": { 00:25:28.108 "cntlid": 3, 00:25:28.108 "vendor_id": "0x8086", 00:25:28.108 "model_number": "SPDK bdev Controller", 00:25:28.108 "serial_number": "00000000000000000000", 00:25:28.108 "firmware_revision": "24.01.1", 00:25:28.108 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:28.108 "oacs": { 00:25:28.108 "security": 0, 00:25:28.108 "format": 0, 00:25:28.108 "firmware": 0, 00:25:28.108 "ns_manage": 0 00:25:28.108 }, 00:25:28.108 "multi_ctrlr": true, 00:25:28.108 "ana_reporting": false 00:25:28.108 }, 00:25:28.108 "vs": 
{ 00:25:28.108 "nvme_version": "1.3" 00:25:28.108 }, 00:25:28.108 "ns_data": { 00:25:28.108 "id": 1, 00:25:28.108 "can_share": true 00:25:28.108 } 00:25:28.108 } 00:25:28.108 ], 00:25:28.108 "mp_policy": "active_passive" 00:25:28.108 } 00:25:28.108 } 00:25:28.108 ] 00:25:28.108 03:57:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:28.108 03:57:46 -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:28.108 03:57:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:28.108 03:57:46 -- common/autotest_common.sh@10 -- # set +x 00:25:28.108 03:57:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:28.108 03:57:46 -- host/async_init.sh@75 -- # rm -f /tmp/tmp.aVe7Y3GIYf 00:25:28.108 03:57:46 -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:25:28.108 03:57:46 -- host/async_init.sh@78 -- # nvmftestfini 00:25:28.108 03:57:46 -- nvmf/common.sh@476 -- # nvmfcleanup 00:25:28.108 03:57:46 -- nvmf/common.sh@116 -- # sync 00:25:28.108 03:57:46 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:25:28.108 03:57:46 -- nvmf/common.sh@119 -- # set +e 00:25:28.108 03:57:46 -- nvmf/common.sh@120 -- # for i in {1..20} 00:25:28.108 03:57:46 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:25:28.108 rmmod nvme_tcp 00:25:28.108 rmmod nvme_fabrics 00:25:28.108 rmmod nvme_keyring 00:25:28.108 03:57:46 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:25:28.108 03:57:46 -- nvmf/common.sh@123 -- # set -e 00:25:28.108 03:57:46 -- nvmf/common.sh@124 -- # return 0 00:25:28.108 03:57:46 -- nvmf/common.sh@477 -- # '[' -n 2463969 ']' 00:25:28.108 03:57:46 -- nvmf/common.sh@478 -- # killprocess 2463969 00:25:28.108 03:57:46 -- common/autotest_common.sh@926 -- # '[' -z 2463969 ']' 00:25:28.108 03:57:46 -- common/autotest_common.sh@930 -- # kill -0 2463969 00:25:28.108 03:57:46 -- common/autotest_common.sh@931 -- # uname 00:25:28.108 03:57:46 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:28.108 03:57:46 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2463969 00:25:28.108 03:57:47 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:25:28.108 03:57:47 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:25:28.108 03:57:47 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2463969' 00:25:28.108 killing process with pid 2463969 00:25:28.108 03:57:47 -- common/autotest_common.sh@945 -- # kill 2463969 00:25:28.108 03:57:47 -- common/autotest_common.sh@950 -- # wait 2463969 00:25:28.368 03:57:47 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:25:28.368 03:57:47 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:25:28.368 03:57:47 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:25:28.368 03:57:47 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:28.368 03:57:47 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:25:28.368 03:57:47 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:28.368 03:57:47 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:28.368 03:57:47 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:30.927 03:57:49 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:25:30.927 00:25:30.927 real 0m6.111s 00:25:30.927 user 0m2.936s 00:25:30.927 sys 0m1.765s 00:25:30.927 03:57:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:30.927 03:57:49 -- common/autotest_common.sh@10 -- # set +x 00:25:30.927 ************************************ 00:25:30.927 END TEST nvmf_async_init 00:25:30.927 
************************************ 00:25:30.927 03:57:49 -- nvmf/nvmf.sh@94 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:25:30.927 03:57:49 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:25:30.927 03:57:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:30.927 03:57:49 -- common/autotest_common.sh@10 -- # set +x 00:25:30.927 ************************************ 00:25:30.927 START TEST dma 00:25:30.927 ************************************ 00:25:30.927 03:57:49 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:25:30.927 * Looking for test storage... 00:25:30.927 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:30.927 03:57:49 -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:30.927 03:57:49 -- nvmf/common.sh@7 -- # uname -s 00:25:30.927 03:57:49 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:30.927 03:57:49 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:30.927 03:57:49 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:30.927 03:57:49 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:30.927 03:57:49 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:30.927 03:57:49 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:30.927 03:57:49 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:30.927 03:57:49 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:30.927 03:57:49 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:30.927 03:57:49 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:30.927 03:57:49 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:30.927 03:57:49 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:30.927 03:57:49 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:30.927 03:57:49 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:30.927 03:57:49 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:30.927 03:57:49 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:30.927 03:57:49 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:30.927 03:57:49 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:30.927 03:57:49 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:30.927 03:57:49 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:30.927 03:57:49 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:30.927 03:57:49 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:30.927 03:57:49 -- paths/export.sh@5 -- # export PATH 00:25:30.927 03:57:49 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:30.927 03:57:49 -- nvmf/common.sh@46 -- # : 0 00:25:30.927 03:57:49 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:25:30.927 03:57:49 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:25:30.927 03:57:49 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:25:30.927 03:57:49 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:30.927 03:57:49 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:30.927 03:57:49 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:25:30.927 03:57:49 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:25:30.927 03:57:49 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:25:30.927 03:57:49 -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:25:30.927 03:57:49 -- host/dma.sh@13 -- # exit 0 00:25:30.927 00:25:30.927 real 0m0.066s 00:25:30.927 user 0m0.024s 00:25:30.927 sys 0m0.047s 00:25:30.927 03:57:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:30.927 03:57:49 -- common/autotest_common.sh@10 -- # set +x 00:25:30.927 ************************************ 00:25:30.927 END TEST dma 00:25:30.927 ************************************ 00:25:30.927 03:57:49 -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:25:30.927 03:57:49 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:25:30.927 03:57:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:30.927 03:57:49 -- common/autotest_common.sh@10 -- # set +x 00:25:30.927 ************************************ 00:25:30.927 START TEST nvmf_identify 00:25:30.927 ************************************ 00:25:30.927 03:57:49 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:25:30.927 * Looking for 
test storage... 00:25:30.927 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:30.927 03:57:49 -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:30.927 03:57:49 -- nvmf/common.sh@7 -- # uname -s 00:25:30.927 03:57:49 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:30.927 03:57:49 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:30.927 03:57:49 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:30.927 03:57:49 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:30.927 03:57:49 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:30.927 03:57:49 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:30.927 03:57:49 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:30.927 03:57:49 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:30.927 03:57:49 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:30.927 03:57:49 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:30.928 03:57:49 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:30.928 03:57:49 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:30.928 03:57:49 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:30.928 03:57:49 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:30.928 03:57:49 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:30.928 03:57:49 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:30.928 03:57:49 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:30.928 03:57:49 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:30.928 03:57:49 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:30.928 03:57:49 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:30.928 03:57:49 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:30.928 03:57:49 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:30.928 03:57:49 -- paths/export.sh@5 -- # export PATH 00:25:30.928 03:57:49 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:30.928 03:57:49 -- nvmf/common.sh@46 -- # : 0 00:25:30.928 03:57:49 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:25:30.928 03:57:49 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:25:30.928 03:57:49 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:25:30.928 03:57:49 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:30.928 03:57:49 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:30.928 03:57:49 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:25:30.928 03:57:49 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:25:30.928 03:57:49 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:25:30.928 03:57:49 -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:30.928 03:57:49 -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:30.928 03:57:49 -- host/identify.sh@14 -- # nvmftestinit 00:25:30.928 03:57:49 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:25:30.928 03:57:49 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:30.928 03:57:49 -- nvmf/common.sh@436 -- # prepare_net_devs 00:25:30.928 03:57:49 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:25:30.928 03:57:49 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:25:30.928 03:57:49 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:30.928 03:57:49 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:30.928 03:57:49 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:30.928 03:57:49 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:25:30.928 03:57:49 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:25:30.928 03:57:49 -- nvmf/common.sh@284 -- # xtrace_disable 00:25:30.928 03:57:49 -- common/autotest_common.sh@10 -- # set +x 00:25:32.828 03:57:51 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:25:32.828 03:57:51 -- nvmf/common.sh@290 -- # pci_devs=() 00:25:32.828 03:57:51 -- nvmf/common.sh@290 -- # local -a pci_devs 00:25:32.828 03:57:51 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:25:32.828 03:57:51 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:25:32.828 03:57:51 -- nvmf/common.sh@292 -- # pci_drivers=() 00:25:32.828 03:57:51 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:25:32.828 03:57:51 -- nvmf/common.sh@294 -- # net_devs=() 00:25:32.828 03:57:51 -- nvmf/common.sh@294 -- # local -ga net_devs 00:25:32.828 03:57:51 -- nvmf/common.sh@295 
-- # e810=() 00:25:32.828 03:57:51 -- nvmf/common.sh@295 -- # local -ga e810 00:25:32.828 03:57:51 -- nvmf/common.sh@296 -- # x722=() 00:25:32.828 03:57:51 -- nvmf/common.sh@296 -- # local -ga x722 00:25:32.828 03:57:51 -- nvmf/common.sh@297 -- # mlx=() 00:25:32.828 03:57:51 -- nvmf/common.sh@297 -- # local -ga mlx 00:25:32.828 03:57:51 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:32.828 03:57:51 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:32.828 03:57:51 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:32.828 03:57:51 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:32.828 03:57:51 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:32.828 03:57:51 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:32.828 03:57:51 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:32.828 03:57:51 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:32.828 03:57:51 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:32.828 03:57:51 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:32.828 03:57:51 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:32.828 03:57:51 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:25:32.828 03:57:51 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:25:32.828 03:57:51 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:25:32.828 03:57:51 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:25:32.828 03:57:51 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:25:32.828 03:57:51 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:25:32.828 03:57:51 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:32.828 03:57:51 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:32.828 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:32.828 03:57:51 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:32.828 03:57:51 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:32.828 03:57:51 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:32.828 03:57:51 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:32.828 03:57:51 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:32.828 03:57:51 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:32.828 03:57:51 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:32.828 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:32.828 03:57:51 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:32.828 03:57:51 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:32.828 03:57:51 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:32.828 03:57:51 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:32.828 03:57:51 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:32.828 03:57:51 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:25:32.828 03:57:51 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:25:32.828 03:57:51 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:25:32.828 03:57:51 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:32.828 03:57:51 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:32.828 03:57:51 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:32.828 03:57:51 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:32.828 03:57:51 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:32.828 Found 
net devices under 0000:0a:00.0: cvl_0_0 00:25:32.828 03:57:51 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:32.828 03:57:51 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:32.828 03:57:51 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:32.828 03:57:51 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:32.828 03:57:51 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:32.828 03:57:51 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:32.828 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:32.828 03:57:51 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:32.828 03:57:51 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:25:32.828 03:57:51 -- nvmf/common.sh@402 -- # is_hw=yes 00:25:32.828 03:57:51 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:25:32.828 03:57:51 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:25:32.828 03:57:51 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:25:32.828 03:57:51 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:32.828 03:57:51 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:32.828 03:57:51 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:32.828 03:57:51 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:25:32.828 03:57:51 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:32.828 03:57:51 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:32.828 03:57:51 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:25:32.828 03:57:51 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:32.828 03:57:51 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:32.828 03:57:51 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:25:32.829 03:57:51 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:25:32.829 03:57:51 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:25:32.829 03:57:51 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:32.829 03:57:51 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:32.829 03:57:51 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:32.829 03:57:51 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:25:32.829 03:57:51 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:32.829 03:57:51 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:32.829 03:57:51 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:32.829 03:57:51 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:25:32.829 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:32.829 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.127 ms 00:25:32.829 00:25:32.829 --- 10.0.0.2 ping statistics --- 00:25:32.829 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:32.829 rtt min/avg/max/mdev = 0.127/0.127/0.127/0.000 ms 00:25:32.829 03:57:51 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:32.829 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:32.829 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.185 ms 00:25:32.829 00:25:32.829 --- 10.0.0.1 ping statistics --- 00:25:32.829 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:32.829 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms 00:25:32.829 03:57:51 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:32.829 03:57:51 -- nvmf/common.sh@410 -- # return 0 00:25:32.829 03:57:51 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:25:32.829 03:57:51 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:32.829 03:57:51 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:25:32.829 03:57:51 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:25:32.829 03:57:51 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:32.829 03:57:51 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:25:32.829 03:57:51 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:25:32.829 03:57:51 -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:25:32.829 03:57:51 -- common/autotest_common.sh@712 -- # xtrace_disable 00:25:32.829 03:57:51 -- common/autotest_common.sh@10 -- # set +x 00:25:32.829 03:57:51 -- host/identify.sh@19 -- # nvmfpid=2466119 00:25:32.829 03:57:51 -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:32.829 03:57:51 -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:32.829 03:57:51 -- host/identify.sh@23 -- # waitforlisten 2466119 00:25:32.829 03:57:51 -- common/autotest_common.sh@819 -- # '[' -z 2466119 ']' 00:25:32.829 03:57:51 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:32.829 03:57:51 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:32.829 03:57:51 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:32.829 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:32.829 03:57:51 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:32.829 03:57:51 -- common/autotest_common.sh@10 -- # set +x 00:25:32.829 [2024-07-14 03:57:51.499810] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:25:32.829 [2024-07-14 03:57:51.499928] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:32.829 EAL: No free 2048 kB hugepages reported on node 1 00:25:32.829 [2024-07-14 03:57:51.570461] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:32.829 [2024-07-14 03:57:51.660645] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:32.829 [2024-07-14 03:57:51.660815] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:32.829 [2024-07-14 03:57:51.660834] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:32.829 [2024-07-14 03:57:51.660847] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
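[editor's note] The nvmf_tcp_init steps traced above build a small two-endpoint TCP topology out of the two ice ports found earlier: cvl_0_0 is moved into a network namespace and becomes the target side (10.0.0.2), while cvl_0_1 stays in the root namespace as the initiator side (10.0.0.1). A condensed sketch of the same setup, with interface names and addresses taken from this run (they will differ on other nodes), is:

    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk                                        # target namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address (root ns)
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address (inside ns)
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # let NVMe/TCP traffic back in
    ping -c 1 10.0.0.2                                                  # initiator -> target sanity check
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> initiator sanity check

The nvmf_tgt application is then launched under ip netns exec cvl_0_0_ns_spdk (as the log shows next), so its TCP listener binds inside the target namespace while the identify tools later run from the root namespace.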
00:25:32.829 [2024-07-14 03:57:51.660941] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:32.829 [2024-07-14 03:57:51.660997] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:32.829 [2024-07-14 03:57:51.661122] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:32.829 [2024-07-14 03:57:51.661124] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:33.766 03:57:52 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:33.766 03:57:52 -- common/autotest_common.sh@852 -- # return 0 00:25:33.766 03:57:52 -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:33.766 03:57:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:33.766 03:57:52 -- common/autotest_common.sh@10 -- # set +x 00:25:33.766 [2024-07-14 03:57:52.471423] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:33.766 03:57:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:33.766 03:57:52 -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:25:33.766 03:57:52 -- common/autotest_common.sh@718 -- # xtrace_disable 00:25:33.766 03:57:52 -- common/autotest_common.sh@10 -- # set +x 00:25:33.766 03:57:52 -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:33.766 03:57:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:33.766 03:57:52 -- common/autotest_common.sh@10 -- # set +x 00:25:33.766 Malloc0 00:25:33.766 03:57:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:33.766 03:57:52 -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:33.766 03:57:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:33.766 03:57:52 -- common/autotest_common.sh@10 -- # set +x 00:25:33.766 03:57:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:33.766 03:57:52 -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:25:33.766 03:57:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:33.766 03:57:52 -- common/autotest_common.sh@10 -- # set +x 00:25:33.766 03:57:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:33.766 03:57:52 -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:33.766 03:57:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:33.766 03:57:52 -- common/autotest_common.sh@10 -- # set +x 00:25:33.766 [2024-07-14 03:57:52.542824] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:33.766 03:57:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:33.766 03:57:52 -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:25:33.766 03:57:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:33.766 03:57:52 -- common/autotest_common.sh@10 -- # set +x 00:25:33.766 03:57:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:33.766 03:57:52 -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:25:33.766 03:57:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:33.766 03:57:52 -- common/autotest_common.sh@10 -- # set +x 00:25:33.766 [2024-07-14 03:57:52.558618] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:25:33.766 [ 
00:25:33.766 { 00:25:33.766 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:25:33.766 "subtype": "Discovery", 00:25:33.766 "listen_addresses": [ 00:25:33.766 { 00:25:33.766 "transport": "TCP", 00:25:33.766 "trtype": "TCP", 00:25:33.766 "adrfam": "IPv4", 00:25:33.766 "traddr": "10.0.0.2", 00:25:33.766 "trsvcid": "4420" 00:25:33.766 } 00:25:33.766 ], 00:25:33.766 "allow_any_host": true, 00:25:33.766 "hosts": [] 00:25:33.766 }, 00:25:33.766 { 00:25:33.766 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:33.766 "subtype": "NVMe", 00:25:33.766 "listen_addresses": [ 00:25:33.766 { 00:25:33.766 "transport": "TCP", 00:25:33.766 "trtype": "TCP", 00:25:33.766 "adrfam": "IPv4", 00:25:33.766 "traddr": "10.0.0.2", 00:25:33.766 "trsvcid": "4420" 00:25:33.766 } 00:25:33.766 ], 00:25:33.766 "allow_any_host": true, 00:25:33.766 "hosts": [], 00:25:33.766 "serial_number": "SPDK00000000000001", 00:25:33.766 "model_number": "SPDK bdev Controller", 00:25:33.766 "max_namespaces": 32, 00:25:33.766 "min_cntlid": 1, 00:25:33.766 "max_cntlid": 65519, 00:25:33.766 "namespaces": [ 00:25:33.766 { 00:25:33.766 "nsid": 1, 00:25:33.766 "bdev_name": "Malloc0", 00:25:33.766 "name": "Malloc0", 00:25:33.766 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:25:33.766 "eui64": "ABCDEF0123456789", 00:25:33.766 "uuid": "70e3c19e-8279-424a-be50-34051b0108d6" 00:25:33.766 } 00:25:33.766 ] 00:25:33.766 } 00:25:33.766 ] 00:25:33.766 03:57:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:33.766 03:57:52 -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:25:33.766 [2024-07-14 03:57:52.579653] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
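[editor's note] The subsystem state dumped above is the result of the rpc_cmd calls a few lines earlier; in the test harness rpc_cmd is effectively a wrapper around scripts/rpc.py talking to the target's /var/tmp/spdk.sock socket. Outside the harness the same target configuration could be applied with rpc.py directly; this is only a sketch, with the flags copied verbatim from this run and the default socket path assumed:

    scripts/rpc.py -s /var/tmp/spdk.sock nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py -s /var/tmp/spdk.sock bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py -s /var/tmp/spdk.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py -s /var/tmp/spdk.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
        --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
    scripts/rpc.py -s /var/tmp/spdk.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py -s /var/tmp/spdk.sock nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py -s /var/tmp/spdk.sock nvmf_get_subsystems     # should return the JSON shown above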
00:25:33.766 [2024-07-14 03:57:52.579691] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2466279 ] 00:25:33.766 EAL: No free 2048 kB hugepages reported on node 1 00:25:33.766 [2024-07-14 03:57:52.614128] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:25:33.766 [2024-07-14 03:57:52.614207] nvme_tcp.c:2244:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:25:33.766 [2024-07-14 03:57:52.614217] nvme_tcp.c:2248:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:25:33.766 [2024-07-14 03:57:52.614233] nvme_tcp.c:2266:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:25:33.766 [2024-07-14 03:57:52.614246] sock.c: 334:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:25:33.766 [2024-07-14 03:57:52.614656] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:25:33.766 [2024-07-14 03:57:52.614719] nvme_tcp.c:1487:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x106e5a0 0 00:25:33.766 [2024-07-14 03:57:52.628884] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:25:33.766 [2024-07-14 03:57:52.628907] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:25:33.766 [2024-07-14 03:57:52.628917] nvme_tcp.c:1533:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:25:33.766 [2024-07-14 03:57:52.628924] nvme_tcp.c:1534:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:25:33.766 [2024-07-14 03:57:52.628975] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:33.766 [2024-07-14 03:57:52.628989] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:33.766 [2024-07-14 03:57:52.628997] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x106e5a0) 00:25:33.766 [2024-07-14 03:57:52.629016] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:25:33.766 [2024-07-14 03:57:52.629044] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10d93e0, cid 0, qid 0 00:25:33.766 [2024-07-14 03:57:52.636880] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:33.766 [2024-07-14 03:57:52.636899] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:33.766 [2024-07-14 03:57:52.636907] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:33.766 [2024-07-14 03:57:52.636915] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10d93e0) on tqpair=0x106e5a0 00:25:33.766 [2024-07-14 03:57:52.636936] nvme_fabric.c: 620:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:25:33.767 [2024-07-14 03:57:52.636948] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:25:33.767 [2024-07-14 03:57:52.636958] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:25:33.767 [2024-07-14 03:57:52.636978] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:33.767 [2024-07-14 03:57:52.636987] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: 
*DEBUG*: enter 00:25:33.767 [2024-07-14 03:57:52.636994] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x106e5a0) 00:25:33.767 [2024-07-14 03:57:52.637005] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.767 [2024-07-14 03:57:52.637029] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10d93e0, cid 0, qid 0 00:25:33.767 [2024-07-14 03:57:52.637213] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:33.767 [2024-07-14 03:57:52.637228] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:33.767 [2024-07-14 03:57:52.637235] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:33.767 [2024-07-14 03:57:52.637242] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10d93e0) on tqpair=0x106e5a0 00:25:33.767 [2024-07-14 03:57:52.637254] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:25:33.767 [2024-07-14 03:57:52.637267] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:25:33.767 [2024-07-14 03:57:52.637280] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:33.767 [2024-07-14 03:57:52.637288] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:33.767 [2024-07-14 03:57:52.637295] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x106e5a0) 00:25:33.767 [2024-07-14 03:57:52.637306] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.767 [2024-07-14 03:57:52.637342] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10d93e0, cid 0, qid 0 00:25:33.767 [2024-07-14 03:57:52.637568] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:33.767 [2024-07-14 03:57:52.637584] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:33.767 [2024-07-14 03:57:52.637596] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:33.767 [2024-07-14 03:57:52.637603] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10d93e0) on tqpair=0x106e5a0 00:25:33.767 [2024-07-14 03:57:52.637614] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:25:33.767 [2024-07-14 03:57:52.637629] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:25:33.767 [2024-07-14 03:57:52.637642] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:33.767 [2024-07-14 03:57:52.637649] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:33.767 [2024-07-14 03:57:52.637656] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x106e5a0) 00:25:33.767 [2024-07-14 03:57:52.637667] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.767 [2024-07-14 03:57:52.637688] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10d93e0, cid 0, qid 0 00:25:33.767 [2024-07-14 03:57:52.637874] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:33.767 [2024-07-14 
03:57:52.637890] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:33.767 [2024-07-14 03:57:52.637897] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:33.767 [2024-07-14 03:57:52.637904] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10d93e0) on tqpair=0x106e5a0 00:25:33.767 [2024-07-14 03:57:52.637915] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:25:33.767 [2024-07-14 03:57:52.637932] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:33.767 [2024-07-14 03:57:52.637941] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:33.767 [2024-07-14 03:57:52.637948] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x106e5a0) 00:25:33.767 [2024-07-14 03:57:52.637959] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.767 [2024-07-14 03:57:52.637980] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10d93e0, cid 0, qid 0 00:25:33.767 [2024-07-14 03:57:52.638127] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:33.767 [2024-07-14 03:57:52.638142] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:33.767 [2024-07-14 03:57:52.638149] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:33.767 [2024-07-14 03:57:52.638156] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10d93e0) on tqpair=0x106e5a0 00:25:33.767 [2024-07-14 03:57:52.638167] nvme_ctrlr.c:3737:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:25:33.767 [2024-07-14 03:57:52.638176] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:25:33.767 [2024-07-14 03:57:52.638189] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:25:33.767 [2024-07-14 03:57:52.638299] nvme_ctrlr.c:3930:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:25:33.767 [2024-07-14 03:57:52.638309] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:25:33.767 [2024-07-14 03:57:52.638324] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:33.767 [2024-07-14 03:57:52.638332] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:33.767 [2024-07-14 03:57:52.638338] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x106e5a0) 00:25:33.767 [2024-07-14 03:57:52.638349] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.767 [2024-07-14 03:57:52.638375] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10d93e0, cid 0, qid 0 00:25:33.767 [2024-07-14 03:57:52.638559] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:33.767 [2024-07-14 03:57:52.638574] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:33.767 [2024-07-14 03:57:52.638581] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: 
*DEBUG*: enter 00:25:33.767 [2024-07-14 03:57:52.638588] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10d93e0) on tqpair=0x106e5a0 00:25:33.767 [2024-07-14 03:57:52.638599] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:25:33.767 [2024-07-14 03:57:52.638616] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:33.767 [2024-07-14 03:57:52.638625] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:33.767 [2024-07-14 03:57:52.638631] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x106e5a0) 00:25:33.767 [2024-07-14 03:57:52.638642] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.767 [2024-07-14 03:57:52.638663] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10d93e0, cid 0, qid 0 00:25:33.767 [2024-07-14 03:57:52.638802] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:33.767 [2024-07-14 03:57:52.638817] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:33.767 [2024-07-14 03:57:52.638824] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:33.767 [2024-07-14 03:57:52.638831] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10d93e0) on tqpair=0x106e5a0 00:25:33.767 [2024-07-14 03:57:52.638841] nvme_ctrlr.c:3772:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:25:33.767 [2024-07-14 03:57:52.638850] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:25:33.767 [2024-07-14 03:57:52.638863] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:25:33.767 [2024-07-14 03:57:52.638899] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:25:33.767 [2024-07-14 03:57:52.638918] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:33.767 [2024-07-14 03:57:52.638926] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:33.767 [2024-07-14 03:57:52.638933] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x106e5a0) 00:25:33.767 [2024-07-14 03:57:52.638944] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.767 [2024-07-14 03:57:52.638967] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10d93e0, cid 0, qid 0 00:25:33.767 [2024-07-14 03:57:52.639139] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:33.767 [2024-07-14 03:57:52.639160] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:33.767 [2024-07-14 03:57:52.639167] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:33.767 [2024-07-14 03:57:52.639175] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x106e5a0): datao=0, datal=4096, cccid=0 00:25:33.767 [2024-07-14 03:57:52.639183] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x10d93e0) on tqpair(0x106e5a0): 
expected_datao=0, payload_size=4096 00:25:33.768 [2024-07-14 03:57:52.639230] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:33.768 [2024-07-14 03:57:52.639241] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:33.768 [2024-07-14 03:57:52.639379] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:33.768 [2024-07-14 03:57:52.639391] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:33.768 [2024-07-14 03:57:52.639398] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:33.768 [2024-07-14 03:57:52.639409] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10d93e0) on tqpair=0x106e5a0 00:25:33.768 [2024-07-14 03:57:52.639424] nvme_ctrlr.c:1972:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:25:33.768 [2024-07-14 03:57:52.639434] nvme_ctrlr.c:1976:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:25:33.768 [2024-07-14 03:57:52.639442] nvme_ctrlr.c:1979:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:25:33.768 [2024-07-14 03:57:52.639451] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:25:33.768 [2024-07-14 03:57:52.639459] nvme_ctrlr.c:2018:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:25:33.768 [2024-07-14 03:57:52.639468] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:25:33.768 [2024-07-14 03:57:52.639487] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:25:33.768 [2024-07-14 03:57:52.639501] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:33.768 [2024-07-14 03:57:52.639509] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:33.768 [2024-07-14 03:57:52.639516] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x106e5a0) 00:25:33.768 [2024-07-14 03:57:52.639527] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:25:33.768 [2024-07-14 03:57:52.639563] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10d93e0, cid 0, qid 0 00:25:33.768 [2024-07-14 03:57:52.639796] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:33.768 [2024-07-14 03:57:52.639812] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:33.768 [2024-07-14 03:57:52.639819] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:33.768 [2024-07-14 03:57:52.639826] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10d93e0) on tqpair=0x106e5a0 00:25:33.768 [2024-07-14 03:57:52.639840] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:33.768 [2024-07-14 03:57:52.639848] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:33.768 [2024-07-14 03:57:52.639855] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x106e5a0) 00:25:33.768 [2024-07-14 03:57:52.639875] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 
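[editor's note] The *DEBUG* lines above and below record the fabrics initialization state machine of the discovery controller: connect adminq, read VS and CAP, check and clear CC.EN, wait for CSTS.RDY = 0, write CC.EN = 1, wait for CSTS.RDY = 1, identify controller, configure AER, then set the keep-alive timeout. When reading a saved console log it can help to strip the per-PDU noise and keep only those transitions; a small convenience one-liner (log file name assumed) is:

    grep -o 'setting state to [^(]*' nvmf-tcp-autotest.log | sed 's/ *$//' | uniq -c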
00:25:33.768 [2024-07-14 03:57:52.639887] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:33.768 [2024-07-14 03:57:52.639895] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:33.768 [2024-07-14 03:57:52.639901] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x106e5a0) 00:25:33.768 [2024-07-14 03:57:52.639910] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:33.768 [2024-07-14 03:57:52.639920] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:33.768 [2024-07-14 03:57:52.639927] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:33.768 [2024-07-14 03:57:52.639933] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x106e5a0) 00:25:33.768 [2024-07-14 03:57:52.639942] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:33.768 [2024-07-14 03:57:52.639952] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:33.768 [2024-07-14 03:57:52.639959] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:33.768 [2024-07-14 03:57:52.639965] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x106e5a0) 00:25:33.768 [2024-07-14 03:57:52.639974] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:33.768 [2024-07-14 03:57:52.639987] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:25:33.768 [2024-07-14 03:57:52.640007] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:25:33.768 [2024-07-14 03:57:52.640019] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:33.768 [2024-07-14 03:57:52.640027] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:33.768 [2024-07-14 03:57:52.640033] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x106e5a0) 00:25:33.768 [2024-07-14 03:57:52.640044] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.768 [2024-07-14 03:57:52.640067] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10d93e0, cid 0, qid 0 00:25:33.768 [2024-07-14 03:57:52.640079] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10d9540, cid 1, qid 0 00:25:33.768 [2024-07-14 03:57:52.640087] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10d96a0, cid 2, qid 0 00:25:33.768 [2024-07-14 03:57:52.640095] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10d9800, cid 3, qid 0 00:25:33.768 [2024-07-14 03:57:52.640103] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10d9960, cid 4, qid 0 00:25:33.768 [2024-07-14 03:57:52.640286] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:33.768 [2024-07-14 03:57:52.640301] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:33.768 [2024-07-14 03:57:52.640308] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:33.768 [2024-07-14 03:57:52.640315] nvme_tcp.c: 
857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10d9960) on tqpair=0x106e5a0 00:25:33.768 [2024-07-14 03:57:52.640325] nvme_ctrlr.c:2890:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:25:33.768 [2024-07-14 03:57:52.640335] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:25:33.768 [2024-07-14 03:57:52.640353] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:33.768 [2024-07-14 03:57:52.640376] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:33.768 [2024-07-14 03:57:52.640383] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x106e5a0) 00:25:33.768 [2024-07-14 03:57:52.640393] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.768 [2024-07-14 03:57:52.640414] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10d9960, cid 4, qid 0 00:25:33.768 [2024-07-14 03:57:52.640618] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:33.768 [2024-07-14 03:57:52.640631] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:33.768 [2024-07-14 03:57:52.640638] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:33.768 [2024-07-14 03:57:52.640644] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x106e5a0): datao=0, datal=4096, cccid=4 00:25:33.768 [2024-07-14 03:57:52.640652] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x10d9960) on tqpair(0x106e5a0): expected_datao=0, payload_size=4096 00:25:33.768 [2024-07-14 03:57:52.640688] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:33.768 [2024-07-14 03:57:52.640697] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:33.768 [2024-07-14 03:57:52.684886] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:33.768 [2024-07-14 03:57:52.684906] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:33.768 [2024-07-14 03:57:52.684914] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:33.768 [2024-07-14 03:57:52.684921] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10d9960) on tqpair=0x106e5a0 00:25:33.768 [2024-07-14 03:57:52.684943] nvme_ctrlr.c:4024:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:25:33.768 [2024-07-14 03:57:52.685000] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:33.768 [2024-07-14 03:57:52.685011] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:33.768 [2024-07-14 03:57:52.685018] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x106e5a0) 00:25:33.768 [2024-07-14 03:57:52.685030] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.768 [2024-07-14 03:57:52.685042] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:33.768 [2024-07-14 03:57:52.685049] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:33.768 [2024-07-14 03:57:52.685055] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x106e5a0) 00:25:33.768 [2024-07-14 
03:57:52.685064] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:25:33.768 [2024-07-14 03:57:52.685094] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10d9960, cid 4, qid 0 00:25:33.769 [2024-07-14 03:57:52.685106] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10d9ac0, cid 5, qid 0 00:25:33.769 [2024-07-14 03:57:52.685430] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:33.769 [2024-07-14 03:57:52.685446] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:33.769 [2024-07-14 03:57:52.685453] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:33.769 [2024-07-14 03:57:52.685459] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x106e5a0): datao=0, datal=1024, cccid=4 00:25:33.769 [2024-07-14 03:57:52.685467] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x10d9960) on tqpair(0x106e5a0): expected_datao=0, payload_size=1024 00:25:33.769 [2024-07-14 03:57:52.685477] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:33.769 [2024-07-14 03:57:52.685485] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:33.769 [2024-07-14 03:57:52.685493] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:33.769 [2024-07-14 03:57:52.685502] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:33.769 [2024-07-14 03:57:52.685508] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:33.769 [2024-07-14 03:57:52.685515] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10d9ac0) on tqpair=0x106e5a0 00:25:34.032 [2024-07-14 03:57:52.726037] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:34.032 [2024-07-14 03:57:52.726058] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:34.032 [2024-07-14 03:57:52.726066] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:34.032 [2024-07-14 03:57:52.726073] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10d9960) on tqpair=0x106e5a0 00:25:34.032 [2024-07-14 03:57:52.726093] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:34.032 [2024-07-14 03:57:52.726102] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:34.032 [2024-07-14 03:57:52.726109] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x106e5a0) 00:25:34.032 [2024-07-14 03:57:52.726120] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.032 [2024-07-14 03:57:52.726159] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10d9960, cid 4, qid 0 00:25:34.032 [2024-07-14 03:57:52.726328] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:34.032 [2024-07-14 03:57:52.726340] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:34.032 [2024-07-14 03:57:52.726348] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:34.032 [2024-07-14 03:57:52.726354] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x106e5a0): datao=0, datal=3072, cccid=4 00:25:34.032 [2024-07-14 03:57:52.726362] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x10d9960) on tqpair(0x106e5a0): expected_datao=0, payload_size=3072 
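[editor's note] All of this tracing comes from the first spdk_nvme_identify run recorded above, which targets the discovery subsystem; its report (the "NVMe over Fabrics controller at 10.0.0.2:4420" block just below) is what the test ultimately inspects. The invocation, copied from this run, passes the whole transport ID as a single -r string and turns on the debug log flags with -L all, which is why every PDU and state transition is printed at *DEBUG* level:

    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' \
        -L all
    # a second run later in the log points subnqn at nqn.2016-06.io.spdk:cnode1 to identify the NVM subsystem itself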
00:25:34.032 [2024-07-14 03:57:52.726381] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:34.032 [2024-07-14 03:57:52.726390] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:34.032 [2024-07-14 03:57:52.726438] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:34.032 [2024-07-14 03:57:52.726450] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:34.032 [2024-07-14 03:57:52.726456] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:34.032 [2024-07-14 03:57:52.726463] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10d9960) on tqpair=0x106e5a0 00:25:34.032 [2024-07-14 03:57:52.726479] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:34.032 [2024-07-14 03:57:52.726488] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:34.032 [2024-07-14 03:57:52.726495] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x106e5a0) 00:25:34.032 [2024-07-14 03:57:52.726505] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.032 [2024-07-14 03:57:52.726533] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10d9960, cid 4, qid 0 00:25:34.032 [2024-07-14 03:57:52.726686] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:34.032 [2024-07-14 03:57:52.726699] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:34.032 [2024-07-14 03:57:52.726706] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:34.032 [2024-07-14 03:57:52.726713] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x106e5a0): datao=0, datal=8, cccid=4 00:25:34.032 [2024-07-14 03:57:52.726720] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x10d9960) on tqpair(0x106e5a0): expected_datao=0, payload_size=8 00:25:34.032 [2024-07-14 03:57:52.726731] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:34.032 [2024-07-14 03:57:52.726738] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:34.032 [2024-07-14 03:57:52.767026] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:34.032 [2024-07-14 03:57:52.767046] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:34.032 [2024-07-14 03:57:52.767054] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:34.032 [2024-07-14 03:57:52.767061] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10d9960) on tqpair=0x106e5a0 00:25:34.032 ===================================================== 00:25:34.032 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:25:34.032 ===================================================== 00:25:34.032 Controller Capabilities/Features 00:25:34.032 ================================ 00:25:34.032 Vendor ID: 0000 00:25:34.032 Subsystem Vendor ID: 0000 00:25:34.032 Serial Number: .................... 00:25:34.032 Model Number: ........................................ 
00:25:34.032 Firmware Version: 24.01.1 00:25:34.032 Recommended Arb Burst: 0 00:25:34.032 IEEE OUI Identifier: 00 00 00 00:25:34.032 Multi-path I/O 00:25:34.032 May have multiple subsystem ports: No 00:25:34.032 May have multiple controllers: No 00:25:34.032 Associated with SR-IOV VF: No 00:25:34.032 Max Data Transfer Size: 131072 00:25:34.032 Max Number of Namespaces: 0 00:25:34.032 Max Number of I/O Queues: 1024 00:25:34.032 NVMe Specification Version (VS): 1.3 00:25:34.032 NVMe Specification Version (Identify): 1.3 00:25:34.032 Maximum Queue Entries: 128 00:25:34.032 Contiguous Queues Required: Yes 00:25:34.032 Arbitration Mechanisms Supported 00:25:34.032 Weighted Round Robin: Not Supported 00:25:34.032 Vendor Specific: Not Supported 00:25:34.033 Reset Timeout: 15000 ms 00:25:34.033 Doorbell Stride: 4 bytes 00:25:34.033 NVM Subsystem Reset: Not Supported 00:25:34.033 Command Sets Supported 00:25:34.033 NVM Command Set: Supported 00:25:34.033 Boot Partition: Not Supported 00:25:34.033 Memory Page Size Minimum: 4096 bytes 00:25:34.033 Memory Page Size Maximum: 4096 bytes 00:25:34.033 Persistent Memory Region: Not Supported 00:25:34.033 Optional Asynchronous Events Supported 00:25:34.033 Namespace Attribute Notices: Not Supported 00:25:34.033 Firmware Activation Notices: Not Supported 00:25:34.033 ANA Change Notices: Not Supported 00:25:34.033 PLE Aggregate Log Change Notices: Not Supported 00:25:34.033 LBA Status Info Alert Notices: Not Supported 00:25:34.033 EGE Aggregate Log Change Notices: Not Supported 00:25:34.033 Normal NVM Subsystem Shutdown event: Not Supported 00:25:34.033 Zone Descriptor Change Notices: Not Supported 00:25:34.033 Discovery Log Change Notices: Supported 00:25:34.033 Controller Attributes 00:25:34.033 128-bit Host Identifier: Not Supported 00:25:34.033 Non-Operational Permissive Mode: Not Supported 00:25:34.033 NVM Sets: Not Supported 00:25:34.033 Read Recovery Levels: Not Supported 00:25:34.033 Endurance Groups: Not Supported 00:25:34.033 Predictable Latency Mode: Not Supported 00:25:34.033 Traffic Based Keep ALive: Not Supported 00:25:34.033 Namespace Granularity: Not Supported 00:25:34.033 SQ Associations: Not Supported 00:25:34.033 UUID List: Not Supported 00:25:34.033 Multi-Domain Subsystem: Not Supported 00:25:34.033 Fixed Capacity Management: Not Supported 00:25:34.033 Variable Capacity Management: Not Supported 00:25:34.033 Delete Endurance Group: Not Supported 00:25:34.033 Delete NVM Set: Not Supported 00:25:34.033 Extended LBA Formats Supported: Not Supported 00:25:34.033 Flexible Data Placement Supported: Not Supported 00:25:34.033 00:25:34.033 Controller Memory Buffer Support 00:25:34.033 ================================ 00:25:34.033 Supported: No 00:25:34.033 00:25:34.033 Persistent Memory Region Support 00:25:34.033 ================================ 00:25:34.033 Supported: No 00:25:34.033 00:25:34.033 Admin Command Set Attributes 00:25:34.033 ============================ 00:25:34.033 Security Send/Receive: Not Supported 00:25:34.033 Format NVM: Not Supported 00:25:34.033 Firmware Activate/Download: Not Supported 00:25:34.033 Namespace Management: Not Supported 00:25:34.033 Device Self-Test: Not Supported 00:25:34.033 Directives: Not Supported 00:25:34.033 NVMe-MI: Not Supported 00:25:34.033 Virtualization Management: Not Supported 00:25:34.033 Doorbell Buffer Config: Not Supported 00:25:34.033 Get LBA Status Capability: Not Supported 00:25:34.033 Command & Feature Lockdown Capability: Not Supported 00:25:34.033 Abort Command Limit: 1 00:25:34.033 
Async Event Request Limit: 4 00:25:34.033 Number of Firmware Slots: N/A 00:25:34.033 Firmware Slot 1 Read-Only: N/A 00:25:34.033 Firmware Activation Without Reset: N/A 00:25:34.033 Multiple Update Detection Support: N/A 00:25:34.033 Firmware Update Granularity: No Information Provided 00:25:34.033 Per-Namespace SMART Log: No 00:25:34.033 Asymmetric Namespace Access Log Page: Not Supported 00:25:34.033 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:25:34.033 Command Effects Log Page: Not Supported 00:25:34.033 Get Log Page Extended Data: Supported 00:25:34.033 Telemetry Log Pages: Not Supported 00:25:34.033 Persistent Event Log Pages: Not Supported 00:25:34.033 Supported Log Pages Log Page: May Support 00:25:34.033 Commands Supported & Effects Log Page: Not Supported 00:25:34.033 Feature Identifiers & Effects Log Page:May Support 00:25:34.033 NVMe-MI Commands & Effects Log Page: May Support 00:25:34.033 Data Area 4 for Telemetry Log: Not Supported 00:25:34.033 Error Log Page Entries Supported: 128 00:25:34.033 Keep Alive: Not Supported 00:25:34.033 00:25:34.033 NVM Command Set Attributes 00:25:34.033 ========================== 00:25:34.033 Submission Queue Entry Size 00:25:34.033 Max: 1 00:25:34.033 Min: 1 00:25:34.033 Completion Queue Entry Size 00:25:34.033 Max: 1 00:25:34.033 Min: 1 00:25:34.033 Number of Namespaces: 0 00:25:34.033 Compare Command: Not Supported 00:25:34.033 Write Uncorrectable Command: Not Supported 00:25:34.033 Dataset Management Command: Not Supported 00:25:34.033 Write Zeroes Command: Not Supported 00:25:34.033 Set Features Save Field: Not Supported 00:25:34.033 Reservations: Not Supported 00:25:34.033 Timestamp: Not Supported 00:25:34.033 Copy: Not Supported 00:25:34.033 Volatile Write Cache: Not Present 00:25:34.033 Atomic Write Unit (Normal): 1 00:25:34.033 Atomic Write Unit (PFail): 1 00:25:34.033 Atomic Compare & Write Unit: 1 00:25:34.033 Fused Compare & Write: Supported 00:25:34.033 Scatter-Gather List 00:25:34.033 SGL Command Set: Supported 00:25:34.033 SGL Keyed: Supported 00:25:34.033 SGL Bit Bucket Descriptor: Not Supported 00:25:34.033 SGL Metadata Pointer: Not Supported 00:25:34.033 Oversized SGL: Not Supported 00:25:34.033 SGL Metadata Address: Not Supported 00:25:34.033 SGL Offset: Supported 00:25:34.033 Transport SGL Data Block: Not Supported 00:25:34.033 Replay Protected Memory Block: Not Supported 00:25:34.033 00:25:34.033 Firmware Slot Information 00:25:34.033 ========================= 00:25:34.033 Active slot: 0 00:25:34.033 00:25:34.033 00:25:34.033 Error Log 00:25:34.033 ========= 00:25:34.033 00:25:34.033 Active Namespaces 00:25:34.033 ================= 00:25:34.033 Discovery Log Page 00:25:34.033 ================== 00:25:34.033 Generation Counter: 2 00:25:34.033 Number of Records: 2 00:25:34.033 Record Format: 0 00:25:34.033 00:25:34.033 Discovery Log Entry 0 00:25:34.033 ---------------------- 00:25:34.033 Transport Type: 3 (TCP) 00:25:34.033 Address Family: 1 (IPv4) 00:25:34.033 Subsystem Type: 3 (Current Discovery Subsystem) 00:25:34.033 Entry Flags: 00:25:34.033 Duplicate Returned Information: 1 00:25:34.033 Explicit Persistent Connection Support for Discovery: 1 00:25:34.033 Transport Requirements: 00:25:34.033 Secure Channel: Not Required 00:25:34.033 Port ID: 0 (0x0000) 00:25:34.034 Controller ID: 65535 (0xffff) 00:25:34.034 Admin Max SQ Size: 128 00:25:34.034 Transport Service Identifier: 4420 00:25:34.034 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:25:34.034 Transport Address: 10.0.0.2 00:25:34.034 
Discovery Log Entry 1 00:25:34.034 ---------------------- 00:25:34.034 Transport Type: 3 (TCP) 00:25:34.034 Address Family: 1 (IPv4) 00:25:34.034 Subsystem Type: 2 (NVM Subsystem) 00:25:34.034 Entry Flags: 00:25:34.034 Duplicate Returned Information: 0 00:25:34.034 Explicit Persistent Connection Support for Discovery: 0 00:25:34.034 Transport Requirements: 00:25:34.034 Secure Channel: Not Required 00:25:34.034 Port ID: 0 (0x0000) 00:25:34.034 Controller ID: 65535 (0xffff) 00:25:34.034 Admin Max SQ Size: 128 00:25:34.034 Transport Service Identifier: 4420 00:25:34.034 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:25:34.034 Transport Address: 10.0.0.2 [2024-07-14 03:57:52.767183] nvme_ctrlr.c:4220:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:25:34.034 [2024-07-14 03:57:52.767211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.034 [2024-07-14 03:57:52.767224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.034 [2024-07-14 03:57:52.767234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.034 [2024-07-14 03:57:52.767244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.034 [2024-07-14 03:57:52.767258] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:34.034 [2024-07-14 03:57:52.767266] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:34.034 [2024-07-14 03:57:52.767273] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x106e5a0) 00:25:34.034 [2024-07-14 03:57:52.767284] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.034 [2024-07-14 03:57:52.767323] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10d9800, cid 3, qid 0 00:25:34.034 [2024-07-14 03:57:52.767540] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:34.034 [2024-07-14 03:57:52.767556] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:34.034 [2024-07-14 03:57:52.767562] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:34.034 [2024-07-14 03:57:52.767573] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10d9800) on tqpair=0x106e5a0 00:25:34.034 [2024-07-14 03:57:52.767588] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:34.034 [2024-07-14 03:57:52.767596] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:34.034 [2024-07-14 03:57:52.767603] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x106e5a0) 00:25:34.034 [2024-07-14 03:57:52.767613] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.034 [2024-07-14 03:57:52.767640] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10d9800, cid 3, qid 0 00:25:34.034 [2024-07-14 03:57:52.767824] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:34.034 [2024-07-14 03:57:52.767836] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:34.034 [2024-07-14 03:57:52.767843] 
nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:34.034 [2024-07-14 03:57:52.767850] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10d9800) on tqpair=0x106e5a0 00:25:34.034 [2024-07-14 03:57:52.767860] nvme_ctrlr.c:1070:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:25:34.034 [2024-07-14 03:57:52.771879] nvme_ctrlr.c:1073:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:25:34.034 [2024-07-14 03:57:52.771903] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:34.034 [2024-07-14 03:57:52.771927] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:34.034 [2024-07-14 03:57:52.771934] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x106e5a0) 00:25:34.034 [2024-07-14 03:57:52.771945] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.034 [2024-07-14 03:57:52.771968] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10d9800, cid 3, qid 0 00:25:34.034 [2024-07-14 03:57:52.772155] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:34.034 [2024-07-14 03:57:52.772170] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:34.034 [2024-07-14 03:57:52.772177] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:34.034 [2024-07-14 03:57:52.772184] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10d9800) on tqpair=0x106e5a0 00:25:34.034 [2024-07-14 03:57:52.772199] nvme_ctrlr.c:1192:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 0 milliseconds 00:25:34.034 00:25:34.034 03:57:52 -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:25:34.034 [2024-07-14 03:57:52.802120] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:25:34.034 [2024-07-14 03:57:52.802185] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2466286 ] 00:25:34.034 EAL: No free 2048 kB hugepages reported on node 1 00:25:34.034 [2024-07-14 03:57:52.833775] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:25:34.034 [2024-07-14 03:57:52.833817] nvme_tcp.c:2244:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:25:34.034 [2024-07-14 03:57:52.833827] nvme_tcp.c:2248:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:25:34.034 [2024-07-14 03:57:52.833841] nvme_tcp.c:2266:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:25:34.034 [2024-07-14 03:57:52.833858] sock.c: 334:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:25:34.034 [2024-07-14 03:57:52.837926] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:25:34.034 [2024-07-14 03:57:52.837970] nvme_tcp.c:1487:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1f065a0 0 00:25:34.034 [2024-07-14 03:57:52.845891] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:25:34.034 [2024-07-14 03:57:52.845912] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:25:34.034 [2024-07-14 03:57:52.845921] nvme_tcp.c:1533:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:25:34.034 [2024-07-14 03:57:52.845927] nvme_tcp.c:1534:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:25:34.034 [2024-07-14 03:57:52.845964] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:34.034 [2024-07-14 03:57:52.845975] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:34.034 [2024-07-14 03:57:52.845982] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f065a0) 00:25:34.034 [2024-07-14 03:57:52.845996] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:25:34.034 [2024-07-14 03:57:52.846021] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f713e0, cid 0, qid 0 00:25:34.034 [2024-07-14 03:57:52.853887] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:34.034 [2024-07-14 03:57:52.853906] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:34.034 [2024-07-14 03:57:52.853913] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:34.034 [2024-07-14 03:57:52.853920] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1f713e0) on tqpair=0x1f065a0 00:25:34.034 [2024-07-14 03:57:52.853934] nvme_fabric.c: 620:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:25:34.034 [2024-07-14 03:57:52.853944] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:25:34.034 [2024-07-14 03:57:52.853953] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:25:34.034 [2024-07-14 03:57:52.853969] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:34.034 [2024-07-14 03:57:52.853978] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:34.034 [2024-07-14 
03:57:52.853984] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f065a0) 00:25:34.034 [2024-07-14 03:57:52.853995] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.034 [2024-07-14 03:57:52.854018] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f713e0, cid 0, qid 0 00:25:34.034 [2024-07-14 03:57:52.854204] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:34.034 [2024-07-14 03:57:52.854217] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:34.034 [2024-07-14 03:57:52.854224] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:34.034 [2024-07-14 03:57:52.854231] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1f713e0) on tqpair=0x1f065a0 00:25:34.035 [2024-07-14 03:57:52.854240] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:25:34.035 [2024-07-14 03:57:52.854253] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:25:34.035 [2024-07-14 03:57:52.854266] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:34.035 [2024-07-14 03:57:52.854273] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:34.035 [2024-07-14 03:57:52.854280] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f065a0) 00:25:34.035 [2024-07-14 03:57:52.854290] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.035 [2024-07-14 03:57:52.854311] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f713e0, cid 0, qid 0 00:25:34.035 [2024-07-14 03:57:52.854480] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:34.035 [2024-07-14 03:57:52.854493] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:34.035 [2024-07-14 03:57:52.854504] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:34.035 [2024-07-14 03:57:52.854512] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1f713e0) on tqpair=0x1f065a0 00:25:34.035 [2024-07-14 03:57:52.854522] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:25:34.035 [2024-07-14 03:57:52.854537] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:25:34.035 [2024-07-14 03:57:52.854549] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:34.035 [2024-07-14 03:57:52.854557] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:34.035 [2024-07-14 03:57:52.854563] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f065a0) 00:25:34.035 [2024-07-14 03:57:52.854574] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.035 [2024-07-14 03:57:52.854609] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f713e0, cid 0, qid 0 00:25:34.035 [2024-07-14 03:57:52.854823] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:34.035 [2024-07-14 03:57:52.854837] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
00:25:34.035 [2024-07-14 03:57:52.854844] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:34.035 [2024-07-14 03:57:52.854851] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1f713e0) on tqpair=0x1f065a0 00:25:34.035 [2024-07-14 03:57:52.854873] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:25:34.035 [2024-07-14 03:57:52.854900] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:34.035 [2024-07-14 03:57:52.854912] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:34.035 [2024-07-14 03:57:52.854919] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f065a0) 00:25:34.035 [2024-07-14 03:57:52.854929] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.035 [2024-07-14 03:57:52.854951] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f713e0, cid 0, qid 0 00:25:34.035 [2024-07-14 03:57:52.855197] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:34.035 [2024-07-14 03:57:52.855211] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:34.035 [2024-07-14 03:57:52.855218] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:34.035 [2024-07-14 03:57:52.855225] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1f713e0) on tqpair=0x1f065a0 00:25:34.035 [2024-07-14 03:57:52.855234] nvme_ctrlr.c:3737:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:25:34.035 [2024-07-14 03:57:52.855242] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:25:34.035 [2024-07-14 03:57:52.855255] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:25:34.035 [2024-07-14 03:57:52.855365] nvme_ctrlr.c:3930:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:25:34.035 [2024-07-14 03:57:52.855373] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:25:34.035 [2024-07-14 03:57:52.855385] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:34.035 [2024-07-14 03:57:52.855392] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:34.035 [2024-07-14 03:57:52.855398] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f065a0) 00:25:34.035 [2024-07-14 03:57:52.855408] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.035 [2024-07-14 03:57:52.855440] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f713e0, cid 0, qid 0 00:25:34.035 [2024-07-14 03:57:52.855623] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:34.035 [2024-07-14 03:57:52.855641] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:34.035 [2024-07-14 03:57:52.855648] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:34.035 [2024-07-14 03:57:52.855655] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1f713e0) on 
tqpair=0x1f065a0 00:25:34.035 [2024-07-14 03:57:52.855664] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:25:34.035 [2024-07-14 03:57:52.855682] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:34.035 [2024-07-14 03:57:52.855690] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:34.035 [2024-07-14 03:57:52.855697] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f065a0) 00:25:34.035 [2024-07-14 03:57:52.855707] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.035 [2024-07-14 03:57:52.855728] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f713e0, cid 0, qid 0 00:25:34.035 [2024-07-14 03:57:52.855884] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:34.035 [2024-07-14 03:57:52.855898] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:34.035 [2024-07-14 03:57:52.855906] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:34.035 [2024-07-14 03:57:52.855913] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1f713e0) on tqpair=0x1f065a0 00:25:34.035 [2024-07-14 03:57:52.855923] nvme_ctrlr.c:3772:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:25:34.035 [2024-07-14 03:57:52.855932] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:25:34.035 [2024-07-14 03:57:52.855945] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:25:34.035 [2024-07-14 03:57:52.855959] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:25:34.035 [2024-07-14 03:57:52.855972] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:34.035 [2024-07-14 03:57:52.855980] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:34.035 [2024-07-14 03:57:52.855986] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f065a0) 00:25:34.035 [2024-07-14 03:57:52.855997] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.035 [2024-07-14 03:57:52.856035] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f713e0, cid 0, qid 0 00:25:34.035 [2024-07-14 03:57:52.856399] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:34.035 [2024-07-14 03:57:52.856415] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:34.035 [2024-07-14 03:57:52.856422] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:34.035 [2024-07-14 03:57:52.856428] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1f065a0): datao=0, datal=4096, cccid=0 00:25:34.035 [2024-07-14 03:57:52.856451] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1f713e0) on tqpair(0x1f065a0): expected_datao=0, payload_size=4096 00:25:34.035 [2024-07-14 03:57:52.856463] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:34.035 [2024-07-14 03:57:52.856471] 
nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:34.035 [2024-07-14 03:57:52.856569] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:34.035 [2024-07-14 03:57:52.856580] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:34.035 [2024-07-14 03:57:52.856588] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:34.035 [2024-07-14 03:57:52.856595] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1f713e0) on tqpair=0x1f065a0 00:25:34.035 [2024-07-14 03:57:52.856611] nvme_ctrlr.c:1972:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:25:34.036 [2024-07-14 03:57:52.856622] nvme_ctrlr.c:1976:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:25:34.036 [2024-07-14 03:57:52.856629] nvme_ctrlr.c:1979:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:25:34.036 [2024-07-14 03:57:52.856637] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:25:34.036 [2024-07-14 03:57:52.856645] nvme_ctrlr.c:2018:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:25:34.036 [2024-07-14 03:57:52.856655] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:25:34.036 [2024-07-14 03:57:52.856674] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:25:34.036 [2024-07-14 03:57:52.856687] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:34.036 [2024-07-14 03:57:52.856695] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:34.036 [2024-07-14 03:57:52.856701] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f065a0) 00:25:34.036 [2024-07-14 03:57:52.856712] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:25:34.036 [2024-07-14 03:57:52.856734] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f713e0, cid 0, qid 0 00:25:34.036 [2024-07-14 03:57:52.857014] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:34.036 [2024-07-14 03:57:52.857031] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:34.036 [2024-07-14 03:57:52.857038] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:34.036 [2024-07-14 03:57:52.857045] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1f713e0) on tqpair=0x1f065a0 00:25:34.036 [2024-07-14 03:57:52.857057] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:34.036 [2024-07-14 03:57:52.857065] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:34.036 [2024-07-14 03:57:52.857071] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f065a0) 00:25:34.036 [2024-07-14 03:57:52.857081] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:34.036 [2024-07-14 03:57:52.857091] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:34.036 [2024-07-14 03:57:52.857098] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:34.036 [2024-07-14 03:57:52.857104] 
nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1f065a0) 00:25:34.036 [2024-07-14 03:57:52.857129] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:34.036 [2024-07-14 03:57:52.857138] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:34.036 [2024-07-14 03:57:52.857145] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:34.036 [2024-07-14 03:57:52.857151] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1f065a0) 00:25:34.036 [2024-07-14 03:57:52.857159] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:34.036 [2024-07-14 03:57:52.857168] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:34.036 [2024-07-14 03:57:52.857174] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:34.036 [2024-07-14 03:57:52.857180] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f065a0) 00:25:34.036 [2024-07-14 03:57:52.857189] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:34.036 [2024-07-14 03:57:52.857212] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:25:34.036 [2024-07-14 03:57:52.857233] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:25:34.036 [2024-07-14 03:57:52.857246] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:34.036 [2024-07-14 03:57:52.857252] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:34.036 [2024-07-14 03:57:52.857259] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1f065a0) 00:25:34.036 [2024-07-14 03:57:52.857268] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.036 [2024-07-14 03:57:52.857290] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f713e0, cid 0, qid 0 00:25:34.036 [2024-07-14 03:57:52.857316] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f71540, cid 1, qid 0 00:25:34.036 [2024-07-14 03:57:52.857324] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f716a0, cid 2, qid 0 00:25:34.036 [2024-07-14 03:57:52.857332] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f71800, cid 3, qid 0 00:25:34.036 [2024-07-14 03:57:52.857339] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f71960, cid 4, qid 0 00:25:34.036 [2024-07-14 03:57:52.857597] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:34.036 [2024-07-14 03:57:52.857610] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:34.036 [2024-07-14 03:57:52.857617] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:34.036 [2024-07-14 03:57:52.857623] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1f71960) on tqpair=0x1f065a0 00:25:34.036 [2024-07-14 03:57:52.857633] nvme_ctrlr.c:2890:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:25:34.036 
[2024-07-14 03:57:52.857642] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:25:34.036 [2024-07-14 03:57:52.857656] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:25:34.036 [2024-07-14 03:57:52.857686] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:25:34.036 [2024-07-14 03:57:52.857698] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:34.036 [2024-07-14 03:57:52.857705] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:34.036 [2024-07-14 03:57:52.857712] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1f065a0) 00:25:34.036 [2024-07-14 03:57:52.857722] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:25:34.036 [2024-07-14 03:57:52.857757] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f71960, cid 4, qid 0 00:25:34.036 [2024-07-14 03:57:52.857989] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:34.036 [2024-07-14 03:57:52.858006] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:34.036 [2024-07-14 03:57:52.858012] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:34.036 [2024-07-14 03:57:52.858019] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1f71960) on tqpair=0x1f065a0 00:25:34.036 [2024-07-14 03:57:52.858085] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:25:34.036 [2024-07-14 03:57:52.858118] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:25:34.036 [2024-07-14 03:57:52.858132] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:34.036 [2024-07-14 03:57:52.858140] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:34.036 [2024-07-14 03:57:52.858146] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1f065a0) 00:25:34.036 [2024-07-14 03:57:52.858160] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.036 [2024-07-14 03:57:52.858196] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f71960, cid 4, qid 0 00:25:34.036 [2024-07-14 03:57:52.858432] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:34.036 [2024-07-14 03:57:52.858445] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:34.036 [2024-07-14 03:57:52.858454] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:34.036 [2024-07-14 03:57:52.858461] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1f065a0): datao=0, datal=4096, cccid=4 00:25:34.036 [2024-07-14 03:57:52.858470] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1f71960) on tqpair(0x1f065a0): expected_datao=0, payload_size=4096 00:25:34.036 [2024-07-14 03:57:52.858503] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:34.036 [2024-07-14 03:57:52.858512] 
nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:34.036 [2024-07-14 03:57:52.858611] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:34.036 [2024-07-14 03:57:52.858622] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:34.036 [2024-07-14 03:57:52.858629] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:34.036 [2024-07-14 03:57:52.858636] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1f71960) on tqpair=0x1f065a0 00:25:34.036 [2024-07-14 03:57:52.858658] nvme_ctrlr.c:4556:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:25:34.036 [2024-07-14 03:57:52.858681] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:25:34.036 [2024-07-14 03:57:52.858699] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:25:34.036 [2024-07-14 03:57:52.858712] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:34.036 [2024-07-14 03:57:52.858720] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:34.036 [2024-07-14 03:57:52.858726] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1f065a0) 00:25:34.036 [2024-07-14 03:57:52.858737] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.037 [2024-07-14 03:57:52.858758] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f71960, cid 4, qid 0 00:25:34.037 [2024-07-14 03:57:52.858955] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:34.037 [2024-07-14 03:57:52.858971] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:34.037 [2024-07-14 03:57:52.858978] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:34.037 [2024-07-14 03:57:52.858985] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1f065a0): datao=0, datal=4096, cccid=4 00:25:34.037 [2024-07-14 03:57:52.858993] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1f71960) on tqpair(0x1f065a0): expected_datao=0, payload_size=4096 00:25:34.037 [2024-07-14 03:57:52.859026] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:34.037 [2024-07-14 03:57:52.859035] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:34.037 [2024-07-14 03:57:52.859174] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:34.037 [2024-07-14 03:57:52.859189] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:34.037 [2024-07-14 03:57:52.859196] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:34.037 [2024-07-14 03:57:52.859203] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1f71960) on tqpair=0x1f065a0 00:25:34.037 [2024-07-14 03:57:52.859238] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:25:34.037 [2024-07-14 03:57:52.859257] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:25:34.037 [2024-07-14 03:57:52.859274] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:34.037 [2024-07-14 
03:57:52.859283] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:34.037 [2024-07-14 03:57:52.859290] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1f065a0) 00:25:34.037 [2024-07-14 03:57:52.859300] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.037 [2024-07-14 03:57:52.859340] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f71960, cid 4, qid 0 00:25:34.037 [2024-07-14 03:57:52.859573] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:34.037 [2024-07-14 03:57:52.859589] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:34.037 [2024-07-14 03:57:52.859596] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:34.037 [2024-07-14 03:57:52.859603] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1f065a0): datao=0, datal=4096, cccid=4 00:25:34.037 [2024-07-14 03:57:52.859610] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1f71960) on tqpair(0x1f065a0): expected_datao=0, payload_size=4096 00:25:34.037 [2024-07-14 03:57:52.859645] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:34.037 [2024-07-14 03:57:52.859654] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:34.037 [2024-07-14 03:57:52.859758] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:34.037 [2024-07-14 03:57:52.859773] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:34.037 [2024-07-14 03:57:52.859780] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:34.037 [2024-07-14 03:57:52.859787] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1f71960) on tqpair=0x1f065a0 00:25:34.037 [2024-07-14 03:57:52.859802] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:25:34.037 [2024-07-14 03:57:52.859817] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:25:34.037 [2024-07-14 03:57:52.859832] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:25:34.037 [2024-07-14 03:57:52.859843] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:25:34.037 [2024-07-14 03:57:52.859852] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:25:34.037 [2024-07-14 03:57:52.859874] nvme_ctrlr.c:2978:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:25:34.037 [2024-07-14 03:57:52.859883] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:25:34.037 [2024-07-14 03:57:52.859892] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:25:34.037 [2024-07-14 03:57:52.859910] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:34.037 [2024-07-14 03:57:52.859919] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:34.037 [2024-07-14 03:57:52.859926] nvme_tcp.c: 
902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1f065a0) 00:25:34.037 [2024-07-14 03:57:52.859936] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.037 [2024-07-14 03:57:52.859963] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:34.037 [2024-07-14 03:57:52.859970] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:34.037 [2024-07-14 03:57:52.859976] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1f065a0) 00:25:34.037 [2024-07-14 03:57:52.859985] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:25:34.037 [2024-07-14 03:57:52.860009] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f71960, cid 4, qid 0 00:25:34.037 [2024-07-14 03:57:52.860039] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f71ac0, cid 5, qid 0 00:25:34.037 [2024-07-14 03:57:52.860222] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:34.037 [2024-07-14 03:57:52.860239] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:34.037 [2024-07-14 03:57:52.860247] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:34.037 [2024-07-14 03:57:52.860253] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1f71960) on tqpair=0x1f065a0 00:25:34.037 [2024-07-14 03:57:52.860265] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:34.037 [2024-07-14 03:57:52.860274] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:34.037 [2024-07-14 03:57:52.860281] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:34.037 [2024-07-14 03:57:52.860288] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1f71ac0) on tqpair=0x1f065a0 00:25:34.037 [2024-07-14 03:57:52.860305] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:34.037 [2024-07-14 03:57:52.860328] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:34.037 [2024-07-14 03:57:52.860336] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1f065a0) 00:25:34.037 [2024-07-14 03:57:52.860347] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.037 [2024-07-14 03:57:52.860368] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f71ac0, cid 5, qid 0 00:25:34.037 [2024-07-14 03:57:52.860555] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:34.037 [2024-07-14 03:57:52.860568] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:34.037 [2024-07-14 03:57:52.860576] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:34.037 [2024-07-14 03:57:52.860584] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1f71ac0) on tqpair=0x1f065a0 00:25:34.037 [2024-07-14 03:57:52.860601] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:34.037 [2024-07-14 03:57:52.860611] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:34.037 [2024-07-14 03:57:52.860617] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1f065a0) 00:25:34.037 [2024-07-14 03:57:52.860628] nvme_qpair.c: 
213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.037 [2024-07-14 03:57:52.860648] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f71ac0, cid 5, qid 0 00:25:34.037 [2024-07-14 03:57:52.860791] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:34.037 [2024-07-14 03:57:52.860807] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:34.037 [2024-07-14 03:57:52.860814] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:34.037 [2024-07-14 03:57:52.860821] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1f71ac0) on tqpair=0x1f065a0 00:25:34.037 [2024-07-14 03:57:52.860838] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:34.037 [2024-07-14 03:57:52.860847] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:34.037 [2024-07-14 03:57:52.860854] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1f065a0) 00:25:34.037 [2024-07-14 03:57:52.860864] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.038 [2024-07-14 03:57:52.864904] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f71ac0, cid 5, qid 0 00:25:34.038 [2024-07-14 03:57:52.865076] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:34.038 [2024-07-14 03:57:52.865092] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:34.038 [2024-07-14 03:57:52.865099] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:34.038 [2024-07-14 03:57:52.865106] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1f71ac0) on tqpair=0x1f065a0 00:25:34.038 [2024-07-14 03:57:52.865130] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:34.038 [2024-07-14 03:57:52.865140] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:34.038 [2024-07-14 03:57:52.865147] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1f065a0) 00:25:34.038 [2024-07-14 03:57:52.865166] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.038 [2024-07-14 03:57:52.865178] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:34.038 [2024-07-14 03:57:52.865186] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:34.038 [2024-07-14 03:57:52.865193] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1f065a0) 00:25:34.038 [2024-07-14 03:57:52.865228] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.038 [2024-07-14 03:57:52.865241] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:34.038 [2024-07-14 03:57:52.865248] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:34.038 [2024-07-14 03:57:52.865254] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x1f065a0) 00:25:34.038 [2024-07-14 03:57:52.865266] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:25:34.038 [2024-07-14 03:57:52.865292] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:34.038 [2024-07-14 03:57:52.865299] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:34.038 [2024-07-14 03:57:52.865305] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1f065a0) 00:25:34.038 [2024-07-14 03:57:52.865314] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.038 [2024-07-14 03:57:52.865338] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f71ac0, cid 5, qid 0 00:25:34.038 [2024-07-14 03:57:52.865364] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f71960, cid 4, qid 0 00:25:34.038 [2024-07-14 03:57:52.865372] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f71c20, cid 6, qid 0 00:25:34.038 [2024-07-14 03:57:52.865379] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f71d80, cid 7, qid 0 00:25:34.038 [2024-07-14 03:57:52.865678] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:34.038 [2024-07-14 03:57:52.865694] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:34.038 [2024-07-14 03:57:52.865701] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:34.038 [2024-07-14 03:57:52.865708] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1f065a0): datao=0, datal=8192, cccid=5 00:25:34.038 [2024-07-14 03:57:52.865730] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1f71ac0) on tqpair(0x1f065a0): expected_datao=0, payload_size=8192 00:25:34.038 [2024-07-14 03:57:52.865852] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:34.038 [2024-07-14 03:57:52.865878] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:34.038 [2024-07-14 03:57:52.865888] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:34.038 [2024-07-14 03:57:52.865897] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:34.038 [2024-07-14 03:57:52.865904] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:34.038 [2024-07-14 03:57:52.865910] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1f065a0): datao=0, datal=512, cccid=4 00:25:34.038 [2024-07-14 03:57:52.865918] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1f71960) on tqpair(0x1f065a0): expected_datao=0, payload_size=512 00:25:34.038 [2024-07-14 03:57:52.865928] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:34.038 [2024-07-14 03:57:52.865935] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:34.038 [2024-07-14 03:57:52.865947] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:34.038 [2024-07-14 03:57:52.865957] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:34.038 [2024-07-14 03:57:52.865964] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:34.038 [2024-07-14 03:57:52.865970] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1f065a0): datao=0, datal=512, cccid=6 00:25:34.038 [2024-07-14 03:57:52.865978] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1f71c20) on tqpair(0x1f065a0): expected_datao=0, payload_size=512 00:25:34.038 [2024-07-14 03:57:52.865988] 
nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:34.038 [2024-07-14 03:57:52.865995] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:34.038 [2024-07-14 03:57:52.866004] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:34.038 [2024-07-14 03:57:52.866012] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:34.038 [2024-07-14 03:57:52.866019] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:34.038 [2024-07-14 03:57:52.866025] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1f065a0): datao=0, datal=4096, cccid=7 00:25:34.038 [2024-07-14 03:57:52.866032] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1f71d80) on tqpair(0x1f065a0): expected_datao=0, payload_size=4096 00:25:34.038 [2024-07-14 03:57:52.866043] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:34.038 [2024-07-14 03:57:52.866051] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:34.038 [2024-07-14 03:57:52.866062] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:34.038 [2024-07-14 03:57:52.866072] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:34.038 [2024-07-14 03:57:52.866079] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:34.038 [2024-07-14 03:57:52.866086] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1f71ac0) on tqpair=0x1f065a0 00:25:34.038 [2024-07-14 03:57:52.866105] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:34.038 [2024-07-14 03:57:52.866117] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:34.038 [2024-07-14 03:57:52.866124] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:34.038 [2024-07-14 03:57:52.866130] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1f71960) on tqpair=0x1f065a0 00:25:34.038 [2024-07-14 03:57:52.866160] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:34.038 [2024-07-14 03:57:52.866171] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:34.038 [2024-07-14 03:57:52.866177] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:34.038 [2024-07-14 03:57:52.866184] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1f71c20) on tqpair=0x1f065a0 00:25:34.039 [2024-07-14 03:57:52.866195] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:34.039 [2024-07-14 03:57:52.866205] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:34.039 [2024-07-14 03:57:52.866226] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:34.039 [2024-07-14 03:57:52.866233] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1f71d80) on tqpair=0x1f065a0 00:25:34.039 ===================================================== 00:25:34.039 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:34.039 ===================================================== 00:25:34.039 Controller Capabilities/Features 00:25:34.039 ================================ 00:25:34.039 Vendor ID: 8086 00:25:34.039 Subsystem Vendor ID: 8086 00:25:34.039 Serial Number: SPDK00000000000001 00:25:34.039 Model Number: SPDK bdev Controller 00:25:34.039 Firmware Version: 24.01.1 00:25:34.039 Recommended Arb Burst: 6 00:25:34.039 IEEE OUI Identifier: e4 d2 5c 00:25:34.039 Multi-path I/O 00:25:34.039 May have multiple subsystem 
ports: Yes 00:25:34.039 May have multiple controllers: Yes 00:25:34.039 Associated with SR-IOV VF: No 00:25:34.039 Max Data Transfer Size: 131072 00:25:34.039 Max Number of Namespaces: 32 00:25:34.039 Max Number of I/O Queues: 127 00:25:34.039 NVMe Specification Version (VS): 1.3 00:25:34.039 NVMe Specification Version (Identify): 1.3 00:25:34.039 Maximum Queue Entries: 128 00:25:34.039 Contiguous Queues Required: Yes 00:25:34.039 Arbitration Mechanisms Supported 00:25:34.039 Weighted Round Robin: Not Supported 00:25:34.039 Vendor Specific: Not Supported 00:25:34.039 Reset Timeout: 15000 ms 00:25:34.039 Doorbell Stride: 4 bytes 00:25:34.039 NVM Subsystem Reset: Not Supported 00:25:34.039 Command Sets Supported 00:25:34.039 NVM Command Set: Supported 00:25:34.039 Boot Partition: Not Supported 00:25:34.039 Memory Page Size Minimum: 4096 bytes 00:25:34.039 Memory Page Size Maximum: 4096 bytes 00:25:34.039 Persistent Memory Region: Not Supported 00:25:34.039 Optional Asynchronous Events Supported 00:25:34.039 Namespace Attribute Notices: Supported 00:25:34.039 Firmware Activation Notices: Not Supported 00:25:34.039 ANA Change Notices: Not Supported 00:25:34.039 PLE Aggregate Log Change Notices: Not Supported 00:25:34.039 LBA Status Info Alert Notices: Not Supported 00:25:34.039 EGE Aggregate Log Change Notices: Not Supported 00:25:34.039 Normal NVM Subsystem Shutdown event: Not Supported 00:25:34.039 Zone Descriptor Change Notices: Not Supported 00:25:34.039 Discovery Log Change Notices: Not Supported 00:25:34.039 Controller Attributes 00:25:34.039 128-bit Host Identifier: Supported 00:25:34.039 Non-Operational Permissive Mode: Not Supported 00:25:34.039 NVM Sets: Not Supported 00:25:34.039 Read Recovery Levels: Not Supported 00:25:34.039 Endurance Groups: Not Supported 00:25:34.039 Predictable Latency Mode: Not Supported 00:25:34.039 Traffic Based Keep ALive: Not Supported 00:25:34.039 Namespace Granularity: Not Supported 00:25:34.039 SQ Associations: Not Supported 00:25:34.039 UUID List: Not Supported 00:25:34.039 Multi-Domain Subsystem: Not Supported 00:25:34.039 Fixed Capacity Management: Not Supported 00:25:34.039 Variable Capacity Management: Not Supported 00:25:34.039 Delete Endurance Group: Not Supported 00:25:34.039 Delete NVM Set: Not Supported 00:25:34.039 Extended LBA Formats Supported: Not Supported 00:25:34.039 Flexible Data Placement Supported: Not Supported 00:25:34.039 00:25:34.039 Controller Memory Buffer Support 00:25:34.039 ================================ 00:25:34.039 Supported: No 00:25:34.039 00:25:34.039 Persistent Memory Region Support 00:25:34.039 ================================ 00:25:34.039 Supported: No 00:25:34.039 00:25:34.039 Admin Command Set Attributes 00:25:34.039 ============================ 00:25:34.039 Security Send/Receive: Not Supported 00:25:34.039 Format NVM: Not Supported 00:25:34.039 Firmware Activate/Download: Not Supported 00:25:34.039 Namespace Management: Not Supported 00:25:34.039 Device Self-Test: Not Supported 00:25:34.039 Directives: Not Supported 00:25:34.039 NVMe-MI: Not Supported 00:25:34.039 Virtualization Management: Not Supported 00:25:34.039 Doorbell Buffer Config: Not Supported 00:25:34.039 Get LBA Status Capability: Not Supported 00:25:34.039 Command & Feature Lockdown Capability: Not Supported 00:25:34.039 Abort Command Limit: 4 00:25:34.039 Async Event Request Limit: 4 00:25:34.039 Number of Firmware Slots: N/A 00:25:34.039 Firmware Slot 1 Read-Only: N/A 00:25:34.039 Firmware Activation Without Reset: N/A 00:25:34.039 Multiple 
Update Detection Support: N/A 00:25:34.039 Firmware Update Granularity: No Information Provided 00:25:34.039 Per-Namespace SMART Log: No 00:25:34.039 Asymmetric Namespace Access Log Page: Not Supported 00:25:34.039 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:25:34.039 Command Effects Log Page: Supported 00:25:34.039 Get Log Page Extended Data: Supported 00:25:34.039 Telemetry Log Pages: Not Supported 00:25:34.039 Persistent Event Log Pages: Not Supported 00:25:34.039 Supported Log Pages Log Page: May Support 00:25:34.039 Commands Supported & Effects Log Page: Not Supported 00:25:34.039 Feature Identifiers & Effects Log Page:May Support 00:25:34.039 NVMe-MI Commands & Effects Log Page: May Support 00:25:34.039 Data Area 4 for Telemetry Log: Not Supported 00:25:34.039 Error Log Page Entries Supported: 128 00:25:34.039 Keep Alive: Supported 00:25:34.039 Keep Alive Granularity: 10000 ms 00:25:34.039 00:25:34.039 NVM Command Set Attributes 00:25:34.039 ========================== 00:25:34.039 Submission Queue Entry Size 00:25:34.039 Max: 64 00:25:34.039 Min: 64 00:25:34.039 Completion Queue Entry Size 00:25:34.039 Max: 16 00:25:34.039 Min: 16 00:25:34.039 Number of Namespaces: 32 00:25:34.039 Compare Command: Supported 00:25:34.039 Write Uncorrectable Command: Not Supported 00:25:34.039 Dataset Management Command: Supported 00:25:34.039 Write Zeroes Command: Supported 00:25:34.039 Set Features Save Field: Not Supported 00:25:34.039 Reservations: Supported 00:25:34.039 Timestamp: Not Supported 00:25:34.039 Copy: Supported 00:25:34.039 Volatile Write Cache: Present 00:25:34.039 Atomic Write Unit (Normal): 1 00:25:34.039 Atomic Write Unit (PFail): 1 00:25:34.039 Atomic Compare & Write Unit: 1 00:25:34.039 Fused Compare & Write: Supported 00:25:34.039 Scatter-Gather List 00:25:34.039 SGL Command Set: Supported 00:25:34.039 SGL Keyed: Supported 00:25:34.039 SGL Bit Bucket Descriptor: Not Supported 00:25:34.039 SGL Metadata Pointer: Not Supported 00:25:34.039 Oversized SGL: Not Supported 00:25:34.039 SGL Metadata Address: Not Supported 00:25:34.039 SGL Offset: Supported 00:25:34.039 Transport SGL Data Block: Not Supported 00:25:34.039 Replay Protected Memory Block: Not Supported 00:25:34.039 00:25:34.039 Firmware Slot Information 00:25:34.039 ========================= 00:25:34.039 Active slot: 1 00:25:34.039 Slot 1 Firmware Revision: 24.01.1 00:25:34.040 00:25:34.040 00:25:34.040 Commands Supported and Effects 00:25:34.040 ============================== 00:25:34.040 Admin Commands 00:25:34.040 -------------- 00:25:34.040 Get Log Page (02h): Supported 00:25:34.040 Identify (06h): Supported 00:25:34.040 Abort (08h): Supported 00:25:34.040 Set Features (09h): Supported 00:25:34.040 Get Features (0Ah): Supported 00:25:34.040 Asynchronous Event Request (0Ch): Supported 00:25:34.040 Keep Alive (18h): Supported 00:25:34.040 I/O Commands 00:25:34.040 ------------ 00:25:34.040 Flush (00h): Supported LBA-Change 00:25:34.040 Write (01h): Supported LBA-Change 00:25:34.040 Read (02h): Supported 00:25:34.040 Compare (05h): Supported 00:25:34.040 Write Zeroes (08h): Supported LBA-Change 00:25:34.040 Dataset Management (09h): Supported LBA-Change 00:25:34.040 Copy (19h): Supported LBA-Change 00:25:34.040 Unknown (79h): Supported LBA-Change 00:25:34.040 Unknown (7Ah): Supported 00:25:34.040 00:25:34.040 Error Log 00:25:34.040 ========= 00:25:34.040 00:25:34.040 Arbitration 00:25:34.040 =========== 00:25:34.040 Arbitration Burst: 1 00:25:34.040 00:25:34.040 Power Management 00:25:34.040 ================ 00:25:34.040 
Number of Power States: 1 00:25:34.040 Current Power State: Power State #0 00:25:34.040 Power State #0: 00:25:34.040 Max Power: 0.00 W 00:25:34.040 Non-Operational State: Operational 00:25:34.040 Entry Latency: Not Reported 00:25:34.040 Exit Latency: Not Reported 00:25:34.040 Relative Read Throughput: 0 00:25:34.040 Relative Read Latency: 0 00:25:34.040 Relative Write Throughput: 0 00:25:34.040 Relative Write Latency: 0 00:25:34.040 Idle Power: Not Reported 00:25:34.040 Active Power: Not Reported 00:25:34.040 Non-Operational Permissive Mode: Not Supported 00:25:34.040 00:25:34.040 Health Information 00:25:34.040 ================== 00:25:34.040 Critical Warnings: 00:25:34.040 Available Spare Space: OK 00:25:34.040 Temperature: OK 00:25:34.040 Device Reliability: OK 00:25:34.040 Read Only: No 00:25:34.040 Volatile Memory Backup: OK 00:25:34.040 Current Temperature: 0 Kelvin (-273 Celsius) 00:25:34.040 Temperature Threshold: [2024-07-14 03:57:52.866352] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:34.040 [2024-07-14 03:57:52.866364] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:34.040 [2024-07-14 03:57:52.866371] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1f065a0) 00:25:34.040 [2024-07-14 03:57:52.866381] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.040 [2024-07-14 03:57:52.866404] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f71d80, cid 7, qid 0 00:25:34.040 [2024-07-14 03:57:52.866586] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:34.040 [2024-07-14 03:57:52.866599] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:34.040 [2024-07-14 03:57:52.866606] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:34.040 [2024-07-14 03:57:52.866613] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1f71d80) on tqpair=0x1f065a0 00:25:34.040 [2024-07-14 03:57:52.866656] nvme_ctrlr.c:4220:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:25:34.040 [2024-07-14 03:57:52.866678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.040 [2024-07-14 03:57:52.866690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.040 [2024-07-14 03:57:52.866700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.040 [2024-07-14 03:57:52.866710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.040 [2024-07-14 03:57:52.866723] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:34.040 [2024-07-14 03:57:52.866731] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:34.040 [2024-07-14 03:57:52.866737] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f065a0) 00:25:34.040 [2024-07-14 03:57:52.866748] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.040 [2024-07-14 03:57:52.866770] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f71800, cid 3, qid 0 
00:25:34.040 [2024-07-14 03:57:52.866967] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:34.040 [2024-07-14 03:57:52.866983] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:34.040 [2024-07-14 03:57:52.866990] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:34.040 [2024-07-14 03:57:52.866997] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1f71800) on tqpair=0x1f065a0 00:25:34.040 [2024-07-14 03:57:52.867009] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:34.040 [2024-07-14 03:57:52.867017] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:34.040 [2024-07-14 03:57:52.867023] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f065a0) 00:25:34.040 [2024-07-14 03:57:52.867034] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.040 [2024-07-14 03:57:52.867060] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f71800, cid 3, qid 0 00:25:34.040 [2024-07-14 03:57:52.867230] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:34.040 [2024-07-14 03:57:52.867245] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:34.040 [2024-07-14 03:57:52.867252] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:34.040 [2024-07-14 03:57:52.867258] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1f71800) on tqpair=0x1f065a0 00:25:34.040 [2024-07-14 03:57:52.867268] nvme_ctrlr.c:1070:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:25:34.040 [2024-07-14 03:57:52.867276] nvme_ctrlr.c:1073:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:25:34.040 [2024-07-14 03:57:52.867292] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:34.040 [2024-07-14 03:57:52.867300] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:34.040 [2024-07-14 03:57:52.867307] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f065a0) 00:25:34.040 [2024-07-14 03:57:52.867317] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.040 [2024-07-14 03:57:52.867338] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f71800, cid 3, qid 0 00:25:34.040 [2024-07-14 03:57:52.867512] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:34.040 [2024-07-14 03:57:52.867527] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:34.040 [2024-07-14 03:57:52.867534] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:34.040 [2024-07-14 03:57:52.867541] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1f71800) on tqpair=0x1f065a0 00:25:34.040 [2024-07-14 03:57:52.867562] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:34.040 [2024-07-14 03:57:52.867572] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:34.040 [2024-07-14 03:57:52.867579] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f065a0) 00:25:34.040 [2024-07-14 03:57:52.867590] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.040 [2024-07-14 
03:57:52.867610] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f71800, cid 3, qid 0 00:25:34.040 [2024-07-14 03:57:52.867757] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:34.040 [2024-07-14 03:57:52.867769] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:34.040 [2024-07-14 03:57:52.867776] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:34.040 [2024-07-14 03:57:52.867782] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1f71800) on tqpair=0x1f065a0 00:25:34.040 [2024-07-14 03:57:52.867799] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:34.040 [2024-07-14 03:57:52.867808] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:34.041 [2024-07-14 03:57:52.867814] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f065a0) 00:25:34.041 [2024-07-14 03:57:52.867825] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.041 [2024-07-14 03:57:52.867844] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f71800, cid 3, qid 0 00:25:34.041 [2024-07-14 03:57:52.867996] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:34.041 [2024-07-14 03:57:52.868009] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:34.041 [2024-07-14 03:57:52.868016] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:34.041 [2024-07-14 03:57:52.868023] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1f71800) on tqpair=0x1f065a0 00:25:34.041 [2024-07-14 03:57:52.868040] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:34.041 [2024-07-14 03:57:52.868049] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:34.041 [2024-07-14 03:57:52.868055] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f065a0) 00:25:34.041 [2024-07-14 03:57:52.868066] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.041 [2024-07-14 03:57:52.868086] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f71800, cid 3, qid 0 00:25:34.041 [2024-07-14 03:57:52.868220] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:34.041 [2024-07-14 03:57:52.868236] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:34.041 [2024-07-14 03:57:52.868243] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:34.041 [2024-07-14 03:57:52.868250] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1f71800) on tqpair=0x1f065a0 00:25:34.041 [2024-07-14 03:57:52.868266] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:34.041 [2024-07-14 03:57:52.868275] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:34.041 [2024-07-14 03:57:52.868282] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f065a0) 00:25:34.041 [2024-07-14 03:57:52.868292] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.041 [2024-07-14 03:57:52.868312] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f71800, cid 3, qid 0 00:25:34.041 [2024-07-14 03:57:52.868455] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: 
*DEBUG*: pdu type = 5 00:25:34.041 [2024-07-14 03:57:52.868470] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:34.041 [2024-07-14 03:57:52.868477] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:34.041 [2024-07-14 03:57:52.868484] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1f71800) on tqpair=0x1f065a0 00:25:34.041 [2024-07-14 03:57:52.868501] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:34.041 [2024-07-14 03:57:52.868514] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:34.041 [2024-07-14 03:57:52.868521] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f065a0) 00:25:34.041 [2024-07-14 03:57:52.868531] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.041 [2024-07-14 03:57:52.868552] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f71800, cid 3, qid 0 00:25:34.041 [2024-07-14 03:57:52.868694] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:34.041 [2024-07-14 03:57:52.868708] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:34.041 [2024-07-14 03:57:52.868715] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:34.041 [2024-07-14 03:57:52.868722] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1f71800) on tqpair=0x1f065a0 00:25:34.041 [2024-07-14 03:57:52.868739] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:34.041 [2024-07-14 03:57:52.868748] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:34.041 [2024-07-14 03:57:52.868755] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f065a0) 00:25:34.041 [2024-07-14 03:57:52.868765] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.041 [2024-07-14 03:57:52.868785] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f71800, cid 3, qid 0 00:25:34.041 [2024-07-14 03:57:52.872895] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:34.041 [2024-07-14 03:57:52.872912] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:34.041 [2024-07-14 03:57:52.872919] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:34.041 [2024-07-14 03:57:52.872926] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1f71800) on tqpair=0x1f065a0 00:25:34.041 [2024-07-14 03:57:52.872944] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:34.041 [2024-07-14 03:57:52.872953] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:34.041 [2024-07-14 03:57:52.872960] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f065a0) 00:25:34.041 [2024-07-14 03:57:52.872970] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.041 [2024-07-14 03:57:52.872991] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f71800, cid 3, qid 0 00:25:34.041 [2024-07-14 03:57:52.873172] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:34.041 [2024-07-14 03:57:52.873185] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:34.041 [2024-07-14 03:57:52.873192] 
nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:34.041 [2024-07-14 03:57:52.873198] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1f71800) on tqpair=0x1f065a0 00:25:34.041 [2024-07-14 03:57:52.873212] nvme_ctrlr.c:1192:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 5 milliseconds 00:25:34.041 0 Kelvin (-273 Celsius) 00:25:34.041 Available Spare: 0% 00:25:34.041 Available Spare Threshold: 0% 00:25:34.041 Life Percentage Used: 0% 00:25:34.041 Data Units Read: 0 00:25:34.041 Data Units Written: 0 00:25:34.041 Host Read Commands: 0 00:25:34.041 Host Write Commands: 0 00:25:34.041 Controller Busy Time: 0 minutes 00:25:34.041 Power Cycles: 0 00:25:34.041 Power On Hours: 0 hours 00:25:34.041 Unsafe Shutdowns: 0 00:25:34.041 Unrecoverable Media Errors: 0 00:25:34.041 Lifetime Error Log Entries: 0 00:25:34.041 Warning Temperature Time: 0 minutes 00:25:34.041 Critical Temperature Time: 0 minutes 00:25:34.041 00:25:34.041 Number of Queues 00:25:34.041 ================ 00:25:34.041 Number of I/O Submission Queues: 127 00:25:34.041 Number of I/O Completion Queues: 127 00:25:34.041 00:25:34.041 Active Namespaces 00:25:34.041 ================= 00:25:34.041 Namespace ID:1 00:25:34.041 Error Recovery Timeout: Unlimited 00:25:34.041 Command Set Identifier: NVM (00h) 00:25:34.041 Deallocate: Supported 00:25:34.041 Deallocated/Unwritten Error: Not Supported 00:25:34.041 Deallocated Read Value: Unknown 00:25:34.041 Deallocate in Write Zeroes: Not Supported 00:25:34.041 Deallocated Guard Field: 0xFFFF 00:25:34.041 Flush: Supported 00:25:34.041 Reservation: Supported 00:25:34.041 Namespace Sharing Capabilities: Multiple Controllers 00:25:34.041 Size (in LBAs): 131072 (0GiB) 00:25:34.041 Capacity (in LBAs): 131072 (0GiB) 00:25:34.041 Utilization (in LBAs): 131072 (0GiB) 00:25:34.041 NGUID: ABCDEF0123456789ABCDEF0123456789 00:25:34.041 EUI64: ABCDEF0123456789 00:25:34.041 UUID: 70e3c19e-8279-424a-be50-34051b0108d6 00:25:34.041 Thin Provisioning: Not Supported 00:25:34.041 Per-NS Atomic Units: Yes 00:25:34.041 Atomic Boundary Size (Normal): 0 00:25:34.041 Atomic Boundary Size (PFail): 0 00:25:34.041 Atomic Boundary Offset: 0 00:25:34.041 Maximum Single Source Range Length: 65535 00:25:34.041 Maximum Copy Length: 65535 00:25:34.041 Maximum Source Range Count: 1 00:25:34.041 NGUID/EUI64 Never Reused: No 00:25:34.041 Namespace Write Protected: No 00:25:34.041 Number of LBA Formats: 1 00:25:34.041 Current LBA Format: LBA Format #00 00:25:34.041 LBA Format #00: Data Size: 512 Metadata Size: 0 00:25:34.041 00:25:34.041 03:57:52 -- host/identify.sh@51 -- # sync 00:25:34.042 03:57:52 -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:34.042 03:57:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:34.042 03:57:52 -- common/autotest_common.sh@10 -- # set +x 00:25:34.042 03:57:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:34.042 03:57:52 -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:25:34.042 03:57:52 -- host/identify.sh@56 -- # nvmftestfini 00:25:34.042 03:57:52 -- nvmf/common.sh@476 -- # nvmfcleanup 00:25:34.042 03:57:52 -- nvmf/common.sh@116 -- # sync 00:25:34.042 03:57:52 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:25:34.042 03:57:52 -- nvmf/common.sh@119 -- # set +e 00:25:34.042 03:57:52 -- nvmf/common.sh@120 -- # for i in {1..20} 00:25:34.042 03:57:52 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:25:34.042 rmmod nvme_tcp 00:25:34.042 rmmod 
nvme_fabrics 00:25:34.042 rmmod nvme_keyring 00:25:34.042 03:57:52 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:25:34.042 03:57:52 -- nvmf/common.sh@123 -- # set -e 00:25:34.042 03:57:52 -- nvmf/common.sh@124 -- # return 0 00:25:34.042 03:57:52 -- nvmf/common.sh@477 -- # '[' -n 2466119 ']' 00:25:34.042 03:57:52 -- nvmf/common.sh@478 -- # killprocess 2466119 00:25:34.042 03:57:52 -- common/autotest_common.sh@926 -- # '[' -z 2466119 ']' 00:25:34.042 03:57:52 -- common/autotest_common.sh@930 -- # kill -0 2466119 00:25:34.042 03:57:52 -- common/autotest_common.sh@931 -- # uname 00:25:34.042 03:57:52 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:34.042 03:57:52 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2466119 00:25:34.300 03:57:52 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:25:34.300 03:57:52 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:25:34.300 03:57:52 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2466119' 00:25:34.300 killing process with pid 2466119 00:25:34.300 03:57:52 -- common/autotest_common.sh@945 -- # kill 2466119 00:25:34.300 [2024-07-14 03:57:52.982712] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:25:34.300 03:57:52 -- common/autotest_common.sh@950 -- # wait 2466119 00:25:34.300 03:57:53 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:25:34.300 03:57:53 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:25:34.300 03:57:53 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:25:34.300 03:57:53 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:34.300 03:57:53 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:25:34.300 03:57:53 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:34.300 03:57:53 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:34.300 03:57:53 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:36.832 03:57:55 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:25:36.832 00:25:36.832 real 0m5.873s 00:25:36.832 user 0m6.983s 00:25:36.832 sys 0m1.826s 00:25:36.832 03:57:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:36.832 03:57:55 -- common/autotest_common.sh@10 -- # set +x 00:25:36.832 ************************************ 00:25:36.832 END TEST nvmf_identify 00:25:36.832 ************************************ 00:25:36.832 03:57:55 -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:25:36.832 03:57:55 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:25:36.832 03:57:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:36.832 03:57:55 -- common/autotest_common.sh@10 -- # set +x 00:25:36.832 ************************************ 00:25:36.832 START TEST nvmf_perf 00:25:36.832 ************************************ 00:25:36.832 03:57:55 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:25:36.832 * Looking for test storage... 
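The nvmf_identify teardown just above follows the usual unwind order that the later tests repeat. A minimal sketch of the same steps, assuming $pid holds the nvmf_tgt PID and the namespace/interface names used in this run; the netns removal is an assumed equivalent of the _remove_spdk_ns helper, whose output is suppressed in the log:
  scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # drop the subsystem first
  modprobe -v -r nvme-tcp                                           # then unload the initiator-side modules
  modprobe -v -r nvme-fabrics
  kill "$pid" && wait "$pid"                                        # stop the target process
  ip netns delete cvl_0_0_ns_spdk                                   # assumption: what _remove_spdk_ns amounts to here
  ip -4 addr flush cvl_0_1                                          # clear the initiator-side address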
00:25:36.832 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:36.832 03:57:55 -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:36.832 03:57:55 -- nvmf/common.sh@7 -- # uname -s 00:25:36.832 03:57:55 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:36.832 03:57:55 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:36.832 03:57:55 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:36.832 03:57:55 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:36.832 03:57:55 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:36.832 03:57:55 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:36.832 03:57:55 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:36.832 03:57:55 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:36.832 03:57:55 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:36.832 03:57:55 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:36.832 03:57:55 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:36.832 03:57:55 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:36.832 03:57:55 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:36.832 03:57:55 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:36.832 03:57:55 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:36.832 03:57:55 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:36.832 03:57:55 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:36.832 03:57:55 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:36.832 03:57:55 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:36.832 03:57:55 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:36.832 03:57:55 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:36.832 03:57:55 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:36.832 03:57:55 -- paths/export.sh@5 -- # export PATH 00:25:36.832 03:57:55 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:36.832 03:57:55 -- nvmf/common.sh@46 -- # : 0 00:25:36.832 03:57:55 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:25:36.832 03:57:55 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:25:36.832 03:57:55 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:25:36.832 03:57:55 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:36.832 03:57:55 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:36.832 03:57:55 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:25:36.832 03:57:55 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:25:36.832 03:57:55 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:25:36.832 03:57:55 -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:25:36.832 03:57:55 -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:25:36.832 03:57:55 -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:36.832 03:57:55 -- host/perf.sh@17 -- # nvmftestinit 00:25:36.832 03:57:55 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:25:36.832 03:57:55 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:36.832 03:57:55 -- nvmf/common.sh@436 -- # prepare_net_devs 00:25:36.832 03:57:55 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:25:36.832 03:57:55 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:25:36.832 03:57:55 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:36.832 03:57:55 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:36.832 03:57:55 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:36.832 03:57:55 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:25:36.832 03:57:55 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:25:36.832 03:57:55 -- nvmf/common.sh@284 -- # xtrace_disable 00:25:36.832 03:57:55 -- common/autotest_common.sh@10 -- # set +x 00:25:38.734 03:57:57 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:25:38.734 03:57:57 -- nvmf/common.sh@290 -- # pci_devs=() 00:25:38.734 03:57:57 -- nvmf/common.sh@290 -- # local -a pci_devs 00:25:38.734 03:57:57 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:25:38.734 03:57:57 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:25:38.734 03:57:57 -- nvmf/common.sh@292 -- # pci_drivers=() 00:25:38.734 03:57:57 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:25:38.734 03:57:57 -- nvmf/common.sh@294 -- # net_devs=() 
00:25:38.734 03:57:57 -- nvmf/common.sh@294 -- # local -ga net_devs 00:25:38.734 03:57:57 -- nvmf/common.sh@295 -- # e810=() 00:25:38.734 03:57:57 -- nvmf/common.sh@295 -- # local -ga e810 00:25:38.734 03:57:57 -- nvmf/common.sh@296 -- # x722=() 00:25:38.734 03:57:57 -- nvmf/common.sh@296 -- # local -ga x722 00:25:38.734 03:57:57 -- nvmf/common.sh@297 -- # mlx=() 00:25:38.734 03:57:57 -- nvmf/common.sh@297 -- # local -ga mlx 00:25:38.734 03:57:57 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:38.734 03:57:57 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:38.734 03:57:57 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:38.734 03:57:57 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:38.734 03:57:57 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:38.734 03:57:57 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:38.734 03:57:57 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:38.734 03:57:57 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:38.734 03:57:57 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:38.734 03:57:57 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:38.734 03:57:57 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:38.734 03:57:57 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:25:38.734 03:57:57 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:25:38.734 03:57:57 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:25:38.734 03:57:57 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:25:38.734 03:57:57 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:25:38.734 03:57:57 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:25:38.734 03:57:57 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:38.734 03:57:57 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:38.734 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:38.734 03:57:57 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:38.734 03:57:57 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:38.734 03:57:57 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:38.734 03:57:57 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:38.734 03:57:57 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:38.734 03:57:57 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:38.734 03:57:57 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:38.734 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:38.734 03:57:57 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:38.734 03:57:57 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:38.734 03:57:57 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:38.734 03:57:57 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:38.734 03:57:57 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:38.734 03:57:57 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:25:38.734 03:57:57 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:25:38.734 03:57:57 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:25:38.734 03:57:57 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:38.734 03:57:57 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:38.734 03:57:57 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:38.734 03:57:57 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
00:25:38.734 03:57:57 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:38.734 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:38.734 03:57:57 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:38.734 03:57:57 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:38.734 03:57:57 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:38.734 03:57:57 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:38.734 03:57:57 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:38.734 03:57:57 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:38.734 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:38.734 03:57:57 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:38.734 03:57:57 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:25:38.734 03:57:57 -- nvmf/common.sh@402 -- # is_hw=yes 00:25:38.734 03:57:57 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:25:38.734 03:57:57 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:25:38.734 03:57:57 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:25:38.734 03:57:57 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:38.734 03:57:57 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:38.734 03:57:57 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:38.734 03:57:57 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:25:38.734 03:57:57 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:38.734 03:57:57 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:38.734 03:57:57 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:25:38.734 03:57:57 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:38.734 03:57:57 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:38.734 03:57:57 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:25:38.734 03:57:57 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:25:38.734 03:57:57 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:25:38.734 03:57:57 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:38.734 03:57:57 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:38.734 03:57:57 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:38.734 03:57:57 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:25:38.734 03:57:57 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:38.734 03:57:57 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:38.734 03:57:57 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:38.734 03:57:57 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:25:38.734 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:38.734 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.111 ms 00:25:38.734 00:25:38.734 --- 10.0.0.2 ping statistics --- 00:25:38.734 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:38.734 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 00:25:38.734 03:57:57 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:38.734 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:38.734 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.112 ms 00:25:38.734 00:25:38.734 --- 10.0.0.1 ping statistics --- 00:25:38.734 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:38.734 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:25:38.734 03:57:57 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:38.734 03:57:57 -- nvmf/common.sh@410 -- # return 0 00:25:38.734 03:57:57 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:25:38.734 03:57:57 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:38.734 03:57:57 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:25:38.734 03:57:57 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:25:38.734 03:57:57 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:38.734 03:57:57 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:25:38.734 03:57:57 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:25:38.734 03:57:57 -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:25:38.734 03:57:57 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:25:38.734 03:57:57 -- common/autotest_common.sh@712 -- # xtrace_disable 00:25:38.734 03:57:57 -- common/autotest_common.sh@10 -- # set +x 00:25:38.734 03:57:57 -- nvmf/common.sh@469 -- # nvmfpid=2468320 00:25:38.734 03:57:57 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:38.734 03:57:57 -- nvmf/common.sh@470 -- # waitforlisten 2468320 00:25:38.734 03:57:57 -- common/autotest_common.sh@819 -- # '[' -z 2468320 ']' 00:25:38.734 03:57:57 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:38.734 03:57:57 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:38.734 03:57:57 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:38.734 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:38.734 03:57:57 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:38.734 03:57:57 -- common/autotest_common.sh@10 -- # set +x 00:25:38.734 [2024-07-14 03:57:57.558876] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:25:38.734 [2024-07-14 03:57:57.558968] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:38.734 EAL: No free 2048 kB hugepages reported on node 1 00:25:38.734 [2024-07-14 03:57:57.630069] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:38.991 [2024-07-14 03:57:57.719691] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:38.991 [2024-07-14 03:57:57.719880] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:38.991 [2024-07-14 03:57:57.719902] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:38.991 [2024-07-14 03:57:57.719918] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
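The interface wiring that nvmftestinit performs above can be replayed by hand. A minimal sketch, assuming the two ports were detected as cvl_0_0 and cvl_0_1 as in this run; the target-side port is moved into its own network namespace so initiator and target can share one host:
  ip netns add cvl_0_0_ns_spdk                                        # namespace that will own the target port
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move the target port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address on the host side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address inside the namespace
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # open TCP port 4420 on the host-side interface
  ping -c 1 10.0.0.2                                                  # host -> namespace reachability check
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # namespace -> host reachability check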
00:25:38.991 [2024-07-14 03:57:57.719990] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:38.991 [2024-07-14 03:57:57.720045] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:38.991 [2024-07-14 03:57:57.720162] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:38.991 [2024-07-14 03:57:57.720164] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:39.555 03:57:58 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:39.555 03:57:58 -- common/autotest_common.sh@852 -- # return 0 00:25:39.555 03:57:58 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:25:39.555 03:57:58 -- common/autotest_common.sh@718 -- # xtrace_disable 00:25:39.555 03:57:58 -- common/autotest_common.sh@10 -- # set +x 00:25:39.811 03:57:58 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:39.811 03:57:58 -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:25:39.811 03:57:58 -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:25:43.089 03:58:01 -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:25:43.089 03:58:01 -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:25:43.089 03:58:01 -- host/perf.sh@30 -- # local_nvme_trid=0000:88:00.0 00:25:43.089 03:58:01 -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:25:43.347 03:58:02 -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:25:43.347 03:58:02 -- host/perf.sh@33 -- # '[' -n 0000:88:00.0 ']' 00:25:43.347 03:58:02 -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:25:43.347 03:58:02 -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:25:43.347 03:58:02 -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:25:43.606 [2024-07-14 03:58:02.316222] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:43.606 03:58:02 -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:43.866 03:58:02 -- host/perf.sh@45 -- # for bdev in $bdevs 00:25:43.866 03:58:02 -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:43.866 03:58:02 -- host/perf.sh@45 -- # for bdev in $bdevs 00:25:43.866 03:58:02 -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:25:44.124 03:58:03 -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:44.383 [2024-07-14 03:58:03.275789] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:44.383 03:58:03 -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:25:44.639 03:58:03 -- host/perf.sh@52 -- # '[' -n 0000:88:00.0 ']' 00:25:44.639 03:58:03 -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:25:44.639 03:58:03 -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 
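The target-side provisioning that perf.sh performs above reduces to a short rpc.py sequence. A minimal sketch of the same steps, assuming the nvmf_tgt application is already running and that the bdev name, NQN, address and port match the values used in this run:
  scripts/rpc.py nvmf_create_transport -t tcp -o                       # TCP transport with the option string used in this run
  scripts/rpc.py bdev_malloc_create 64 512                             # returns the bdev name (Malloc0 here)
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # allow any host, set a serial
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420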
00:25:44.639 03:58:03 -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:25:46.047 Initializing NVMe Controllers 00:25:46.047 Attached to NVMe Controller at 0000:88:00.0 [8086:0a54] 00:25:46.047 Associating PCIE (0000:88:00.0) NSID 1 with lcore 0 00:25:46.047 Initialization complete. Launching workers. 00:25:46.047 ======================================================== 00:25:46.047 Latency(us) 00:25:46.047 Device Information : IOPS MiB/s Average min max 00:25:46.047 PCIE (0000:88:00.0) NSID 1 from core 0: 86183.56 336.65 370.89 22.15 7256.48 00:25:46.047 ======================================================== 00:25:46.047 Total : 86183.56 336.65 370.89 22.15 7256.48 00:25:46.047 00:25:46.047 03:58:04 -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:46.047 EAL: No free 2048 kB hugepages reported on node 1 00:25:47.419 Initializing NVMe Controllers 00:25:47.419 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:47.419 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:47.419 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:25:47.419 Initialization complete. Launching workers. 00:25:47.419 ======================================================== 00:25:47.419 Latency(us) 00:25:47.419 Device Information : IOPS MiB/s Average min max 00:25:47.419 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 57.00 0.22 18063.41 211.32 45660.62 00:25:47.419 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 59.00 0.23 17430.77 7940.92 47926.77 00:25:47.419 ======================================================== 00:25:47.419 Total : 116.00 0.45 17741.64 211.32 47926.77 00:25:47.419 00:25:47.419 03:58:05 -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:47.419 EAL: No free 2048 kB hugepages reported on node 1 00:25:48.793 Initializing NVMe Controllers 00:25:48.793 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:48.793 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:48.793 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:25:48.793 Initialization complete. Launching workers. 
00:25:48.793 ======================================================== 00:25:48.793 Latency(us) 00:25:48.793 Device Information : IOPS MiB/s Average min max 00:25:48.793 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8233.02 32.16 3889.90 656.19 10346.36 00:25:48.793 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3965.23 15.49 8126.35 5083.05 16510.97 00:25:48.793 ======================================================== 00:25:48.793 Total : 12198.25 47.65 5267.02 656.19 16510.97 00:25:48.793 00:25:48.793 03:58:07 -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:25:48.793 03:58:07 -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:25:48.793 03:58:07 -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:48.793 EAL: No free 2048 kB hugepages reported on node 1 00:25:51.322 Initializing NVMe Controllers 00:25:51.322 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:51.322 Controller IO queue size 128, less than required. 00:25:51.322 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:51.322 Controller IO queue size 128, less than required. 00:25:51.322 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:51.322 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:51.322 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:25:51.322 Initialization complete. Launching workers. 00:25:51.322 ======================================================== 00:25:51.322 Latency(us) 00:25:51.322 Device Information : IOPS MiB/s Average min max 00:25:51.322 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 876.77 219.19 150953.75 83862.49 194535.77 00:25:51.322 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 606.34 151.59 221859.58 79752.65 332278.30 00:25:51.322 ======================================================== 00:25:51.322 Total : 1483.11 370.78 179942.21 79752.65 332278.30 00:25:51.322 00:25:51.322 03:58:09 -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:25:51.322 EAL: No free 2048 kB hugepages reported on node 1 00:25:51.322 No valid NVMe controllers or AIO or URING devices found 00:25:51.322 Initializing NVMe Controllers 00:25:51.322 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:51.322 Controller IO queue size 128, less than required. 00:25:51.322 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:51.322 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:25:51.322 Controller IO queue size 128, less than required. 00:25:51.322 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:51.322 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. 
Removing this ns from test 00:25:51.322 WARNING: Some requested NVMe devices were skipped 00:25:51.322 03:58:09 -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:25:51.322 EAL: No free 2048 kB hugepages reported on node 1 00:25:53.850 Initializing NVMe Controllers 00:25:53.850 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:53.850 Controller IO queue size 128, less than required. 00:25:53.850 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:53.850 Controller IO queue size 128, less than required. 00:25:53.850 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:53.850 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:53.850 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:25:53.850 Initialization complete. Launching workers. 00:25:53.850 00:25:53.850 ==================== 00:25:53.851 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:25:53.851 TCP transport: 00:25:53.851 polls: 38026 00:25:53.851 idle_polls: 19201 00:25:53.851 sock_completions: 18825 00:25:53.851 nvme_completions: 1929 00:25:53.851 submitted_requests: 3081 00:25:53.851 queued_requests: 1 00:25:53.851 00:25:53.851 ==================== 00:25:53.851 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:25:53.851 TCP transport: 00:25:53.851 polls: 38580 00:25:53.851 idle_polls: 10562 00:25:53.851 sock_completions: 28018 00:25:53.851 nvme_completions: 3422 00:25:53.851 submitted_requests: 5252 00:25:53.851 queued_requests: 1 00:25:53.851 ======================================================== 00:25:53.851 Latency(us) 00:25:53.851 Device Information : IOPS MiB/s Average min max 00:25:53.851 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 545.08 136.27 248714.98 97564.15 382512.06 00:25:53.851 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 917.45 229.36 141396.00 55632.89 184582.83 00:25:53.851 ======================================================== 00:25:53.851 Total : 1462.53 365.63 181393.38 55632.89 382512.06 00:25:53.851 00:25:53.851 03:58:12 -- host/perf.sh@66 -- # sync 00:25:53.851 03:58:12 -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:53.851 03:58:12 -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:25:53.851 03:58:12 -- host/perf.sh@71 -- # '[' -n 0000:88:00.0 ']' 00:25:53.851 03:58:12 -- host/perf.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:25:57.129 03:58:15 -- host/perf.sh@72 -- # ls_guid=d89fc4d3-71c0-4ae8-9ddd-a16c56a1efa1 00:25:57.129 03:58:15 -- host/perf.sh@73 -- # get_lvs_free_mb d89fc4d3-71c0-4ae8-9ddd-a16c56a1efa1 00:25:57.129 03:58:15 -- common/autotest_common.sh@1343 -- # local lvs_uuid=d89fc4d3-71c0-4ae8-9ddd-a16c56a1efa1 00:25:57.129 03:58:15 -- common/autotest_common.sh@1344 -- # local lvs_info 00:25:57.129 03:58:15 -- common/autotest_common.sh@1345 -- # local fc 00:25:57.129 03:58:15 -- common/autotest_common.sh@1346 -- # local cs 00:25:57.129 03:58:15 -- common/autotest_common.sh@1347 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:25:57.387 03:58:16 -- common/autotest_common.sh@1347 -- # lvs_info='[ 00:25:57.387 { 00:25:57.387 "uuid": "d89fc4d3-71c0-4ae8-9ddd-a16c56a1efa1", 00:25:57.387 "name": "lvs_0", 00:25:57.387 "base_bdev": "Nvme0n1", 00:25:57.387 "total_data_clusters": 238234, 00:25:57.387 "free_clusters": 238234, 00:25:57.387 "block_size": 512, 00:25:57.387 "cluster_size": 4194304 00:25:57.387 } 00:25:57.387 ]' 00:25:57.387 03:58:16 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="d89fc4d3-71c0-4ae8-9ddd-a16c56a1efa1") .free_clusters' 00:25:57.387 03:58:16 -- common/autotest_common.sh@1348 -- # fc=238234 00:25:57.387 03:58:16 -- common/autotest_common.sh@1349 -- # jq '.[] | select(.uuid=="d89fc4d3-71c0-4ae8-9ddd-a16c56a1efa1") .cluster_size' 00:25:57.387 03:58:16 -- common/autotest_common.sh@1349 -- # cs=4194304 00:25:57.387 03:58:16 -- common/autotest_common.sh@1352 -- # free_mb=952936 00:25:57.387 03:58:16 -- common/autotest_common.sh@1353 -- # echo 952936 00:25:57.387 952936 00:25:57.387 03:58:16 -- host/perf.sh@77 -- # '[' 952936 -gt 20480 ']' 00:25:57.387 03:58:16 -- host/perf.sh@78 -- # free_mb=20480 00:25:57.387 03:58:16 -- host/perf.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u d89fc4d3-71c0-4ae8-9ddd-a16c56a1efa1 lbd_0 20480 00:25:57.951 03:58:16 -- host/perf.sh@80 -- # lb_guid=8adc00b6-a018-4fb9-a7e5-442a398e2f41 00:25:57.951 03:58:16 -- host/perf.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore 8adc00b6-a018-4fb9-a7e5-442a398e2f41 lvs_n_0 00:25:58.885 03:58:17 -- host/perf.sh@83 -- # ls_nested_guid=cff5c88d-6e0a-4455-9366-79ebe8931664 00:25:58.885 03:58:17 -- host/perf.sh@84 -- # get_lvs_free_mb cff5c88d-6e0a-4455-9366-79ebe8931664 00:25:58.885 03:58:17 -- common/autotest_common.sh@1343 -- # local lvs_uuid=cff5c88d-6e0a-4455-9366-79ebe8931664 00:25:58.885 03:58:17 -- common/autotest_common.sh@1344 -- # local lvs_info 00:25:58.885 03:58:17 -- common/autotest_common.sh@1345 -- # local fc 00:25:58.885 03:58:17 -- common/autotest_common.sh@1346 -- # local cs 00:25:58.885 03:58:17 -- common/autotest_common.sh@1347 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:25:58.885 03:58:17 -- common/autotest_common.sh@1347 -- # lvs_info='[ 00:25:58.885 { 00:25:58.885 "uuid": "d89fc4d3-71c0-4ae8-9ddd-a16c56a1efa1", 00:25:58.885 "name": "lvs_0", 00:25:58.885 "base_bdev": "Nvme0n1", 00:25:58.885 "total_data_clusters": 238234, 00:25:58.885 "free_clusters": 233114, 00:25:58.885 "block_size": 512, 00:25:58.885 "cluster_size": 4194304 00:25:58.885 }, 00:25:58.885 { 00:25:58.885 "uuid": "cff5c88d-6e0a-4455-9366-79ebe8931664", 00:25:58.885 "name": "lvs_n_0", 00:25:58.885 "base_bdev": "8adc00b6-a018-4fb9-a7e5-442a398e2f41", 00:25:58.885 "total_data_clusters": 5114, 00:25:58.885 "free_clusters": 5114, 00:25:58.885 "block_size": 512, 00:25:58.885 "cluster_size": 4194304 00:25:58.885 } 00:25:58.885 ]' 00:25:58.885 03:58:17 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="cff5c88d-6e0a-4455-9366-79ebe8931664") .free_clusters' 00:25:59.142 03:58:17 -- common/autotest_common.sh@1348 -- # fc=5114 00:25:59.142 03:58:17 -- common/autotest_common.sh@1349 -- # jq '.[] | select(.uuid=="cff5c88d-6e0a-4455-9366-79ebe8931664") .cluster_size' 00:25:59.142 03:58:17 -- common/autotest_common.sh@1349 -- # cs=4194304 00:25:59.142 03:58:17 -- common/autotest_common.sh@1352 -- # 
free_mb=20456 00:25:59.142 03:58:17 -- common/autotest_common.sh@1353 -- # echo 20456 00:25:59.142 20456 00:25:59.142 03:58:17 -- host/perf.sh@85 -- # '[' 20456 -gt 20480 ']' 00:25:59.142 03:58:17 -- host/perf.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u cff5c88d-6e0a-4455-9366-79ebe8931664 lbd_nest_0 20456 00:25:59.400 03:58:18 -- host/perf.sh@88 -- # lb_nested_guid=86fa1460-3aff-4622-a930-8f85dc0a4263 00:25:59.400 03:58:18 -- host/perf.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:59.657 03:58:18 -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:25:59.657 03:58:18 -- host/perf.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 86fa1460-3aff-4622-a930-8f85dc0a4263 00:25:59.657 03:58:18 -- host/perf.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:59.915 03:58:18 -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:25:59.915 03:58:18 -- host/perf.sh@96 -- # io_size=("512" "131072") 00:25:59.915 03:58:18 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:25:59.915 03:58:18 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:25:59.915 03:58:18 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:59.915 EAL: No free 2048 kB hugepages reported on node 1 00:26:12.105 Initializing NVMe Controllers 00:26:12.105 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:12.105 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:12.105 Initialization complete. Launching workers. 00:26:12.105 ======================================================== 00:26:12.105 Latency(us) 00:26:12.105 Device Information : IOPS MiB/s Average min max 00:26:12.105 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 46.39 0.02 21597.89 256.37 46081.94 00:26:12.105 ======================================================== 00:26:12.105 Total : 46.39 0.02 21597.89 256.37 46081.94 00:26:12.105 00:26:12.105 03:58:29 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:26:12.105 03:58:29 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:12.105 EAL: No free 2048 kB hugepages reported on node 1 00:26:22.112 Initializing NVMe Controllers 00:26:22.112 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:22.112 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:22.112 Initialization complete. Launching workers. 
00:26:22.112 ======================================================== 00:26:22.112 Latency(us) 00:26:22.112 Device Information : IOPS MiB/s Average min max 00:26:22.112 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 77.10 9.64 12976.73 4997.57 47899.44 00:26:22.112 ======================================================== 00:26:22.112 Total : 77.10 9.64 12976.73 4997.57 47899.44 00:26:22.112 00:26:22.112 03:58:39 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:26:22.112 03:58:39 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:26:22.112 03:58:39 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:22.112 EAL: No free 2048 kB hugepages reported on node 1 00:26:32.077 Initializing NVMe Controllers 00:26:32.077 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:32.077 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:32.077 Initialization complete. Launching workers. 00:26:32.077 ======================================================== 00:26:32.077 Latency(us) 00:26:32.077 Device Information : IOPS MiB/s Average min max 00:26:32.077 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7070.60 3.45 4525.64 321.68 12135.75 00:26:32.077 ======================================================== 00:26:32.077 Total : 7070.60 3.45 4525.64 321.68 12135.75 00:26:32.077 00:26:32.077 03:58:49 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:26:32.077 03:58:49 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:32.077 EAL: No free 2048 kB hugepages reported on node 1 00:26:42.044 Initializing NVMe Controllers 00:26:42.044 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:42.044 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:42.044 Initialization complete. Launching workers. 00:26:42.044 ======================================================== 00:26:42.044 Latency(us) 00:26:42.044 Device Information : IOPS MiB/s Average min max 00:26:42.044 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1716.60 214.57 18644.61 1676.57 40839.66 00:26:42.044 ======================================================== 00:26:42.044 Total : 1716.60 214.57 18644.61 1676.57 40839.66 00:26:42.044 00:26:42.044 03:59:00 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:26:42.044 03:59:00 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:26:42.044 03:59:00 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:42.044 EAL: No free 2048 kB hugepages reported on node 1 00:26:52.005 Initializing NVMe Controllers 00:26:52.005 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:52.005 Controller IO queue size 128, less than required. 00:26:52.005 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:52.005 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:52.005 Initialization complete. Launching workers. 
00:26:52.005 ======================================================== 00:26:52.005 Latency(us) 00:26:52.005 Device Information : IOPS MiB/s Average min max 00:26:52.005 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 12013.86 5.87 10656.42 1795.68 23101.04 00:26:52.005 ======================================================== 00:26:52.005 Total : 12013.86 5.87 10656.42 1795.68 23101.04 00:26:52.005 00:26:52.005 03:59:10 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:26:52.005 03:59:10 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:52.005 EAL: No free 2048 kB hugepages reported on node 1 00:27:02.006 Initializing NVMe Controllers 00:27:02.006 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:02.006 Controller IO queue size 128, less than required. 00:27:02.006 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:02.006 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:27:02.006 Initialization complete. Launching workers. 00:27:02.006 ======================================================== 00:27:02.006 Latency(us) 00:27:02.006 Device Information : IOPS MiB/s Average min max 00:27:02.006 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1194.54 149.32 107313.35 24000.18 201990.95 00:27:02.006 ======================================================== 00:27:02.006 Total : 1194.54 149.32 107313.35 24000.18 201990.95 00:27:02.006 00:27:02.006 03:59:20 -- host/perf.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:02.006 03:59:20 -- host/perf.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 86fa1460-3aff-4622-a930-8f85dc0a4263 00:27:02.939 03:59:21 -- host/perf.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:27:02.939 03:59:21 -- host/perf.sh@107 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 8adc00b6-a018-4fb9-a7e5-442a398e2f41 00:27:03.508 03:59:22 -- host/perf.sh@108 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:27:03.508 03:59:22 -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:27:03.508 03:59:22 -- host/perf.sh@114 -- # nvmftestfini 00:27:03.508 03:59:22 -- nvmf/common.sh@476 -- # nvmfcleanup 00:27:03.508 03:59:22 -- nvmf/common.sh@116 -- # sync 00:27:03.508 03:59:22 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:27:03.508 03:59:22 -- nvmf/common.sh@119 -- # set +e 00:27:03.508 03:59:22 -- nvmf/common.sh@120 -- # for i in {1..20} 00:27:03.508 03:59:22 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:27:03.508 rmmod nvme_tcp 00:27:03.508 rmmod nvme_fabrics 00:27:03.766 rmmod nvme_keyring 00:27:03.766 03:59:22 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:27:03.766 03:59:22 -- nvmf/common.sh@123 -- # set -e 00:27:03.766 03:59:22 -- nvmf/common.sh@124 -- # return 0 00:27:03.766 03:59:22 -- nvmf/common.sh@477 -- # '[' -n 2468320 ']' 00:27:03.766 03:59:22 -- nvmf/common.sh@478 -- # killprocess 2468320 00:27:03.766 03:59:22 -- common/autotest_common.sh@926 -- # '[' -z 2468320 ']' 00:27:03.766 03:59:22 -- common/autotest_common.sh@930 -- # 
kill -0 2468320 00:27:03.766 03:59:22 -- common/autotest_common.sh@931 -- # uname 00:27:03.766 03:59:22 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:27:03.766 03:59:22 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2468320 00:27:03.766 03:59:22 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:27:03.766 03:59:22 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:27:03.766 03:59:22 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2468320' 00:27:03.766 killing process with pid 2468320 00:27:03.766 03:59:22 -- common/autotest_common.sh@945 -- # kill 2468320 00:27:03.766 03:59:22 -- common/autotest_common.sh@950 -- # wait 2468320 00:27:05.664 03:59:24 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:27:05.664 03:59:24 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:27:05.664 03:59:24 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:27:05.664 03:59:24 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:05.664 03:59:24 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:27:05.664 03:59:24 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:05.664 03:59:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:05.664 03:59:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:07.568 03:59:26 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:27:07.568 00:27:07.568 real 1m30.890s 00:27:07.568 user 5m35.299s 00:27:07.568 sys 0m15.040s 00:27:07.568 03:59:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:07.568 03:59:26 -- common/autotest_common.sh@10 -- # set +x 00:27:07.568 ************************************ 00:27:07.568 END TEST nvmf_perf 00:27:07.568 ************************************ 00:27:07.568 03:59:26 -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:27:07.568 03:59:26 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:27:07.568 03:59:26 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:07.568 03:59:26 -- common/autotest_common.sh@10 -- # set +x 00:27:07.568 ************************************ 00:27:07.568 START TEST nvmf_fio_host 00:27:07.568 ************************************ 00:27:07.568 03:59:26 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:27:07.568 * Looking for test storage... 
00:27:07.568 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:07.568 03:59:26 -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:07.568 03:59:26 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:07.568 03:59:26 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:07.568 03:59:26 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:07.568 03:59:26 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:07.568 03:59:26 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:07.568 03:59:26 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:07.568 03:59:26 -- paths/export.sh@5 -- # export PATH 00:27:07.568 03:59:26 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:07.568 03:59:26 -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:07.568 03:59:26 -- nvmf/common.sh@7 -- # uname -s 00:27:07.568 03:59:26 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:07.568 03:59:26 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:07.568 03:59:26 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:07.568 03:59:26 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:07.568 03:59:26 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:07.568 03:59:26 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:07.568 03:59:26 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:07.568 03:59:26 -- 
nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:07.568 03:59:26 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:07.568 03:59:26 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:07.568 03:59:26 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:07.568 03:59:26 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:07.568 03:59:26 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:07.568 03:59:26 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:07.568 03:59:26 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:07.568 03:59:26 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:07.568 03:59:26 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:07.568 03:59:26 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:07.568 03:59:26 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:07.568 03:59:26 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:07.569 03:59:26 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:07.569 03:59:26 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:07.569 03:59:26 -- paths/export.sh@5 -- # export PATH 00:27:07.569 03:59:26 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:07.569 03:59:26 -- nvmf/common.sh@46 -- # : 0 00:27:07.569 03:59:26 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:27:07.569 03:59:26 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:27:07.569 03:59:26 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:27:07.569 03:59:26 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:07.569 03:59:26 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:07.569 03:59:26 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:27:07.569 03:59:26 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:27:07.569 03:59:26 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:27:07.569 03:59:26 -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:27:07.569 03:59:26 -- host/fio.sh@14 -- # nvmftestinit 00:27:07.569 03:59:26 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:27:07.569 03:59:26 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:07.569 03:59:26 -- nvmf/common.sh@436 -- # prepare_net_devs 00:27:07.569 03:59:26 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:27:07.569 03:59:26 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:27:07.569 03:59:26 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:07.569 03:59:26 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:07.569 03:59:26 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:07.569 03:59:26 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:27:07.569 03:59:26 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:27:07.569 03:59:26 -- nvmf/common.sh@284 -- # xtrace_disable 00:27:07.569 03:59:26 -- common/autotest_common.sh@10 -- # set +x 00:27:09.470 03:59:28 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:27:09.470 03:59:28 -- nvmf/common.sh@290 -- # pci_devs=() 00:27:09.470 03:59:28 -- nvmf/common.sh@290 -- # local -a pci_devs 00:27:09.470 03:59:28 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:27:09.470 03:59:28 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:27:09.470 03:59:28 -- nvmf/common.sh@292 -- # pci_drivers=() 00:27:09.470 03:59:28 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:27:09.470 03:59:28 -- nvmf/common.sh@294 -- # net_devs=() 00:27:09.470 03:59:28 -- nvmf/common.sh@294 -- # local -ga net_devs 00:27:09.470 03:59:28 -- nvmf/common.sh@295 -- # e810=() 00:27:09.470 03:59:28 -- nvmf/common.sh@295 -- # local -ga e810 00:27:09.470 03:59:28 -- nvmf/common.sh@296 -- # x722=() 00:27:09.470 03:59:28 -- nvmf/common.sh@296 -- # local -ga x722 00:27:09.470 03:59:28 -- nvmf/common.sh@297 -- # mlx=() 00:27:09.470 03:59:28 -- nvmf/common.sh@297 -- # local -ga mlx 00:27:09.470 03:59:28 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:09.470 03:59:28 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:09.470 03:59:28 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:09.470 03:59:28 -- 
nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:09.470 03:59:28 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:09.470 03:59:28 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:09.470 03:59:28 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:09.470 03:59:28 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:09.470 03:59:28 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:09.470 03:59:28 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:09.470 03:59:28 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:09.470 03:59:28 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:27:09.470 03:59:28 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:27:09.470 03:59:28 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:27:09.470 03:59:28 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:27:09.470 03:59:28 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:27:09.470 03:59:28 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:27:09.470 03:59:28 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:27:09.470 03:59:28 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:09.470 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:09.470 03:59:28 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:27:09.470 03:59:28 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:27:09.470 03:59:28 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:09.470 03:59:28 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:09.470 03:59:28 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:27:09.470 03:59:28 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:27:09.470 03:59:28 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:09.470 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:09.470 03:59:28 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:27:09.470 03:59:28 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:27:09.470 03:59:28 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:09.470 03:59:28 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:09.470 03:59:28 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:27:09.470 03:59:28 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:27:09.470 03:59:28 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:27:09.470 03:59:28 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:27:09.470 03:59:28 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:27:09.470 03:59:28 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:09.470 03:59:28 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:27:09.470 03:59:28 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:09.470 03:59:28 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:09.470 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:09.470 03:59:28 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:27:09.470 03:59:28 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:27:09.470 03:59:28 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:09.470 03:59:28 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:27:09.470 03:59:28 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:09.470 03:59:28 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:09.470 Found net devices under 0000:0a:00.1: cvl_0_1 
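For context, the NIC discovery traced above maps each whitelisted PCI function to its kernel net device by globbing sysfs and stripping the path prefix. The lines below are a minimal sketch of that lookup, not the harness itself; it assumes the same two E810 functions (0000:0a:00.0 and 0000:0a:00.1) reported in this run, and only uses the sysfs layout visible in the trace.

    #!/usr/bin/env bash
    # Sketch: resolve the kernel net devices behind the PCI functions found above.
    net_devs=()
    for pci in 0000:0a:00.0 0000:0a:00.1; do
        # skip functions that have no net device bound (e.g. still on a userspace driver)
        [[ -d /sys/bus/pci/devices/$pci/net ]] || continue
        # each interface registered for this function shows up as a directory entry here
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
        # strip the sysfs path, keeping only the interface names (cvl_0_0, cvl_0_1 in this run)
        pci_net_devs=("${pci_net_devs[@]##*/}")
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
        net_devs+=("${pci_net_devs[@]}")
    done

The interface names collected this way are what the trace then splits into a target-side and an initiator-side port for the namespaced TCP setup that follows.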
00:27:09.470 03:59:28 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:27:09.470 03:59:28 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:27:09.470 03:59:28 -- nvmf/common.sh@402 -- # is_hw=yes 00:27:09.470 03:59:28 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:27:09.470 03:59:28 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:27:09.470 03:59:28 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:27:09.470 03:59:28 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:09.470 03:59:28 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:09.470 03:59:28 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:09.470 03:59:28 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:27:09.470 03:59:28 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:09.470 03:59:28 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:09.470 03:59:28 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:27:09.470 03:59:28 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:09.470 03:59:28 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:09.470 03:59:28 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:27:09.470 03:59:28 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:27:09.470 03:59:28 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:27:09.470 03:59:28 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:09.470 03:59:28 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:09.470 03:59:28 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:09.470 03:59:28 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:27:09.470 03:59:28 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:09.470 03:59:28 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:09.470 03:59:28 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:09.729 03:59:28 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:27:09.729 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:09.729 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.193 ms 00:27:09.729 00:27:09.729 --- 10.0.0.2 ping statistics --- 00:27:09.729 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:09.729 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:27:09.729 03:59:28 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:09.729 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:09.729 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.137 ms 00:27:09.729 00:27:09.729 --- 10.0.0.1 ping statistics --- 00:27:09.729 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:09.729 rtt min/avg/max/mdev = 0.137/0.137/0.137/0.000 ms 00:27:09.729 03:59:28 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:09.729 03:59:28 -- nvmf/common.sh@410 -- # return 0 00:27:09.729 03:59:28 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:27:09.729 03:59:28 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:09.729 03:59:28 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:27:09.729 03:59:28 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:27:09.729 03:59:28 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:09.729 03:59:28 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:27:09.729 03:59:28 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:27:09.729 03:59:28 -- host/fio.sh@16 -- # [[ y != y ]] 00:27:09.729 03:59:28 -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:27:09.729 03:59:28 -- common/autotest_common.sh@712 -- # xtrace_disable 00:27:09.729 03:59:28 -- common/autotest_common.sh@10 -- # set +x 00:27:09.729 03:59:28 -- host/fio.sh@24 -- # nvmfpid=2480663 00:27:09.729 03:59:28 -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:27:09.729 03:59:28 -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:09.729 03:59:28 -- host/fio.sh@28 -- # waitforlisten 2480663 00:27:09.729 03:59:28 -- common/autotest_common.sh@819 -- # '[' -z 2480663 ']' 00:27:09.729 03:59:28 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:09.729 03:59:28 -- common/autotest_common.sh@824 -- # local max_retries=100 00:27:09.729 03:59:28 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:09.729 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:09.729 03:59:28 -- common/autotest_common.sh@828 -- # xtrace_disable 00:27:09.729 03:59:28 -- common/autotest_common.sh@10 -- # set +x 00:27:09.729 [2024-07-14 03:59:28.496282] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:27:09.729 [2024-07-14 03:59:28.496351] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:09.729 EAL: No free 2048 kB hugepages reported on node 1 00:27:09.729 [2024-07-14 03:59:28.559948] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:09.729 [2024-07-14 03:59:28.643330] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:27:09.729 [2024-07-14 03:59:28.643482] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:09.729 [2024-07-14 03:59:28.643499] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:09.729 [2024-07-14 03:59:28.643511] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:27:09.729 [2024-07-14 03:59:28.643566] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:09.729 [2024-07-14 03:59:28.643636] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:09.729 [2024-07-14 03:59:28.643691] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:09.729 [2024-07-14 03:59:28.643693] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:10.663 03:59:29 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:27:10.663 03:59:29 -- common/autotest_common.sh@852 -- # return 0 00:27:10.664 03:59:29 -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:27:10.921 [2024-07-14 03:59:29.631017] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:10.921 03:59:29 -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:27:10.921 03:59:29 -- common/autotest_common.sh@718 -- # xtrace_disable 00:27:10.921 03:59:29 -- common/autotest_common.sh@10 -- # set +x 00:27:10.921 03:59:29 -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:27:11.179 Malloc1 00:27:11.179 03:59:29 -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:11.436 03:59:30 -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:27:11.694 03:59:30 -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:11.951 [2024-07-14 03:59:30.673327] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:11.951 03:59:30 -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:12.209 03:59:30 -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:27:12.209 03:59:30 -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:27:12.209 03:59:30 -- common/autotest_common.sh@1339 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:27:12.209 03:59:30 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:27:12.209 03:59:30 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:12.209 03:59:30 -- common/autotest_common.sh@1318 -- # local sanitizers 00:27:12.209 03:59:30 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:27:12.209 03:59:30 -- common/autotest_common.sh@1320 -- # shift 00:27:12.209 03:59:30 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:27:12.209 03:59:30 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:27:12.209 03:59:30 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:27:12.209 03:59:30 -- common/autotest_common.sh@1324 -- # grep 
libasan 00:27:12.209 03:59:30 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:27:12.209 03:59:30 -- common/autotest_common.sh@1324 -- # asan_lib= 00:27:12.209 03:59:30 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:27:12.209 03:59:30 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:27:12.209 03:59:30 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:27:12.209 03:59:30 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:27:12.209 03:59:30 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:27:12.209 03:59:30 -- common/autotest_common.sh@1324 -- # asan_lib= 00:27:12.209 03:59:30 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:27:12.209 03:59:30 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:27:12.209 03:59:30 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:27:12.209 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:27:12.209 fio-3.35 00:27:12.209 Starting 1 thread 00:27:12.466 EAL: No free 2048 kB hugepages reported on node 1 00:27:14.994 00:27:14.994 test: (groupid=0, jobs=1): err= 0: pid=2481094: Sun Jul 14 03:59:33 2024 00:27:14.994 read: IOPS=9576, BW=37.4MiB/s (39.2MB/s)(75.0MiB/2006msec) 00:27:14.994 slat (nsec): min=1893, max=159913, avg=2503.66, stdev=1904.43 00:27:14.994 clat (usec): min=3531, max=12415, avg=7397.73, stdev=546.26 00:27:14.994 lat (usec): min=3554, max=12417, avg=7400.23, stdev=546.16 00:27:14.994 clat percentiles (usec): 00:27:14.994 | 1.00th=[ 6194], 5.00th=[ 6521], 10.00th=[ 6718], 20.00th=[ 6980], 00:27:14.994 | 30.00th=[ 7111], 40.00th=[ 7308], 50.00th=[ 7373], 60.00th=[ 7504], 00:27:14.994 | 70.00th=[ 7635], 80.00th=[ 7832], 90.00th=[ 8029], 95.00th=[ 8225], 00:27:14.994 | 99.00th=[ 8586], 99.50th=[ 8848], 99.90th=[10945], 99.95th=[11600], 00:27:14.994 | 99.99th=[12387] 00:27:14.994 bw ( KiB/s): min=37000, max=39160, per=99.92%, avg=38274.00, stdev=920.60, samples=4 00:27:14.994 iops : min= 9250, max= 9790, avg=9568.50, stdev=230.15, samples=4 00:27:14.994 write: IOPS=9583, BW=37.4MiB/s (39.3MB/s)(75.1MiB/2006msec); 0 zone resets 00:27:14.994 slat (usec): min=2, max=143, avg= 2.60, stdev= 1.58 00:27:14.994 clat (usec): min=1345, max=11481, avg=5933.36, stdev=487.22 00:27:14.994 lat (usec): min=1354, max=11484, avg=5935.96, stdev=487.17 00:27:14.994 clat percentiles (usec): 00:27:14.994 | 1.00th=[ 4817], 5.00th=[ 5211], 10.00th=[ 5342], 20.00th=[ 5538], 00:27:14.994 | 30.00th=[ 5735], 40.00th=[ 5800], 50.00th=[ 5932], 60.00th=[ 6063], 00:27:14.994 | 70.00th=[ 6194], 80.00th=[ 6325], 90.00th=[ 6521], 95.00th=[ 6652], 00:27:14.994 | 99.00th=[ 6980], 99.50th=[ 7111], 99.90th=[ 9634], 99.95th=[10814], 00:27:14.994 | 99.99th=[11338] 00:27:14.994 bw ( KiB/s): min=37864, max=38656, per=100.00%, avg=38340.00, stdev=338.18, samples=4 00:27:14.994 iops : min= 9466, max= 9664, avg=9585.00, stdev=84.55, samples=4 00:27:14.994 lat (msec) : 2=0.01%, 4=0.11%, 10=99.77%, 20=0.12% 00:27:14.994 cpu : usr=54.01%, sys=37.61%, ctx=64, majf=0, minf=5 00:27:14.994 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:27:14.994 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:14.994 complete : 
0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:27:14.994 issued rwts: total=19210,19224,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:14.994 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:14.994 00:27:14.994 Run status group 0 (all jobs): 00:27:14.994 READ: bw=37.4MiB/s (39.2MB/s), 37.4MiB/s-37.4MiB/s (39.2MB/s-39.2MB/s), io=75.0MiB (78.7MB), run=2006-2006msec 00:27:14.994 WRITE: bw=37.4MiB/s (39.3MB/s), 37.4MiB/s-37.4MiB/s (39.3MB/s-39.3MB/s), io=75.1MiB (78.7MB), run=2006-2006msec 00:27:14.994 03:59:33 -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:27:14.994 03:59:33 -- common/autotest_common.sh@1339 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:27:14.994 03:59:33 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:27:14.994 03:59:33 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:14.994 03:59:33 -- common/autotest_common.sh@1318 -- # local sanitizers 00:27:14.994 03:59:33 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:27:14.994 03:59:33 -- common/autotest_common.sh@1320 -- # shift 00:27:14.994 03:59:33 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:27:14.994 03:59:33 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:27:14.994 03:59:33 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:27:14.994 03:59:33 -- common/autotest_common.sh@1324 -- # grep libasan 00:27:14.994 03:59:33 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:27:14.994 03:59:33 -- common/autotest_common.sh@1324 -- # asan_lib= 00:27:14.994 03:59:33 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:27:14.994 03:59:33 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:27:14.994 03:59:33 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:27:14.994 03:59:33 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:27:14.994 03:59:33 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:27:14.994 03:59:33 -- common/autotest_common.sh@1324 -- # asan_lib= 00:27:14.994 03:59:33 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:27:14.994 03:59:33 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:27:14.994 03:59:33 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:27:14.994 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:27:14.994 fio-3.35 00:27:14.994 Starting 1 thread 00:27:14.994 EAL: No free 2048 kB hugepages reported on node 1 00:27:17.556 00:27:17.556 test: (groupid=0, jobs=1): err= 0: pid=2481506: Sun Jul 14 03:59:36 2024 00:27:17.556 read: IOPS=8121, BW=127MiB/s (133MB/s)(255MiB/2006msec) 00:27:17.556 slat (usec): min=2, max=103, avg= 3.75, stdev= 2.02 00:27:17.556 clat (usec): min=2571, max=18823, 
avg=9448.49, stdev=2548.75 00:27:17.556 lat (usec): min=2575, max=18826, avg=9452.24, stdev=2548.86 00:27:17.556 clat percentiles (usec): 00:27:17.556 | 1.00th=[ 4817], 5.00th=[ 5669], 10.00th=[ 6325], 20.00th=[ 7242], 00:27:17.556 | 30.00th=[ 7898], 40.00th=[ 8586], 50.00th=[ 9241], 60.00th=[10028], 00:27:17.556 | 70.00th=[10683], 80.00th=[11469], 90.00th=[12518], 95.00th=[13960], 00:27:17.556 | 99.00th=[16581], 99.50th=[17433], 99.90th=[18220], 99.95th=[18744], 00:27:17.556 | 99.99th=[18744] 00:27:17.557 bw ( KiB/s): min=58944, max=73312, per=51.26%, avg=66616.00, stdev=7179.91, samples=4 00:27:17.557 iops : min= 3684, max= 4582, avg=4163.50, stdev=448.74, samples=4 00:27:17.557 write: IOPS=4870, BW=76.1MiB/s (79.8MB/s)(136MiB/1788msec); 0 zone resets 00:27:17.557 slat (usec): min=30, max=191, avg=34.60, stdev= 6.36 00:27:17.557 clat (usec): min=4174, max=18208, avg=10753.24, stdev=1817.23 00:27:17.557 lat (usec): min=4221, max=18245, avg=10787.84, stdev=1817.59 00:27:17.557 clat percentiles (usec): 00:27:17.557 | 1.00th=[ 6915], 5.00th=[ 7963], 10.00th=[ 8586], 20.00th=[ 9241], 00:27:17.557 | 30.00th=[ 9765], 40.00th=[10159], 50.00th=[10552], 60.00th=[11076], 00:27:17.557 | 70.00th=[11600], 80.00th=[12256], 90.00th=[13173], 95.00th=[14091], 00:27:17.557 | 99.00th=[15270], 99.50th=[15795], 99.90th=[16581], 99.95th=[16909], 00:27:17.557 | 99.99th=[18220] 00:27:17.557 bw ( KiB/s): min=61216, max=75904, per=88.86%, avg=69248.00, stdev=7702.23, samples=4 00:27:17.557 iops : min= 3826, max= 4744, avg=4328.00, stdev=481.39, samples=4 00:27:17.557 lat (msec) : 4=0.09%, 10=51.28%, 20=48.63% 00:27:17.557 cpu : usr=73.97%, sys=22.14%, ctx=19, majf=0, minf=1 00:27:17.557 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:27:17.557 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:17.557 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:27:17.557 issued rwts: total=16292,8709,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:17.557 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:17.557 00:27:17.557 Run status group 0 (all jobs): 00:27:17.557 READ: bw=127MiB/s (133MB/s), 127MiB/s-127MiB/s (133MB/s-133MB/s), io=255MiB (267MB), run=2006-2006msec 00:27:17.557 WRITE: bw=76.1MiB/s (79.8MB/s), 76.1MiB/s-76.1MiB/s (79.8MB/s-79.8MB/s), io=136MiB (143MB), run=1788-1788msec 00:27:17.557 03:59:36 -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:17.557 03:59:36 -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:27:17.557 03:59:36 -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:27:17.557 03:59:36 -- host/fio.sh@51 -- # get_nvme_bdfs 00:27:17.557 03:59:36 -- common/autotest_common.sh@1498 -- # bdfs=() 00:27:17.557 03:59:36 -- common/autotest_common.sh@1498 -- # local bdfs 00:27:17.557 03:59:36 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:27:17.557 03:59:36 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:27:17.557 03:59:36 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:27:17.557 03:59:36 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:27:17.557 03:59:36 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:88:00.0 00:27:17.557 03:59:36 -- host/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 
0000:88:00.0 -i 10.0.0.2 00:27:20.848 Nvme0n1 00:27:20.848 03:59:39 -- host/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:27:24.134 03:59:42 -- host/fio.sh@53 -- # ls_guid=d9400e7d-b5d1-40e4-bd7b-410f2b775466 00:27:24.134 03:59:42 -- host/fio.sh@54 -- # get_lvs_free_mb d9400e7d-b5d1-40e4-bd7b-410f2b775466 00:27:24.134 03:59:42 -- common/autotest_common.sh@1343 -- # local lvs_uuid=d9400e7d-b5d1-40e4-bd7b-410f2b775466 00:27:24.134 03:59:42 -- common/autotest_common.sh@1344 -- # local lvs_info 00:27:24.134 03:59:42 -- common/autotest_common.sh@1345 -- # local fc 00:27:24.134 03:59:42 -- common/autotest_common.sh@1346 -- # local cs 00:27:24.134 03:59:42 -- common/autotest_common.sh@1347 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:27:24.134 03:59:42 -- common/autotest_common.sh@1347 -- # lvs_info='[ 00:27:24.134 { 00:27:24.134 "uuid": "d9400e7d-b5d1-40e4-bd7b-410f2b775466", 00:27:24.134 "name": "lvs_0", 00:27:24.134 "base_bdev": "Nvme0n1", 00:27:24.134 "total_data_clusters": 930, 00:27:24.134 "free_clusters": 930, 00:27:24.134 "block_size": 512, 00:27:24.134 "cluster_size": 1073741824 00:27:24.134 } 00:27:24.134 ]' 00:27:24.134 03:59:42 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="d9400e7d-b5d1-40e4-bd7b-410f2b775466") .free_clusters' 00:27:24.134 03:59:42 -- common/autotest_common.sh@1348 -- # fc=930 00:27:24.134 03:59:42 -- common/autotest_common.sh@1349 -- # jq '.[] | select(.uuid=="d9400e7d-b5d1-40e4-bd7b-410f2b775466") .cluster_size' 00:27:24.134 03:59:42 -- common/autotest_common.sh@1349 -- # cs=1073741824 00:27:24.134 03:59:42 -- common/autotest_common.sh@1352 -- # free_mb=952320 00:27:24.134 03:59:42 -- common/autotest_common.sh@1353 -- # echo 952320 00:27:24.134 952320 00:27:24.134 03:59:42 -- host/fio.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 952320 00:27:24.409 8a37c1e9-4dca-4379-ab28-fa5322d7486c 00:27:24.409 03:59:43 -- host/fio.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:27:24.671 03:59:43 -- host/fio.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:27:24.671 03:59:43 -- host/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:27:24.927 03:59:43 -- host/fio.sh@59 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:27:24.927 03:59:43 -- common/autotest_common.sh@1339 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:27:24.927 03:59:43 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:27:24.927 03:59:43 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:24.927 03:59:43 -- common/autotest_common.sh@1318 -- # local sanitizers 00:27:24.927 03:59:43 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:27:24.927 
03:59:43 -- common/autotest_common.sh@1320 -- # shift 00:27:24.927 03:59:43 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:27:24.927 03:59:43 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:27:24.927 03:59:43 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:27:24.927 03:59:43 -- common/autotest_common.sh@1324 -- # grep libasan 00:27:24.927 03:59:43 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:27:24.927 03:59:43 -- common/autotest_common.sh@1324 -- # asan_lib= 00:27:24.927 03:59:43 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:27:24.927 03:59:43 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:27:24.927 03:59:43 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:27:24.927 03:59:43 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:27:24.927 03:59:43 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:27:24.927 03:59:43 -- common/autotest_common.sh@1324 -- # asan_lib= 00:27:24.927 03:59:43 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:27:24.927 03:59:43 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:27:24.927 03:59:43 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:27:25.184 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:27:25.184 fio-3.35 00:27:25.184 Starting 1 thread 00:27:25.184 EAL: No free 2048 kB hugepages reported on node 1 00:27:27.708 00:27:27.708 test: (groupid=0, jobs=1): err= 0: pid=2482824: Sun Jul 14 03:59:46 2024 00:27:27.708 read: IOPS=6477, BW=25.3MiB/s (26.5MB/s)(50.8MiB/2008msec) 00:27:27.708 slat (nsec): min=1895, max=144868, avg=2424.97, stdev=1926.27 00:27:27.708 clat (usec): min=1074, max=171259, avg=10891.45, stdev=11272.88 00:27:27.708 lat (usec): min=1077, max=171319, avg=10893.87, stdev=11273.15 00:27:27.708 clat percentiles (msec): 00:27:27.708 | 1.00th=[ 9], 5.00th=[ 9], 10.00th=[ 10], 20.00th=[ 10], 00:27:27.708 | 30.00th=[ 10], 40.00th=[ 10], 50.00th=[ 11], 60.00th=[ 11], 00:27:27.708 | 70.00th=[ 11], 80.00th=[ 11], 90.00th=[ 12], 95.00th=[ 12], 00:27:27.708 | 99.00th=[ 13], 99.50th=[ 157], 99.90th=[ 171], 99.95th=[ 171], 00:27:27.708 | 99.99th=[ 171] 00:27:27.708 bw ( KiB/s): min=18360, max=28680, per=99.85%, avg=25870.00, stdev=5013.71, samples=4 00:27:27.708 iops : min= 4590, max= 7170, avg=6467.50, stdev=1253.43, samples=4 00:27:27.708 write: IOPS=6484, BW=25.3MiB/s (26.6MB/s)(50.9MiB/2008msec); 0 zone resets 00:27:27.708 slat (nsec): min=2002, max=97610, avg=2518.66, stdev=1278.30 00:27:27.708 clat (usec): min=279, max=169649, avg=8729.90, stdev=10575.81 00:27:27.708 lat (usec): min=281, max=169656, avg=8732.42, stdev=10576.03 00:27:27.708 clat percentiles (msec): 00:27:27.708 | 1.00th=[ 7], 5.00th=[ 7], 10.00th=[ 8], 20.00th=[ 8], 00:27:27.708 | 30.00th=[ 8], 40.00th=[ 8], 50.00th=[ 9], 60.00th=[ 9], 00:27:27.708 | 70.00th=[ 9], 80.00th=[ 9], 90.00th=[ 9], 95.00th=[ 10], 00:27:27.708 | 99.00th=[ 10], 99.50th=[ 14], 99.90th=[ 169], 99.95th=[ 169], 00:27:27.708 | 99.99th=[ 169] 00:27:27.709 bw ( KiB/s): min=19368, max=28288, per=99.97%, avg=25930.00, stdev=4376.64, samples=4 00:27:27.709 iops : min= 
4842, max= 7072, avg=6482.50, stdev=1094.16, samples=4 00:27:27.709 lat (usec) : 500=0.01%, 750=0.01% 00:27:27.709 lat (msec) : 2=0.03%, 4=0.17%, 10=72.01%, 20=27.29%, 250=0.49% 00:27:27.709 cpu : usr=55.95%, sys=38.12%, ctx=77, majf=0, minf=23 00:27:27.709 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:27:27.709 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:27.709 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:27:27.709 issued rwts: total=13006,13021,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:27.709 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:27.709 00:27:27.709 Run status group 0 (all jobs): 00:27:27.709 READ: bw=25.3MiB/s (26.5MB/s), 25.3MiB/s-25.3MiB/s (26.5MB/s-26.5MB/s), io=50.8MiB (53.3MB), run=2008-2008msec 00:27:27.709 WRITE: bw=25.3MiB/s (26.6MB/s), 25.3MiB/s-25.3MiB/s (26.6MB/s-26.6MB/s), io=50.9MiB (53.3MB), run=2008-2008msec 00:27:27.709 03:59:46 -- host/fio.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:27:27.967 03:59:46 -- host/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:27:28.901 03:59:47 -- host/fio.sh@64 -- # ls_nested_guid=95cc5ca5-b42f-4c3b-918c-444297099a5d 00:27:28.901 03:59:47 -- host/fio.sh@65 -- # get_lvs_free_mb 95cc5ca5-b42f-4c3b-918c-444297099a5d 00:27:28.901 03:59:47 -- common/autotest_common.sh@1343 -- # local lvs_uuid=95cc5ca5-b42f-4c3b-918c-444297099a5d 00:27:28.901 03:59:47 -- common/autotest_common.sh@1344 -- # local lvs_info 00:27:28.901 03:59:47 -- common/autotest_common.sh@1345 -- # local fc 00:27:28.901 03:59:47 -- common/autotest_common.sh@1346 -- # local cs 00:27:28.901 03:59:47 -- common/autotest_common.sh@1347 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:27:29.158 03:59:47 -- common/autotest_common.sh@1347 -- # lvs_info='[ 00:27:29.158 { 00:27:29.158 "uuid": "d9400e7d-b5d1-40e4-bd7b-410f2b775466", 00:27:29.158 "name": "lvs_0", 00:27:29.158 "base_bdev": "Nvme0n1", 00:27:29.158 "total_data_clusters": 930, 00:27:29.158 "free_clusters": 0, 00:27:29.158 "block_size": 512, 00:27:29.158 "cluster_size": 1073741824 00:27:29.158 }, 00:27:29.158 { 00:27:29.158 "uuid": "95cc5ca5-b42f-4c3b-918c-444297099a5d", 00:27:29.158 "name": "lvs_n_0", 00:27:29.158 "base_bdev": "8a37c1e9-4dca-4379-ab28-fa5322d7486c", 00:27:29.158 "total_data_clusters": 237847, 00:27:29.158 "free_clusters": 237847, 00:27:29.158 "block_size": 512, 00:27:29.158 "cluster_size": 4194304 00:27:29.158 } 00:27:29.158 ]' 00:27:29.158 03:59:47 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="95cc5ca5-b42f-4c3b-918c-444297099a5d") .free_clusters' 00:27:29.158 03:59:47 -- common/autotest_common.sh@1348 -- # fc=237847 00:27:29.158 03:59:47 -- common/autotest_common.sh@1349 -- # jq '.[] | select(.uuid=="95cc5ca5-b42f-4c3b-918c-444297099a5d") .cluster_size' 00:27:29.158 03:59:48 -- common/autotest_common.sh@1349 -- # cs=4194304 00:27:29.158 03:59:48 -- common/autotest_common.sh@1352 -- # free_mb=951388 00:27:29.158 03:59:48 -- common/autotest_common.sh@1353 -- # echo 951388 00:27:29.158 951388 00:27:29.158 03:59:48 -- host/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 951388 00:27:30.094 c771bfd6-0c4d-40ef-a32f-4c767531e7a1 00:27:30.094 03:59:48 -- host/fio.sh@67 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:27:30.094 03:59:48 -- host/fio.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:27:30.351 03:59:49 -- host/fio.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:27:30.609 03:59:49 -- host/fio.sh@70 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:27:30.609 03:59:49 -- common/autotest_common.sh@1339 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:27:30.609 03:59:49 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:27:30.609 03:59:49 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:30.609 03:59:49 -- common/autotest_common.sh@1318 -- # local sanitizers 00:27:30.609 03:59:49 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:27:30.609 03:59:49 -- common/autotest_common.sh@1320 -- # shift 00:27:30.609 03:59:49 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:27:30.609 03:59:49 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:27:30.609 03:59:49 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:27:30.609 03:59:49 -- common/autotest_common.sh@1324 -- # grep libasan 00:27:30.609 03:59:49 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:27:30.609 03:59:49 -- common/autotest_common.sh@1324 -- # asan_lib= 00:27:30.609 03:59:49 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:27:30.609 03:59:49 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:27:30.609 03:59:49 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:27:30.609 03:59:49 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:27:30.609 03:59:49 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:27:30.609 03:59:49 -- common/autotest_common.sh@1324 -- # asan_lib= 00:27:30.609 03:59:49 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:27:30.609 03:59:49 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:27:30.609 03:59:49 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:27:30.866 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:27:30.866 fio-3.35 00:27:30.866 Starting 1 thread 00:27:30.866 EAL: No free 2048 kB hugepages reported on node 1 00:27:33.393 00:27:33.393 test: (groupid=0, jobs=1): err= 0: pid=2483572: Sun Jul 14 03:59:52 2024 00:27:33.393 read: IOPS=6027, BW=23.5MiB/s (24.7MB/s)(47.3MiB/2008msec) 00:27:33.393 slat (nsec): min=1952, max=129745, avg=2445.23, stdev=1832.72 00:27:33.393 clat (usec): min=4405, 
max=18384, avg=11790.99, stdev=1184.42 00:27:33.393 lat (usec): min=4410, max=18387, avg=11793.44, stdev=1184.34 00:27:33.393 clat percentiles (usec): 00:27:33.393 | 1.00th=[ 9503], 5.00th=[10159], 10.00th=[10552], 20.00th=[10945], 00:27:33.393 | 30.00th=[11207], 40.00th=[11469], 50.00th=[11731], 60.00th=[11994], 00:27:33.393 | 70.00th=[12256], 80.00th=[12518], 90.00th=[13042], 95.00th=[13566], 00:27:33.393 | 99.00th=[16450], 99.50th=[16909], 99.90th=[17957], 99.95th=[18220], 00:27:33.393 | 99.99th=[18482] 00:27:33.393 bw ( KiB/s): min=22104, max=24888, per=99.83%, avg=24068.00, stdev=1316.50, samples=4 00:27:33.393 iops : min= 5526, max= 6222, avg=6017.00, stdev=329.13, samples=4 00:27:33.393 write: IOPS=6010, BW=23.5MiB/s (24.6MB/s)(47.1MiB/2008msec); 0 zone resets 00:27:33.393 slat (usec): min=2, max=110, avg= 2.53, stdev= 1.51 00:27:33.393 clat (usec): min=2163, max=16762, avg=9364.31, stdev=1057.52 00:27:33.393 lat (usec): min=2169, max=16764, avg=9366.84, stdev=1057.47 00:27:33.393 clat percentiles (usec): 00:27:33.393 | 1.00th=[ 7242], 5.00th=[ 7963], 10.00th=[ 8225], 20.00th=[ 8586], 00:27:33.393 | 30.00th=[ 8848], 40.00th=[ 9110], 50.00th=[ 9241], 60.00th=[ 9503], 00:27:33.393 | 70.00th=[ 9765], 80.00th=[10028], 90.00th=[10421], 95.00th=[10945], 00:27:33.393 | 99.00th=[13304], 99.50th=[13960], 99.90th=[15139], 99.95th=[15270], 00:27:33.393 | 99.99th=[15533] 00:27:33.393 bw ( KiB/s): min=23128, max=24384, per=99.92%, avg=24022.00, stdev=598.29, samples=4 00:27:33.393 iops : min= 5782, max= 6096, avg=6005.50, stdev=149.57, samples=4 00:27:33.393 lat (msec) : 4=0.05%, 10=41.60%, 20=58.35% 00:27:33.393 cpu : usr=53.51%, sys=41.06%, ctx=94, majf=0, minf=23 00:27:33.393 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:27:33.393 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:33.393 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:27:33.393 issued rwts: total=12103,12069,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:33.393 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:33.393 00:27:33.393 Run status group 0 (all jobs): 00:27:33.393 READ: bw=23.5MiB/s (24.7MB/s), 23.5MiB/s-23.5MiB/s (24.7MB/s-24.7MB/s), io=47.3MiB (49.6MB), run=2008-2008msec 00:27:33.393 WRITE: bw=23.5MiB/s (24.6MB/s), 23.5MiB/s-23.5MiB/s (24.6MB/s-24.6MB/s), io=47.1MiB (49.4MB), run=2008-2008msec 00:27:33.393 03:59:52 -- host/fio.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:27:33.393 03:59:52 -- host/fio.sh@74 -- # sync 00:27:33.393 03:59:52 -- host/fio.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_n_0/lbd_nest_0 00:27:37.607 03:59:56 -- host/fio.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:27:37.607 03:59:56 -- host/fio.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:27:40.895 03:59:59 -- host/fio.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:27:40.895 03:59:59 -- host/fio.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:27:42.795 04:00:01 -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:27:42.795 04:00:01 -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:27:42.795 04:00:01 -- host/fio.sh@86 -- # nvmftestfini 00:27:42.795 
04:00:01 -- nvmf/common.sh@476 -- # nvmfcleanup 00:27:42.795 04:00:01 -- nvmf/common.sh@116 -- # sync 00:27:42.795 04:00:01 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:27:42.795 04:00:01 -- nvmf/common.sh@119 -- # set +e 00:27:42.795 04:00:01 -- nvmf/common.sh@120 -- # for i in {1..20} 00:27:42.795 04:00:01 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:27:42.795 rmmod nvme_tcp 00:27:42.795 rmmod nvme_fabrics 00:27:42.795 rmmod nvme_keyring 00:27:42.795 04:00:01 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:27:42.795 04:00:01 -- nvmf/common.sh@123 -- # set -e 00:27:42.795 04:00:01 -- nvmf/common.sh@124 -- # return 0 00:27:42.795 04:00:01 -- nvmf/common.sh@477 -- # '[' -n 2480663 ']' 00:27:42.795 04:00:01 -- nvmf/common.sh@478 -- # killprocess 2480663 00:27:42.795 04:00:01 -- common/autotest_common.sh@926 -- # '[' -z 2480663 ']' 00:27:42.795 04:00:01 -- common/autotest_common.sh@930 -- # kill -0 2480663 00:27:42.795 04:00:01 -- common/autotest_common.sh@931 -- # uname 00:27:42.795 04:00:01 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:27:42.795 04:00:01 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2480663 00:27:42.795 04:00:01 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:27:42.795 04:00:01 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:27:42.795 04:00:01 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2480663' 00:27:42.795 killing process with pid 2480663 00:27:42.795 04:00:01 -- common/autotest_common.sh@945 -- # kill 2480663 00:27:42.795 04:00:01 -- common/autotest_common.sh@950 -- # wait 2480663 00:27:42.795 04:00:01 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:27:42.795 04:00:01 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:27:42.795 04:00:01 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:27:42.795 04:00:01 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:42.795 04:00:01 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:27:42.795 04:00:01 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:42.795 04:00:01 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:42.795 04:00:01 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:45.333 04:00:03 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:27:45.333 00:27:45.334 real 0m37.553s 00:27:45.334 user 2m23.858s 00:27:45.334 sys 0m7.035s 00:27:45.334 04:00:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:45.334 04:00:03 -- common/autotest_common.sh@10 -- # set +x 00:27:45.334 ************************************ 00:27:45.334 END TEST nvmf_fio_host 00:27:45.334 ************************************ 00:27:45.334 04:00:03 -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:27:45.334 04:00:03 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:27:45.334 04:00:03 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:45.334 04:00:03 -- common/autotest_common.sh@10 -- # set +x 00:27:45.334 ************************************ 00:27:45.334 START TEST nvmf_failover 00:27:45.334 ************************************ 00:27:45.334 04:00:03 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:27:45.334 * Looking for test storage... 
00:27:45.334 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:45.334 04:00:03 -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:45.334 04:00:03 -- nvmf/common.sh@7 -- # uname -s 00:27:45.334 04:00:03 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:45.334 04:00:03 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:45.334 04:00:03 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:45.334 04:00:03 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:45.334 04:00:03 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:45.334 04:00:03 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:45.334 04:00:03 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:45.334 04:00:03 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:45.334 04:00:03 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:45.334 04:00:03 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:45.334 04:00:03 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:45.334 04:00:03 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:45.334 04:00:03 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:45.334 04:00:03 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:45.334 04:00:03 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:45.334 04:00:03 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:45.334 04:00:03 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:45.334 04:00:03 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:45.334 04:00:03 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:45.334 04:00:03 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:45.334 04:00:03 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:45.334 04:00:03 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:45.334 04:00:03 -- paths/export.sh@5 -- # export PATH 00:27:45.334 04:00:03 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:45.334 04:00:03 -- nvmf/common.sh@46 -- # : 0 00:27:45.334 04:00:03 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:27:45.334 04:00:03 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:27:45.334 04:00:03 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:27:45.334 04:00:03 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:45.334 04:00:03 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:45.334 04:00:03 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:27:45.334 04:00:03 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:27:45.334 04:00:03 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:27:45.334 04:00:03 -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:45.334 04:00:03 -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:45.334 04:00:03 -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:27:45.334 04:00:03 -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:27:45.334 04:00:03 -- host/failover.sh@18 -- # nvmftestinit 00:27:45.334 04:00:03 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:27:45.334 04:00:03 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:45.334 04:00:03 -- nvmf/common.sh@436 -- # prepare_net_devs 00:27:45.334 04:00:03 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:27:45.334 04:00:03 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:27:45.334 04:00:03 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:45.334 04:00:03 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:45.334 04:00:03 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:45.334 04:00:03 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:27:45.334 04:00:03 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:27:45.334 04:00:03 -- nvmf/common.sh@284 -- # xtrace_disable 00:27:45.334 04:00:03 -- common/autotest_common.sh@10 -- # set +x 00:27:47.230 04:00:05 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:27:47.230 04:00:05 -- nvmf/common.sh@290 -- # pci_devs=() 00:27:47.230 04:00:05 -- nvmf/common.sh@290 -- # local -a pci_devs 00:27:47.230 04:00:05 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:27:47.230 04:00:05 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:27:47.230 04:00:05 -- nvmf/common.sh@292 -- # pci_drivers=() 00:27:47.230 04:00:05 -- 
nvmf/common.sh@292 -- # local -A pci_drivers 00:27:47.230 04:00:05 -- nvmf/common.sh@294 -- # net_devs=() 00:27:47.230 04:00:05 -- nvmf/common.sh@294 -- # local -ga net_devs 00:27:47.230 04:00:05 -- nvmf/common.sh@295 -- # e810=() 00:27:47.230 04:00:05 -- nvmf/common.sh@295 -- # local -ga e810 00:27:47.230 04:00:05 -- nvmf/common.sh@296 -- # x722=() 00:27:47.230 04:00:05 -- nvmf/common.sh@296 -- # local -ga x722 00:27:47.230 04:00:05 -- nvmf/common.sh@297 -- # mlx=() 00:27:47.230 04:00:05 -- nvmf/common.sh@297 -- # local -ga mlx 00:27:47.230 04:00:05 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:47.230 04:00:05 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:47.230 04:00:05 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:47.230 04:00:05 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:47.230 04:00:05 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:47.230 04:00:05 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:47.230 04:00:05 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:47.230 04:00:05 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:47.230 04:00:05 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:47.230 04:00:05 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:47.230 04:00:05 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:47.230 04:00:05 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:27:47.230 04:00:05 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:27:47.230 04:00:05 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:27:47.230 04:00:05 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:27:47.230 04:00:05 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:27:47.230 04:00:05 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:27:47.230 04:00:05 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:27:47.230 04:00:05 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:47.230 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:47.230 04:00:05 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:27:47.230 04:00:05 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:27:47.230 04:00:05 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:47.230 04:00:05 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:47.230 04:00:05 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:27:47.230 04:00:05 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:27:47.230 04:00:05 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:47.230 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:47.230 04:00:05 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:27:47.230 04:00:05 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:27:47.230 04:00:05 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:47.230 04:00:05 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:47.230 04:00:05 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:27:47.230 04:00:05 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:27:47.230 04:00:05 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:27:47.230 04:00:05 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:27:47.230 04:00:05 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:27:47.230 04:00:05 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:47.230 04:00:05 -- nvmf/common.sh@383 -- # (( 1 
== 0 )) 00:27:47.230 04:00:05 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:47.230 04:00:05 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:47.230 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:47.230 04:00:05 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:27:47.230 04:00:05 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:27:47.230 04:00:05 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:47.230 04:00:05 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:27:47.230 04:00:05 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:47.230 04:00:05 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:47.230 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:47.230 04:00:05 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:27:47.230 04:00:05 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:27:47.230 04:00:05 -- nvmf/common.sh@402 -- # is_hw=yes 00:27:47.230 04:00:05 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:27:47.230 04:00:05 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:27:47.230 04:00:05 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:27:47.230 04:00:05 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:47.230 04:00:05 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:47.230 04:00:05 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:47.230 04:00:05 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:27:47.230 04:00:05 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:47.230 04:00:05 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:47.230 04:00:05 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:27:47.230 04:00:05 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:47.230 04:00:05 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:47.230 04:00:05 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:27:47.230 04:00:05 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:27:47.230 04:00:05 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:27:47.230 04:00:05 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:47.230 04:00:05 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:47.230 04:00:05 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:47.230 04:00:05 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:27:47.230 04:00:05 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:47.230 04:00:05 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:47.230 04:00:05 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:47.230 04:00:05 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:27:47.230 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:47.230 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.309 ms 00:27:47.230 00:27:47.230 --- 10.0.0.2 ping statistics --- 00:27:47.230 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:47.230 rtt min/avg/max/mdev = 0.309/0.309/0.309/0.000 ms 00:27:47.230 04:00:05 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:47.230 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:47.230 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.197 ms 00:27:47.230 00:27:47.230 --- 10.0.0.1 ping statistics --- 00:27:47.230 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:47.230 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:27:47.230 04:00:05 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:47.230 04:00:05 -- nvmf/common.sh@410 -- # return 0 00:27:47.230 04:00:05 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:27:47.230 04:00:05 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:47.230 04:00:05 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:27:47.230 04:00:05 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:27:47.230 04:00:05 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:47.230 04:00:05 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:27:47.230 04:00:05 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:27:47.230 04:00:05 -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:27:47.230 04:00:05 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:27:47.230 04:00:05 -- common/autotest_common.sh@712 -- # xtrace_disable 00:27:47.230 04:00:05 -- common/autotest_common.sh@10 -- # set +x 00:27:47.230 04:00:05 -- nvmf/common.sh@469 -- # nvmfpid=2486983 00:27:47.230 04:00:05 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:27:47.230 04:00:05 -- nvmf/common.sh@470 -- # waitforlisten 2486983 00:27:47.230 04:00:05 -- common/autotest_common.sh@819 -- # '[' -z 2486983 ']' 00:27:47.230 04:00:05 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:47.230 04:00:05 -- common/autotest_common.sh@824 -- # local max_retries=100 00:27:47.230 04:00:05 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:47.230 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:47.230 04:00:05 -- common/autotest_common.sh@828 -- # xtrace_disable 00:27:47.230 04:00:05 -- common/autotest_common.sh@10 -- # set +x 00:27:47.230 [2024-07-14 04:00:05.993367] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:27:47.230 [2024-07-14 04:00:05.993443] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:47.230 EAL: No free 2048 kB hugepages reported on node 1 00:27:47.230 [2024-07-14 04:00:06.061771] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:47.230 [2024-07-14 04:00:06.148701] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:27:47.230 [2024-07-14 04:00:06.148876] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:47.230 [2024-07-14 04:00:06.148896] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:47.230 [2024-07-14 04:00:06.148920] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
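For reference, the namespace-based TCP test setup traced above by nvmf/common.sh boils down to roughly the following shell steps. This is a sketch, not the exact script: the interface names cvl_0_0/cvl_0_1 and the 10.0.0.x addresses are simply what this particular run detected on the E810 ports.

  # move the target-side port into a private namespace and address both ends
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # let NVMe/TCP traffic in
  ping -c 1 10.0.0.2                                                  # sanity-check both directions
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
  modprobe nvme-tcp                                                   # host-side NVMe/TCP driver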
00:27:47.230 [2024-07-14 04:00:06.149021] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:47.230 [2024-07-14 04:00:06.149063] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:47.230 [2024-07-14 04:00:06.149066] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:48.160 04:00:06 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:27:48.160 04:00:06 -- common/autotest_common.sh@852 -- # return 0 00:27:48.160 04:00:06 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:27:48.160 04:00:06 -- common/autotest_common.sh@718 -- # xtrace_disable 00:27:48.160 04:00:06 -- common/autotest_common.sh@10 -- # set +x 00:27:48.160 04:00:06 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:48.160 04:00:06 -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:27:48.416 [2024-07-14 04:00:07.231872] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:48.416 04:00:07 -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:27:48.674 Malloc0 00:27:48.674 04:00:07 -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:48.931 04:00:07 -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:49.193 04:00:08 -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:49.454 [2024-07-14 04:00:08.234607] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:49.454 04:00:08 -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:27:49.712 [2024-07-14 04:00:08.479352] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:27:49.712 04:00:08 -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:27:49.970 [2024-07-14 04:00:08.740241] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:27:49.970 04:00:08 -- host/failover.sh@31 -- # bdevperf_pid=2487898 00:27:49.970 04:00:08 -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:27:49.970 04:00:08 -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:49.970 04:00:08 -- host/failover.sh@34 -- # waitforlisten 2487898 /var/tmp/bdevperf.sock 00:27:49.970 04:00:08 -- common/autotest_common.sh@819 -- # '[' -z 2487898 ']' 00:27:49.970 04:00:08 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:49.970 04:00:08 -- common/autotest_common.sh@824 -- # local max_retries=100 00:27:49.970 04:00:08 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/bdevperf.sock...' 00:27:49.970 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:27:49.970 04:00:08 -- common/autotest_common.sh@828 -- # xtrace_disable 00:27:49.970 04:00:08 -- common/autotest_common.sh@10 -- # set +x 00:27:50.914 04:00:09 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:27:50.914 04:00:09 -- common/autotest_common.sh@852 -- # return 0 00:27:50.914 04:00:09 -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:51.482 NVMe0n1 00:27:51.482 04:00:10 -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:51.741 00:27:51.741 04:00:10 -- host/failover.sh@39 -- # run_test_pid=2488065 00:27:51.741 04:00:10 -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:27:51.741 04:00:10 -- host/failover.sh@41 -- # sleep 1 00:27:52.672 04:00:11 -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:52.929 [2024-07-14 04:00:11.700186] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da2540 is same with the state(5) to be set 00:27:52.929 [2024-07-14 04:00:11.700319] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da2540 is same with the state(5) to be set 00:27:52.929 [2024-07-14 04:00:11.700337] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da2540 is same with the state(5) to be set 00:27:52.929 [2024-07-14 04:00:11.700350] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da2540 is same with the state(5) to be set 00:27:52.929 [2024-07-14 04:00:11.700362] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da2540 is same with the state(5) to be set 00:27:52.929 [2024-07-14 04:00:11.700374] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da2540 is same with the state(5) to be set 00:27:52.929 [2024-07-14 04:00:11.700386] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da2540 is same with the state(5) to be set 00:27:52.929 [2024-07-14 04:00:11.700398] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da2540 is same with the state(5) to be set 00:27:52.929 [2024-07-14 04:00:11.700410] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da2540 is same with the state(5) to be set 00:27:52.929 [2024-07-14 04:00:11.700421] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da2540 is same with the state(5) to be set 00:27:52.929 [2024-07-14 04:00:11.700433] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da2540 is same with the state(5) to be set 00:27:52.929 [2024-07-14 04:00:11.700445] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da2540 is same with the state(5) to be set 00:27:52.930 [2024-07-14 04:00:11.700457] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da2540 is same with the 
state(5) to be set 00:27:52.930 [2024-07-14 04:00:11.700468] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da2540 is same with the state(5) to be set 00:27:52.930 [2024-07-14 04:00:11.700480] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da2540 is same with the state(5) to be set 00:27:52.930 [2024-07-14 04:00:11.700492] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da2540 is same with the state(5) to be set 00:27:52.930 [2024-07-14 04:00:11.700504] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da2540 is same with the state(5) to be set 00:27:52.930 [2024-07-14 04:00:11.700516] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da2540 is same with the state(5) to be set 00:27:52.930 [2024-07-14 04:00:11.700528] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da2540 is same with the state(5) to be set 00:27:52.930 [2024-07-14 04:00:11.700540] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da2540 is same with the state(5) to be set 00:27:52.930 [2024-07-14 04:00:11.700551] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da2540 is same with the state(5) to be set 00:27:52.930 [2024-07-14 04:00:11.700563] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da2540 is same with the state(5) to be set 00:27:52.930 [2024-07-14 04:00:11.700575] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da2540 is same with the state(5) to be set 00:27:52.930 [2024-07-14 04:00:11.700587] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da2540 is same with the state(5) to be set 00:27:52.930 [2024-07-14 04:00:11.700599] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da2540 is same with the state(5) to be set 00:27:52.930 [2024-07-14 04:00:11.700610] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da2540 is same with the state(5) to be set 00:27:52.930 [2024-07-14 04:00:11.700622] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da2540 is same with the state(5) to be set 00:27:52.930 [2024-07-14 04:00:11.700634] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da2540 is same with the state(5) to be set 00:27:52.930 [2024-07-14 04:00:11.700646] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da2540 is same with the state(5) to be set 00:27:52.930 [2024-07-14 04:00:11.700662] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da2540 is same with the state(5) to be set 00:27:52.930 [2024-07-14 04:00:11.700674] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da2540 is same with the state(5) to be set 00:27:52.930 [2024-07-14 04:00:11.700687] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da2540 is same with the state(5) to be set 00:27:52.930 [2024-07-14 04:00:11.700698] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da2540 is same with the state(5) to be set 00:27:52.930 [2024-07-14 04:00:11.700711] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da2540 is same with the state(5) to be set 00:27:52.930 [2024-07-14 04:00:11.700723] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1da2540 is same with the state(5) to be set 00:27:52.930 [2024-07-14 04:00:11.700735] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da2540 is same with the state(5) to be set 00:27:52.930 [2024-07-14 04:00:11.700747] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da2540 is same with the state(5) to be set 00:27:52.930 [2024-07-14 04:00:11.700759] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da2540 is same with the state(5) to be set 00:27:52.930 [2024-07-14 04:00:11.700771] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da2540 is same with the state(5) to be set 00:27:52.930 [2024-07-14 04:00:11.700783] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da2540 is same with the state(5) to be set 00:27:52.930 [2024-07-14 04:00:11.700795] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da2540 is same with the state(5) to be set 00:27:52.930 [2024-07-14 04:00:11.700807] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da2540 is same with the state(5) to be set 00:27:52.930 [2024-07-14 04:00:11.700819] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da2540 is same with the state(5) to be set 00:27:52.930 [2024-07-14 04:00:11.700830] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da2540 is same with the state(5) to be set 00:27:52.930 [2024-07-14 04:00:11.700842] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da2540 is same with the state(5) to be set 00:27:52.930 [2024-07-14 04:00:11.700854] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da2540 is same with the state(5) to be set 00:27:52.930 [2024-07-14 04:00:11.700874] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da2540 is same with the state(5) to be set 00:27:52.930 [2024-07-14 04:00:11.700889] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da2540 is same with the state(5) to be set 00:27:52.930 [2024-07-14 04:00:11.700901] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da2540 is same with the state(5) to be set 00:27:52.930 [2024-07-14 04:00:11.700913] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da2540 is same with the state(5) to be set 00:27:52.930 [2024-07-14 04:00:11.700925] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da2540 is same with the state(5) to be set 00:27:52.930 04:00:11 -- host/failover.sh@45 -- # sleep 3 00:27:56.269 04:00:14 -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:56.269 00:27:56.269 04:00:15 -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:27:56.529 [2024-07-14 04:00:15.430008] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da33d0 is same with the state(5) to be set 00:27:56.529 [2024-07-14 04:00:15.430085] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da33d0 is same with the state(5) to be set 
00:27:56.529 [2024-07-14 04:00:15.430110] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da33d0 is same with the state(5) to be set 00:27:56.529 [2024-07-14 04:00:15.430124] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da33d0 is same with the state(5) to be set 00:27:56.529 [2024-07-14 04:00:15.430136] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da33d0 is same with the state(5) to be set 00:27:56.529 [2024-07-14 04:00:15.430157] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da33d0 is same with the state(5) to be set 00:27:56.529 [2024-07-14 04:00:15.430168] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da33d0 is same with the state(5) to be set 00:27:56.529 [2024-07-14 04:00:15.430180] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da33d0 is same with the state(5) to be set 00:27:56.529 [2024-07-14 04:00:15.430192] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da33d0 is same with the state(5) to be set 00:27:56.529 [2024-07-14 04:00:15.430220] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da33d0 is same with the state(5) to be set 00:27:56.529 [2024-07-14 04:00:15.430232] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da33d0 is same with the state(5) to be set 00:27:56.529 [2024-07-14 04:00:15.430243] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da33d0 is same with the state(5) to be set 00:27:56.529 [2024-07-14 04:00:15.430254] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da33d0 is same with the state(5) to be set 00:27:56.529 [2024-07-14 04:00:15.430266] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da33d0 is same with the state(5) to be set 00:27:56.529 [2024-07-14 04:00:15.430277] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da33d0 is same with the state(5) to be set 00:27:56.529 [2024-07-14 04:00:15.430288] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da33d0 is same with the state(5) to be set 00:27:56.529 [2024-07-14 04:00:15.430300] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da33d0 is same with the state(5) to be set 00:27:56.529 [2024-07-14 04:00:15.430311] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da33d0 is same with the state(5) to be set 00:27:56.529 [2024-07-14 04:00:15.430322] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da33d0 is same with the state(5) to be set 00:27:56.529 [2024-07-14 04:00:15.430333] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da33d0 is same with the state(5) to be set 00:27:56.529 [2024-07-14 04:00:15.430345] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da33d0 is same with the state(5) to be set 00:27:56.529 [2024-07-14 04:00:15.430356] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da33d0 is same with the state(5) to be set 00:27:56.529 [2024-07-14 04:00:15.430367] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da33d0 is same with the state(5) to be set 00:27:56.529 [2024-07-14 04:00:15.430378] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x1da33d0 is same with the state(5) to be set 00:27:56.529 [2024-07-14 04:00:15.430390] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da33d0 is same with the state(5) to be set 00:27:56.529 [2024-07-14 04:00:15.430401] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da33d0 is same with the state(5) to be set 00:27:56.529 [2024-07-14 04:00:15.430412] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da33d0 is same with the state(5) to be set 00:27:56.529 [2024-07-14 04:00:15.430423] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da33d0 is same with the state(5) to be set 00:27:56.529 [2024-07-14 04:00:15.430435] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da33d0 is same with the state(5) to be set 00:27:56.529 [2024-07-14 04:00:15.430449] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da33d0 is same with the state(5) to be set 00:27:56.529 [2024-07-14 04:00:15.430461] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da33d0 is same with the state(5) to be set 00:27:56.529 [2024-07-14 04:00:15.430472] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da33d0 is same with the state(5) to be set 00:27:56.529 [2024-07-14 04:00:15.430484] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da33d0 is same with the state(5) to be set 00:27:56.529 [2024-07-14 04:00:15.430496] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da33d0 is same with the state(5) to be set 00:27:56.529 [2024-07-14 04:00:15.430508] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da33d0 is same with the state(5) to be set 00:27:56.529 [2024-07-14 04:00:15.430520] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da33d0 is same with the state(5) to be set 00:27:56.529 [2024-07-14 04:00:15.430531] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da33d0 is same with the state(5) to be set 00:27:56.529 [2024-07-14 04:00:15.430543] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da33d0 is same with the state(5) to be set 00:27:56.529 [2024-07-14 04:00:15.430554] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da33d0 is same with the state(5) to be set 00:27:56.529 [2024-07-14 04:00:15.430567] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da33d0 is same with the state(5) to be set 00:27:56.529 [2024-07-14 04:00:15.430578] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da33d0 is same with the state(5) to be set 00:27:56.529 [2024-07-14 04:00:15.430590] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da33d0 is same with the state(5) to be set 00:27:56.529 [2024-07-14 04:00:15.430601] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da33d0 is same with the state(5) to be set 00:27:56.529 [2024-07-14 04:00:15.430613] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da33d0 is same with the state(5) to be set 00:27:56.529 [2024-07-14 04:00:15.430624] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da33d0 is same with the state(5) to be set 00:27:56.529 [2024-07-14 04:00:15.430636] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da33d0 is same with the state(5) to be set 00:27:56.529 [2024-07-14 04:00:15.430647] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da33d0 is same with the state(5) to be set 00:27:56.530 [2024-07-14 04:00:15.430658] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da33d0 is same with the state(5) to be set 00:27:56.530 [2024-07-14 04:00:15.430670] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da33d0 is same with the state(5) to be set 00:27:56.530 04:00:15 -- host/failover.sh@50 -- # sleep 3 00:27:59.819 04:00:18 -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:59.819 [2024-07-14 04:00:18.674145] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:59.819 04:00:18 -- host/failover.sh@55 -- # sleep 1 00:28:00.757 04:00:19 -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:28:01.015 [2024-07-14 04:00:19.948789] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da3f40 is same with the state(5) to be set 00:28:01.015 [2024-07-14 04:00:19.948887] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da3f40 is same with the state(5) to be set 00:28:01.015 [2024-07-14 04:00:19.948919] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da3f40 is same with the state(5) to be set 00:28:01.015 [2024-07-14 04:00:19.948942] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da3f40 is same with the state(5) to be set 00:28:01.015 [2024-07-14 04:00:19.948954] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da3f40 is same with the state(5) to be set 00:28:01.015 [2024-07-14 04:00:19.948966] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da3f40 is same with the state(5) to be set 00:28:01.015 [2024-07-14 04:00:19.948977] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da3f40 is same with the state(5) to be set 00:28:01.015 [2024-07-14 04:00:19.948989] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da3f40 is same with the state(5) to be set 00:28:01.015 [2024-07-14 04:00:19.949000] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da3f40 is same with the state(5) to be set 00:28:01.015 [2024-07-14 04:00:19.949011] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da3f40 is same with the state(5) to be set 00:28:01.015 [2024-07-14 04:00:19.949022] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da3f40 is same with the state(5) to be set 00:28:01.015 [2024-07-14 04:00:19.949033] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da3f40 is same with the state(5) to be set 00:28:01.015 [2024-07-14 04:00:19.949045] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da3f40 is same with the state(5) to be set 00:28:01.015 [2024-07-14 04:00:19.949056] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da3f40 is same with the state(5) to be 
set 00:28:01.015 [2024-07-14 04:00:19.949067] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da3f40 is same with the state(5) to be set 00:28:01.015 [2024-07-14 04:00:19.949078] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da3f40 is same with the state(5) to be set 00:28:01.015 [2024-07-14 04:00:19.949090] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da3f40 is same with the state(5) to be set 00:28:01.015 [2024-07-14 04:00:19.949101] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da3f40 is same with the state(5) to be set 00:28:01.015 [2024-07-14 04:00:19.949113] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da3f40 is same with the state(5) to be set 00:28:01.015 [2024-07-14 04:00:19.949124] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da3f40 is same with the state(5) to be set 00:28:01.015 [2024-07-14 04:00:19.949136] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da3f40 is same with the state(5) to be set 00:28:01.015 [2024-07-14 04:00:19.949148] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da3f40 is same with the state(5) to be set 00:28:01.016 [2024-07-14 04:00:19.949166] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da3f40 is same with the state(5) to be set 00:28:01.016 [2024-07-14 04:00:19.949177] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da3f40 is same with the state(5) to be set 00:28:01.016 [2024-07-14 04:00:19.949189] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da3f40 is same with the state(5) to be set 00:28:01.016 [2024-07-14 04:00:19.949200] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da3f40 is same with the state(5) to be set 00:28:01.016 [2024-07-14 04:00:19.949211] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da3f40 is same with the state(5) to be set 00:28:01.016 [2024-07-14 04:00:19.949224] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da3f40 is same with the state(5) to be set 00:28:01.016 [2024-07-14 04:00:19.949235] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da3f40 is same with the state(5) to be set 00:28:01.016 [2024-07-14 04:00:19.949247] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da3f40 is same with the state(5) to be set 00:28:01.016 [2024-07-14 04:00:19.949262] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da3f40 is same with the state(5) to be set 00:28:01.016 [2024-07-14 04:00:19.949275] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da3f40 is same with the state(5) to be set 00:28:01.016 [2024-07-14 04:00:19.949287] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da3f40 is same with the state(5) to be set 00:28:01.016 [2024-07-14 04:00:19.949299] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da3f40 is same with the state(5) to be set 00:28:01.016 [2024-07-14 04:00:19.949312] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da3f40 is same with the state(5) to be set 00:28:01.016 [2024-07-14 04:00:19.949324] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x1da3f40 is same with the state(5) to be set 00:28:01.016 [2024-07-14 04:00:19.949336] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da3f40 is same with the state(5) to be set 00:28:01.016 [2024-07-14 04:00:19.949348] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da3f40 is same with the state(5) to be set 00:28:01.016 [2024-07-14 04:00:19.949359] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da3f40 is same with the state(5) to be set 00:28:01.016 [2024-07-14 04:00:19.949371] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da3f40 is same with the state(5) to be set 00:28:01.016 [2024-07-14 04:00:19.949384] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da3f40 is same with the state(5) to be set 00:28:01.016 [2024-07-14 04:00:19.949395] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da3f40 is same with the state(5) to be set 00:28:01.016 [2024-07-14 04:00:19.949406] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da3f40 is same with the state(5) to be set 00:28:01.016 [2024-07-14 04:00:19.949418] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da3f40 is same with the state(5) to be set 00:28:01.016 [2024-07-14 04:00:19.949429] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da3f40 is same with the state(5) to be set 00:28:01.016 [2024-07-14 04:00:19.949441] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da3f40 is same with the state(5) to be set 00:28:01.016 [2024-07-14 04:00:19.949452] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da3f40 is same with the state(5) to be set 00:28:01.016 [2024-07-14 04:00:19.949463] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da3f40 is same with the state(5) to be set 00:28:01.016 [2024-07-14 04:00:19.949474] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da3f40 is same with the state(5) to be set 00:28:01.016 [2024-07-14 04:00:19.949486] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da3f40 is same with the state(5) to be set 00:28:01.016 [2024-07-14 04:00:19.949498] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da3f40 is same with the state(5) to be set 00:28:01.016 [2024-07-14 04:00:19.949509] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da3f40 is same with the state(5) to be set 00:28:01.016 [2024-07-14 04:00:19.949520] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da3f40 is same with the state(5) to be set 00:28:01.016 [2024-07-14 04:00:19.949531] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da3f40 is same with the state(5) to be set 00:28:01.016 [2024-07-14 04:00:19.949543] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da3f40 is same with the state(5) to be set 00:28:01.016 [2024-07-14 04:00:19.949554] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da3f40 is same with the state(5) to be set 00:28:01.016 [2024-07-14 04:00:19.949566] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da3f40 is same with the state(5) to be set 00:28:01.016 [2024-07-14 04:00:19.949580] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da3f40 is same with the state(5) to be set 00:28:01.273 04:00:19 -- host/failover.sh@59 -- # wait 2488065 00:28:07.843 0 00:28:07.843 04:00:25 -- host/failover.sh@61 -- # killprocess 2487898 00:28:07.843 04:00:25 -- common/autotest_common.sh@926 -- # '[' -z 2487898 ']' 00:28:07.843 04:00:25 -- common/autotest_common.sh@930 -- # kill -0 2487898 00:28:07.843 04:00:25 -- common/autotest_common.sh@931 -- # uname 00:28:07.843 04:00:25 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:28:07.843 04:00:25 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2487898 00:28:07.843 04:00:25 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:28:07.843 04:00:25 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:28:07.843 04:00:25 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2487898' 00:28:07.843 killing process with pid 2487898 00:28:07.843 04:00:25 -- common/autotest_common.sh@945 -- # kill 2487898 00:28:07.843 04:00:25 -- common/autotest_common.sh@950 -- # wait 2487898 00:28:07.843 04:00:25 -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:28:07.843 [2024-07-14 04:00:08.801500] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:28:07.843 [2024-07-14 04:00:08.801583] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2487898 ] 00:28:07.843 EAL: No free 2048 kB hugepages reported on node 1 00:28:07.843 [2024-07-14 04:00:08.863922] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:07.843 [2024-07-14 04:00:08.948557] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:07.843 Running I/O for 15 seconds... 
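The try.txt replay that follows is the bdevperf side of the failover exercise. Summarizing the flow host/failover.sh drives in this run (commands as they appear in the trace above; the RPC= shorthand below is mine, the log always uses the full scripts/rpc.py path):

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $RPC nvmf_create_transport -t tcp -o -u 8192
  $RPC bdev_malloc_create 64 512 -b Malloc0
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  for port in 4420 4421 4422; do                                      # three listeners = three paths
      $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s $port
  done
  # bdevperf runs with -z and its own RPC socket; two paths are attached up front
  $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  # perform_tests runs verify I/O for 15s while listeners are removed/re-added to force failover
  $RPC nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420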
00:28:07.843 [2024-07-14 04:00:11.701304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:118160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.843 [2024-07-14 04:00:11.701350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.843 [2024-07-14 04:00:11.701381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:118176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.843 [2024-07-14 04:00:11.701398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.843 [2024-07-14 04:00:11.701416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:118208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.843 [2024-07-14 04:00:11.701431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.843 [2024-07-14 04:00:11.701447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:118224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.844 [2024-07-14 04:00:11.701461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.844 [2024-07-14 04:00:11.701477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:118232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.844 [2024-07-14 04:00:11.701492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.844 [2024-07-14 04:00:11.701508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:118280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.844 [2024-07-14 04:00:11.701522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.844 [2024-07-14 04:00:11.701537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:117528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.844 [2024-07-14 04:00:11.701552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.844 [2024-07-14 04:00:11.701568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:117576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.844 [2024-07-14 04:00:11.701582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.844 [2024-07-14 04:00:11.701613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:117592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.844 [2024-07-14 04:00:11.701627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.844 [2024-07-14 04:00:11.701642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:117616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.844 [2024-07-14 04:00:11.701656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.844 [2024-07-14 
04:00:11.701671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:117632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.844 [2024-07-14 04:00:11.701685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.844 [2024-07-14 04:00:11.701707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:117648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.844 [2024-07-14 04:00:11.701721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.844 [2024-07-14 04:00:11.701736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:117664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.844 [2024-07-14 04:00:11.701750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.844 [2024-07-14 04:00:11.701765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:117688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.844 [2024-07-14 04:00:11.701779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.844 [2024-07-14 04:00:11.701794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:118296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.844 [2024-07-14 04:00:11.701807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.844 [2024-07-14 04:00:11.701822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:118320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.844 [2024-07-14 04:00:11.701836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.844 [2024-07-14 04:00:11.701850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:118336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.844 [2024-07-14 04:00:11.701872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.844 [2024-07-14 04:00:11.701906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:118352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.844 [2024-07-14 04:00:11.701920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.844 [2024-07-14 04:00:11.701936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:117712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.844 [2024-07-14 04:00:11.701950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.844 [2024-07-14 04:00:11.701965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:117736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.844 [2024-07-14 04:00:11.701980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.844 [2024-07-14 04:00:11.701995] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:117760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.844 [2024-07-14 04:00:11.702009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.844 [2024-07-14 04:00:11.702025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:117768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.844 [2024-07-14 04:00:11.702039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.844 [2024-07-14 04:00:11.702054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:117776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.844 [2024-07-14 04:00:11.702068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.844 [2024-07-14 04:00:11.702083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:117816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.844 [2024-07-14 04:00:11.702101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.844 [2024-07-14 04:00:11.702118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:117824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.844 [2024-07-14 04:00:11.702132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.844 [2024-07-14 04:00:11.702147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:117832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.844 [2024-07-14 04:00:11.702161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.844 [2024-07-14 04:00:11.702177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:118360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.844 [2024-07-14 04:00:11.702191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.844 [2024-07-14 04:00:11.702207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:118368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.844 [2024-07-14 04:00:11.702221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.844 [2024-07-14 04:00:11.702236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:118376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.844 [2024-07-14 04:00:11.702250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.844 [2024-07-14 04:00:11.702265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:118384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.844 [2024-07-14 04:00:11.702279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.844 [2024-07-14 04:00:11.702295] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:122 nsid:1 lba:118392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.844 [2024-07-14 04:00:11.702309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.844 [2024-07-14 04:00:11.702324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:118400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.844 [2024-07-14 04:00:11.702339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.844 [2024-07-14 04:00:11.702354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:118408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.844 [2024-07-14 04:00:11.702369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.844 [2024-07-14 04:00:11.702385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:118416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.844 [2024-07-14 04:00:11.702400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.844 [2024-07-14 04:00:11.702415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:118424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.844 [2024-07-14 04:00:11.702430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.844 [2024-07-14 04:00:11.702445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:118432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.844 [2024-07-14 04:00:11.702459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.844 [2024-07-14 04:00:11.702478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:118440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.844 [2024-07-14 04:00:11.702493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.844 [2024-07-14 04:00:11.702508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:118448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.844 [2024-07-14 04:00:11.702523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.844 [2024-07-14 04:00:11.702538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:118456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.844 [2024-07-14 04:00:11.702552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.844 [2024-07-14 04:00:11.702567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:118464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.844 [2024-07-14 04:00:11.702581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.844 [2024-07-14 04:00:11.702597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 
lba:118472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.844 [2024-07-14 04:00:11.702612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.844 [2024-07-14 04:00:11.702627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:117848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.844 [2024-07-14 04:00:11.702641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.844 [2024-07-14 04:00:11.702656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:117856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.844 [2024-07-14 04:00:11.702671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.844 [2024-07-14 04:00:11.702686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:117864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.844 [2024-07-14 04:00:11.702700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.844 [2024-07-14 04:00:11.702716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:117872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.845 [2024-07-14 04:00:11.702730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.845 [2024-07-14 04:00:11.702745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:117880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.845 [2024-07-14 04:00:11.702760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.845 [2024-07-14 04:00:11.702776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:117888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.845 [2024-07-14 04:00:11.702790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.845 [2024-07-14 04:00:11.702805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:117896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.845 [2024-07-14 04:00:11.702819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.845 [2024-07-14 04:00:11.702835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:117904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.845 [2024-07-14 04:00:11.702853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.845 [2024-07-14 04:00:11.702876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:117912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.845 [2024-07-14 04:00:11.702892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.845 [2024-07-14 04:00:11.702908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:117920 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:28:07.845 [2024-07-14 04:00:11.702922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.845 [2024-07-14 04:00:11.702938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:117936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.845 [2024-07-14 04:00:11.702952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.845 [2024-07-14 04:00:11.702967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:117944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.845 [2024-07-14 04:00:11.702981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.845 [2024-07-14 04:00:11.702997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:117960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.845 [2024-07-14 04:00:11.703011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.845 [2024-07-14 04:00:11.703027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:117968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.845 [2024-07-14 04:00:11.703041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.845 [2024-07-14 04:00:11.703056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:117976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.845 [2024-07-14 04:00:11.703070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.845 [2024-07-14 04:00:11.703086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:117984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.845 [2024-07-14 04:00:11.703100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.845 [2024-07-14 04:00:11.703116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:118480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.845 [2024-07-14 04:00:11.703130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.845 [2024-07-14 04:00:11.703145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:118488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.845 [2024-07-14 04:00:11.703159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.845 [2024-07-14 04:00:11.703175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:118496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.845 [2024-07-14 04:00:11.703189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.845 [2024-07-14 04:00:11.703204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:118504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:28:07.845 [2024-07-14 04:00:11.703218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.845 [2024-07-14 04:00:11.703238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:118512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.845 [2024-07-14 04:00:11.703253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.845 [2024-07-14 04:00:11.703268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:118520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.845 [2024-07-14 04:00:11.703283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.845 [2024-07-14 04:00:11.703298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:118528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.845 [2024-07-14 04:00:11.703313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.845 [2024-07-14 04:00:11.703328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:118536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.845 [2024-07-14 04:00:11.703343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.845 [2024-07-14 04:00:11.703358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:118544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.845 [2024-07-14 04:00:11.703372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.845 [2024-07-14 04:00:11.703388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:118552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.845 [2024-07-14 04:00:11.703402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.845 [2024-07-14 04:00:11.703417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:118560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.845 [2024-07-14 04:00:11.703431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.845 [2024-07-14 04:00:11.703447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:118568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.845 [2024-07-14 04:00:11.703461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.845 [2024-07-14 04:00:11.703476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:118576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.845 [2024-07-14 04:00:11.703490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.845 [2024-07-14 04:00:11.703505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:118584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.845 [2024-07-14 
04:00:11.703520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.845 [2024-07-14 04:00:11.703535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:118592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.845 [2024-07-14 04:00:11.703549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.845 [2024-07-14 04:00:11.703565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:118600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.845 [2024-07-14 04:00:11.703579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.845 [2024-07-14 04:00:11.703594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:118608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.845 [2024-07-14 04:00:11.703611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.845 [2024-07-14 04:00:11.703628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:118016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.845 [2024-07-14 04:00:11.703642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.845 [2024-07-14 04:00:11.703657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:118032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.845 [2024-07-14 04:00:11.703671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.845 [2024-07-14 04:00:11.703686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:118040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.845 [2024-07-14 04:00:11.703701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.845 [2024-07-14 04:00:11.703717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:118072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.845 [2024-07-14 04:00:11.703731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.845 [2024-07-14 04:00:11.703747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:118088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.845 [2024-07-14 04:00:11.703760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.845 [2024-07-14 04:00:11.703776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:118128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.845 [2024-07-14 04:00:11.703790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.845 [2024-07-14 04:00:11.703805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:118136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.845 [2024-07-14 04:00:11.703819] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.845 [2024-07-14 04:00:11.703834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:118144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.845 [2024-07-14 04:00:11.703847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.845 [2024-07-14 04:00:11.703862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:118616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.845 [2024-07-14 04:00:11.703884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.845 [2024-07-14 04:00:11.703900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:118624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.845 [2024-07-14 04:00:11.703914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.845 [2024-07-14 04:00:11.703929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:118632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.845 [2024-07-14 04:00:11.703943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.845 [2024-07-14 04:00:11.703958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:118640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.846 [2024-07-14 04:00:11.703973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.846 [2024-07-14 04:00:11.703988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:118648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.846 [2024-07-14 04:00:11.704006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.846 [2024-07-14 04:00:11.704022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:118656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.846 [2024-07-14 04:00:11.704037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.846 [2024-07-14 04:00:11.704052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:118664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.846 [2024-07-14 04:00:11.704066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.846 [2024-07-14 04:00:11.704081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:118672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.846 [2024-07-14 04:00:11.704095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.846 [2024-07-14 04:00:11.704110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:118680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.846 [2024-07-14 04:00:11.704124] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.846 [2024-07-14 04:00:11.704139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:118688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.846 [2024-07-14 04:00:11.704153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.846 [2024-07-14 04:00:11.704168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:118696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.846 [2024-07-14 04:00:11.704182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.846 [2024-07-14 04:00:11.704197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:118704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.846 [2024-07-14 04:00:11.704211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.846 [2024-07-14 04:00:11.704226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:118712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.846 [2024-07-14 04:00:11.704240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.846 [2024-07-14 04:00:11.704256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:118152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.846 [2024-07-14 04:00:11.704270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.846 [2024-07-14 04:00:11.704285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:118168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.846 [2024-07-14 04:00:11.704299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.846 [2024-07-14 04:00:11.704315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:118184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.846 [2024-07-14 04:00:11.704330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.846 [2024-07-14 04:00:11.704345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:118192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.846 [2024-07-14 04:00:11.704360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.846 [2024-07-14 04:00:11.704379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:118200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.846 [2024-07-14 04:00:11.704394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.846 [2024-07-14 04:00:11.704409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:118216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.846 [2024-07-14 04:00:11.704423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.846 [2024-07-14 04:00:11.704439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:118240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.846 [2024-07-14 04:00:11.704453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.846 [2024-07-14 04:00:11.704468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:118248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.846 [2024-07-14 04:00:11.704482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.846 [2024-07-14 04:00:11.704497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:118720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.846 [2024-07-14 04:00:11.704512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.846 [2024-07-14 04:00:11.704527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:118728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.846 [2024-07-14 04:00:11.704541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.846 [2024-07-14 04:00:11.704556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:118736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.846 [2024-07-14 04:00:11.704571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.846 [2024-07-14 04:00:11.704586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:118744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.846 [2024-07-14 04:00:11.704600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.846 [2024-07-14 04:00:11.704615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:118752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.846 [2024-07-14 04:00:11.704629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.846 [2024-07-14 04:00:11.704645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:118760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.846 [2024-07-14 04:00:11.704659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.846 [2024-07-14 04:00:11.704674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:118768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.846 [2024-07-14 04:00:11.704688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.846 [2024-07-14 04:00:11.704704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:118776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.846 [2024-07-14 04:00:11.704718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:28:07.846 [2024-07-14 04:00:11.704733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:118784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.846 [2024-07-14 04:00:11.704750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.846 [2024-07-14 04:00:11.704766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:118792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.846 [2024-07-14 04:00:11.704781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.846 [2024-07-14 04:00:11.704796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:118800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.846 [2024-07-14 04:00:11.704811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.846 [2024-07-14 04:00:11.704826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:118808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.846 [2024-07-14 04:00:11.704840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.846 [2024-07-14 04:00:11.704855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:118816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.846 [2024-07-14 04:00:11.704876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.846 [2024-07-14 04:00:11.704892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:118824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.846 [2024-07-14 04:00:11.704907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.846 [2024-07-14 04:00:11.704923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:118832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.846 [2024-07-14 04:00:11.704937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.846 [2024-07-14 04:00:11.704952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:118840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.846 [2024-07-14 04:00:11.704966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.846 [2024-07-14 04:00:11.704981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:118848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.846 [2024-07-14 04:00:11.704996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.846 [2024-07-14 04:00:11.705011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:118256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.846 [2024-07-14 04:00:11.705025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.846 [2024-07-14 
04:00:11.705041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:118264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.846 [2024-07-14 04:00:11.705055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.846 [2024-07-14 04:00:11.705070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:118272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.846 [2024-07-14 04:00:11.705084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.846 [2024-07-14 04:00:11.705100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:118288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.846 [2024-07-14 04:00:11.705114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.846 [2024-07-14 04:00:11.705132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:118304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.846 [2024-07-14 04:00:11.705147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.847 [2024-07-14 04:00:11.705162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:118312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.847 [2024-07-14 04:00:11.705176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.847 [2024-07-14 04:00:11.705191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:118328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.847 [2024-07-14 04:00:11.705206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.847 [2024-07-14 04:00:11.705220] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138e320 is same with the state(5) to be set 00:28:07.847 [2024-07-14 04:00:11.705239] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:07.847 [2024-07-14 04:00:11.705250] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:07.847 [2024-07-14 04:00:11.705262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:118344 len:8 PRP1 0x0 PRP2 0x0 00:28:07.847 [2024-07-14 04:00:11.705275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.847 [2024-07-14 04:00:11.705344] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x138e320 was disconnected and freed. reset controller. 
00:28:07.847 [2024-07-14 04:00:11.705373] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 
00:28:07.847 [2024-07-14 04:00:11.705410] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 
00:28:07.847 [2024-07-14 04:00:11.705429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:28:07.847 [2024-07-14 04:00:11.705444] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 
00:28:07.847 [2024-07-14 04:00:11.705457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:28:07.847 [2024-07-14 04:00:11.705471] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:28:07.847 [2024-07-14 04:00:11.705484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:28:07.847 [2024-07-14 04:00:11.705498] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 
00:28:07.847 [2024-07-14 04:00:11.705511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:28:07.847 [2024-07-14 04:00:11.705525] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:28:07.847 [2024-07-14 04:00:11.707814] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 
00:28:07.847 [2024-07-14 04:00:11.707854] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x136f790 (9): Bad file descriptor 
00:28:07.847 [2024-07-14 04:00:11.782453] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:28:07.847 [2024-07-14 04:00:15.430848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:8872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.847 [2024-07-14 04:00:15.430924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.847 [2024-07-14 04:00:15.430959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:8896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.847 [2024-07-14 04:00:15.430977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.847 [2024-07-14 04:00:15.430994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:8904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.847 [2024-07-14 04:00:15.431009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.847 [2024-07-14 04:00:15.431024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:8912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.847 [2024-07-14 04:00:15.431039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.847 [2024-07-14 04:00:15.431054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:8928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.847 [2024-07-14 04:00:15.431068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.847 [2024-07-14 04:00:15.431084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:8936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.847 [2024-07-14 04:00:15.431098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.847 [2024-07-14 04:00:15.431113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:8944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.847 [2024-07-14 04:00:15.431127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.847 [2024-07-14 04:00:15.431143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:8952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.847 [2024-07-14 04:00:15.431157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.847 [2024-07-14 04:00:15.431187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:8984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.847 [2024-07-14 04:00:15.431201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.847 [2024-07-14 04:00:15.431217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:9000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.847 [2024-07-14 04:00:15.431231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.847 [2024-07-14 04:00:15.431246] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:9016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.847 [2024-07-14 04:00:15.431260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.847 [2024-07-14 04:00:15.431275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.847 [2024-07-14 04:00:15.431288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.847 [2024-07-14 04:00:15.431304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:9056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.847 [2024-07-14 04:00:15.431318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.847 [2024-07-14 04:00:15.431333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:8352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.847 [2024-07-14 04:00:15.431350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.847 [2024-07-14 04:00:15.431367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:8360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.847 [2024-07-14 04:00:15.431381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.847 [2024-07-14 04:00:15.431396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:8368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.847 [2024-07-14 04:00:15.431410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.847 [2024-07-14 04:00:15.431425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:8384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.847 [2024-07-14 04:00:15.431439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.847 [2024-07-14 04:00:15.431455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:8392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.847 [2024-07-14 04:00:15.431468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.847 [2024-07-14 04:00:15.431483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:8416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.847 [2024-07-14 04:00:15.431497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.847 [2024-07-14 04:00:15.431511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:8448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.847 [2024-07-14 04:00:15.431525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.847 [2024-07-14 04:00:15.431540] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:88 nsid:1 lba:8456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.847 [2024-07-14 04:00:15.431553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.847 [2024-07-14 04:00:15.431568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:9064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.847 [2024-07-14 04:00:15.431582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.847 [2024-07-14 04:00:15.431597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:9072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.847 [2024-07-14 04:00:15.431610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.847 [2024-07-14 04:00:15.431625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:9088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.847 [2024-07-14 04:00:15.431639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.847 [2024-07-14 04:00:15.431654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:9104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.847 [2024-07-14 04:00:15.431668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.847 [2024-07-14 04:00:15.431683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:9128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.847 [2024-07-14 04:00:15.431697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.847 [2024-07-14 04:00:15.431711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:9136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.847 [2024-07-14 04:00:15.431729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.847 [2024-07-14 04:00:15.431744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:9144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.847 [2024-07-14 04:00:15.431759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.847 [2024-07-14 04:00:15.431773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:9152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.847 [2024-07-14 04:00:15.431787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.847 [2024-07-14 04:00:15.431801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:9160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.847 [2024-07-14 04:00:15.431815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.847 [2024-07-14 04:00:15.431830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:9168 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.848 [2024-07-14 04:00:15.431843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.848 [2024-07-14 04:00:15.431880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:9176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.848 [2024-07-14 04:00:15.431896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.848 [2024-07-14 04:00:15.431912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:8464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.848 [2024-07-14 04:00:15.431938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.848 [2024-07-14 04:00:15.431954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:8472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.848 [2024-07-14 04:00:15.431968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.848 [2024-07-14 04:00:15.431983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:8480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.848 [2024-07-14 04:00:15.431997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.848 [2024-07-14 04:00:15.432012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:8496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.848 [2024-07-14 04:00:15.432027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.848 [2024-07-14 04:00:15.432042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:8520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.848 [2024-07-14 04:00:15.432055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.848 [2024-07-14 04:00:15.432070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:8528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.848 [2024-07-14 04:00:15.432085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.848 [2024-07-14 04:00:15.432101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:8536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.848 [2024-07-14 04:00:15.432115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.848 [2024-07-14 04:00:15.432134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:8552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.848 [2024-07-14 04:00:15.432149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.848 [2024-07-14 04:00:15.432178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:9184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.848 
[2024-07-14 04:00:15.432193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.848 [2024-07-14 04:00:15.432208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.848 [2024-07-14 04:00:15.432230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.848 [2024-07-14 04:00:15.432244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:9200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.848 [2024-07-14 04:00:15.432258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.848 [2024-07-14 04:00:15.432273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.848 [2024-07-14 04:00:15.432286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.848 [2024-07-14 04:00:15.432301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:9216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.848 [2024-07-14 04:00:15.432315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.848 [2024-07-14 04:00:15.432330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:9224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.848 [2024-07-14 04:00:15.432343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.848 [2024-07-14 04:00:15.432358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:9232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.848 [2024-07-14 04:00:15.432372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.848 [2024-07-14 04:00:15.432387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:9240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.848 [2024-07-14 04:00:15.432400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.848 [2024-07-14 04:00:15.432415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.848 [2024-07-14 04:00:15.432429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.848 [2024-07-14 04:00:15.432444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:9256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.848 [2024-07-14 04:00:15.432458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.848 [2024-07-14 04:00:15.432473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:9264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.848 [2024-07-14 04:00:15.432487] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.848 [2024-07-14 04:00:15.432502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:9272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.848 [2024-07-14 04:00:15.432518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.848 [2024-07-14 04:00:15.432534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:9280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.848 [2024-07-14 04:00:15.432547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.848 [2024-07-14 04:00:15.432562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:9288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.848 [2024-07-14 04:00:15.432576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.848 [2024-07-14 04:00:15.432591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:8560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.848 [2024-07-14 04:00:15.432604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.848 [2024-07-14 04:00:15.432619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.848 [2024-07-14 04:00:15.432632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.848 [2024-07-14 04:00:15.432647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:8584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.848 [2024-07-14 04:00:15.432661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.848 [2024-07-14 04:00:15.432676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:8592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.848 [2024-07-14 04:00:15.432689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.848 [2024-07-14 04:00:15.432704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:8600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.848 [2024-07-14 04:00:15.432718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.848 [2024-07-14 04:00:15.432732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:8624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.848 [2024-07-14 04:00:15.432746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.848 [2024-07-14 04:00:15.432761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:8640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.848 [2024-07-14 04:00:15.432774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.848 [2024-07-14 04:00:15.432789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.848 [2024-07-14 04:00:15.432802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.848 [2024-07-14 04:00:15.432817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:9296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.849 [2024-07-14 04:00:15.432831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.849 [2024-07-14 04:00:15.432846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:9304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.849 [2024-07-14 04:00:15.432859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.849 [2024-07-14 04:00:15.432899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:9312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.849 [2024-07-14 04:00:15.432926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.849 [2024-07-14 04:00:15.432943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:9320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.849 [2024-07-14 04:00:15.432957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.849 [2024-07-14 04:00:15.432972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:9328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.849 [2024-07-14 04:00:15.432986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.849 [2024-07-14 04:00:15.433002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:9336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.849 [2024-07-14 04:00:15.433016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.849 [2024-07-14 04:00:15.433031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:9344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.849 [2024-07-14 04:00:15.433045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.849 [2024-07-14 04:00:15.433060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:9352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.849 [2024-07-14 04:00:15.433073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.849 [2024-07-14 04:00:15.433088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:9360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.849 [2024-07-14 04:00:15.433103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:28:07.849 [2024-07-14 04:00:15.433118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:9368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.849 [2024-07-14 04:00:15.433132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.849 [2024-07-14 04:00:15.433146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:9376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.849 [2024-07-14 04:00:15.433160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.849 [2024-07-14 04:00:15.433190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:9384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.849 [2024-07-14 04:00:15.433206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.849 [2024-07-14 04:00:15.433221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:8664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.849 [2024-07-14 04:00:15.433234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.849 [2024-07-14 04:00:15.433249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:8760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.849 [2024-07-14 04:00:15.433263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.849 [2024-07-14 04:00:15.433278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:8768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.849 [2024-07-14 04:00:15.433291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.849 [2024-07-14 04:00:15.433310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:8784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.849 [2024-07-14 04:00:15.433324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.849 [2024-07-14 04:00:15.433339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:8792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.849 [2024-07-14 04:00:15.433353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.849 [2024-07-14 04:00:15.433367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:8800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.849 [2024-07-14 04:00:15.433381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.849 [2024-07-14 04:00:15.433396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:8832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.849 [2024-07-14 04:00:15.433410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.849 [2024-07-14 04:00:15.433425] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:8848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.849 [2024-07-14 04:00:15.433439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.849 [2024-07-14 04:00:15.433454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.849 [2024-07-14 04:00:15.433467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.849 [2024-07-14 04:00:15.433482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:9400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.849 [2024-07-14 04:00:15.433496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.849 [2024-07-14 04:00:15.433511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:9408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.849 [2024-07-14 04:00:15.433524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.849 [2024-07-14 04:00:15.433539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:9416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.849 [2024-07-14 04:00:15.433552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.849 [2024-07-14 04:00:15.433568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:9424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.849 [2024-07-14 04:00:15.433581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.849 [2024-07-14 04:00:15.433596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:9432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.849 [2024-07-14 04:00:15.433610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.849 [2024-07-14 04:00:15.433624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:9440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.849 [2024-07-14 04:00:15.433638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.849 [2024-07-14 04:00:15.433653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:9448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.849 [2024-07-14 04:00:15.433670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.849 [2024-07-14 04:00:15.433686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:9456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.849 [2024-07-14 04:00:15.433699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.849 [2024-07-14 04:00:15.433714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:34 nsid:1 lba:9464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.849 [2024-07-14 04:00:15.433728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.849 [2024-07-14 04:00:15.433742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.849 [2024-07-14 04:00:15.433756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.849 [2024-07-14 04:00:15.433771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.849 [2024-07-14 04:00:15.433785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.849 [2024-07-14 04:00:15.433800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:9488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.849 [2024-07-14 04:00:15.433814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.849 [2024-07-14 04:00:15.433828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:8864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.849 [2024-07-14 04:00:15.433857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.849 [2024-07-14 04:00:15.433880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:8880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.849 [2024-07-14 04:00:15.433903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.849 [2024-07-14 04:00:15.433920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:8888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.849 [2024-07-14 04:00:15.433934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.849 [2024-07-14 04:00:15.433949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:8920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.849 [2024-07-14 04:00:15.433963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.849 [2024-07-14 04:00:15.433979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:8960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.849 [2024-07-14 04:00:15.433993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.849 [2024-07-14 04:00:15.434008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:8968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.849 [2024-07-14 04:00:15.434022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.849 [2024-07-14 04:00:15.434037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:8976 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:28:07.849 [2024-07-14 04:00:15.434051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.849 [2024-07-14 04:00:15.434070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:8992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.849 [2024-07-14 04:00:15.434085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.849 [2024-07-14 04:00:15.434100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:9496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.849 [2024-07-14 04:00:15.434114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.849 [2024-07-14 04:00:15.434129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:9504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.850 [2024-07-14 04:00:15.434143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.850 [2024-07-14 04:00:15.434158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:9512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.850 [2024-07-14 04:00:15.434172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.850 [2024-07-14 04:00:15.434187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:9520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.850 [2024-07-14 04:00:15.434204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.850 [2024-07-14 04:00:15.434233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:9528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.850 [2024-07-14 04:00:15.434247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.850 [2024-07-14 04:00:15.434262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.850 [2024-07-14 04:00:15.434275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.850 [2024-07-14 04:00:15.434290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:9544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.850 [2024-07-14 04:00:15.434304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.850 [2024-07-14 04:00:15.434319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:9552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.850 [2024-07-14 04:00:15.434333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.850 [2024-07-14 04:00:15.434347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:9560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.850 [2024-07-14 
04:00:15.434360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.850 [2024-07-14 04:00:15.434375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:9568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.850 [2024-07-14 04:00:15.434395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.850 [2024-07-14 04:00:15.434410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:9576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.850 [2024-07-14 04:00:15.434424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.850 [2024-07-14 04:00:15.434438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:9584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.850 [2024-07-14 04:00:15.434452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.850 [2024-07-14 04:00:15.434470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:9592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.850 [2024-07-14 04:00:15.434484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.850 [2024-07-14 04:00:15.434499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:9600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.850 [2024-07-14 04:00:15.434513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.850 [2024-07-14 04:00:15.434527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:9608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.850 [2024-07-14 04:00:15.434541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.850 [2024-07-14 04:00:15.434556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:9616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.850 [2024-07-14 04:00:15.434569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.850 [2024-07-14 04:00:15.434585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:9624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.850 [2024-07-14 04:00:15.434598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.850 [2024-07-14 04:00:15.434613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.850 [2024-07-14 04:00:15.434627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.850 [2024-07-14 04:00:15.434641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:9024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.850 [2024-07-14 04:00:15.434655] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.850 [2024-07-14 04:00:15.434669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:9032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.850 [2024-07-14 04:00:15.434683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.850 [2024-07-14 04:00:15.434698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.850 [2024-07-14 04:00:15.434712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.850 [2024-07-14 04:00:15.434742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:9080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.850 [2024-07-14 04:00:15.434757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.850 [2024-07-14 04:00:15.434772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:9096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.850 [2024-07-14 04:00:15.434787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.850 [2024-07-14 04:00:15.434802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:9112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.850 [2024-07-14 04:00:15.434816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.850 [2024-07-14 04:00:15.434831] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x137bcb0 is same with the state(5) to be set 00:28:07.850 [2024-07-14 04:00:15.434852] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:07.850 [2024-07-14 04:00:15.434870] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:07.850 [2024-07-14 04:00:15.434889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9120 len:8 PRP1 0x0 PRP2 0x0 00:28:07.850 [2024-07-14 04:00:15.434903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.850 [2024-07-14 04:00:15.434968] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x137bcb0 was disconnected and freed. reset controller. 
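[Editor's note] Every completion in the burst above carries the same status, printed by SPDK as "(00/08) ... p:0 m:0 dnr:0". The pair is (SCT/SC): status code type 0x0 (Generic Command Status) with status code 0x08, "Command Aborted due to SQ Deletion" per the NVMe base specification, meaning the I/O submission queue was torn down while these READ/WRITE commands were still queued, so they are completed as aborted rather than as media or transport errors. The standalone C sketch below (editorial illustration, not SPDK source; field layout taken from the NVMe spec) decodes the 16-bit status word into the fields the log prints:

/*
 * Minimal standalone sketch: decode the NVMe completion status word that
 * the log above reports as "(sct/sc) ... p:.. m:.. dnr:..".
 * SCT 0x0 / SC 0x08 is "Command Aborted due to SQ Deletion".
 */
#include <stdint.h>
#include <stdio.h>

struct nvme_status {
    uint8_t p;   /* phase tag              (bit 0)     */
    uint8_t sc;  /* status code            (bits 8:1)  */
    uint8_t sct; /* status code type       (bits 11:9) */
    uint8_t m;   /* more info in log page  (bit 14)    */
    uint8_t dnr; /* do not retry           (bit 15)    */
};

static struct nvme_status decode_status(uint16_t raw)
{
    struct nvme_status s = {
        .p   =  raw        & 0x1,
        .sc  = (raw >> 1)  & 0xff,
        .sct = (raw >> 9)  & 0x7,
        .m   = (raw >> 14) & 0x1,
        .dnr = (raw >> 15) & 0x1,
    };
    return s;
}

int main(void)
{
    /* SCT 0x0, SC 0x08, P/M/DNR clear -> raw word 0x0010 */
    struct nvme_status s = decode_status(0x0010);

    printf("(%02x/%02x) p:%u m:%u dnr:%u\n", s.sct, s.sc, s.p, s.m, s.dnr);
    return 0;
}

Because DNR (do not retry) is 0 on every aborted command, the initiator is allowed to retry them once a healthy queue pair exists again, which is exactly what happens after the reset recorded next.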
00:28:07.850 [2024-07-14 04:00:15.434987] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:28:07.850 [2024-07-14 04:00:15.435021] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:07.850 [2024-07-14 04:00:15.435039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.850 [2024-07-14 04:00:15.435054] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:07.850 [2024-07-14 04:00:15.435067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.850 [2024-07-14 04:00:15.435080] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:07.850 [2024-07-14 04:00:15.435093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.850 [2024-07-14 04:00:15.435106] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:07.850 [2024-07-14 04:00:15.435119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.850 [2024-07-14 04:00:15.435132] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:07.850 [2024-07-14 04:00:15.437329] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:07.850 [2024-07-14 04:00:15.437371] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x136f790 (9): Bad file descriptor 00:28:07.850 [2024-07-14 04:00:15.556311] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
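[Editor's note] The records just above are the substance of this burst: TCP qpair 0x137bcb0 is disconnected and freed, bdev_nvme starts a failover of the transport ID from 10.0.0.2:4421 to 10.0.0.2:4422, outstanding admin ASYNC EVENT REQUESTs are aborted, controller nqn.2016-06.io.spdk:cnode1 briefly enters the failed state, is disconnected (the "Bad file descriptor" flush error is the already-closed socket), and the reset then completes successfully, after which the next path switch produces the same abort pattern below. For orientation only, here is a hedged C sketch of the equivalent recovery step expressed with the public SPDK NVMe driver API; bdev_nvme performs this internally with its own state machine, and the sketch assumes a controller attached earlier (e.g. via spdk_nvme_connect()) with error handling reduced to the minimum:

/*
 * Hedged sketch: detect a failed I/O qpair, reset the controller, and
 * allocate a fresh qpair, using public SPDK NVMe API calls.
 */
#include <stdio.h>
#include "spdk/nvme.h"

static struct spdk_nvme_qpair *
recover_io_qpair(struct spdk_nvme_ctrlr *ctrlr, struct spdk_nvme_qpair *qpair)
{
    /* A negative return (e.g. -ENXIO) means the qpair has failed; its
     * queued commands are completed as ABORTED - SQ DELETION, as logged
     * above. */
    if (spdk_nvme_qpair_process_completions(qpair, 0) >= 0) {
        return qpair; /* still healthy */
    }

    spdk_nvme_ctrlr_free_io_qpair(qpair);

    if (spdk_nvme_ctrlr_reset(ctrlr) != 0) {
        fprintf(stderr, "controller reset failed\n");
        return NULL;
    }

    /* NULL/0 selects the default I/O qpair options. */
    return spdk_nvme_ctrlr_alloc_io_qpair(ctrlr, NULL, 0);
}

In this test the retryable aborted I/O is reissued on the new path once "Resetting controller successful" is logged, which is why the workload continues at lba 130544 onward below.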
00:28:07.850 [2024-07-14 04:00:19.949784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:130544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.850 [2024-07-14 04:00:19.949826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.850 [2024-07-14 04:00:19.949872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:130552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.850 [2024-07-14 04:00:19.949890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.850 [2024-07-14 04:00:19.949908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:130568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.850 [2024-07-14 04:00:19.949923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.850 [2024-07-14 04:00:19.949939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:130592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.850 [2024-07-14 04:00:19.949954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.850 [2024-07-14 04:00:19.949969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:130600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.850 [2024-07-14 04:00:19.949989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.850 [2024-07-14 04:00:19.950005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:130624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.850 [2024-07-14 04:00:19.950019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.850 [2024-07-14 04:00:19.950035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:130632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.850 [2024-07-14 04:00:19.950049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.850 [2024-07-14 04:00:19.950064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:130648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.850 [2024-07-14 04:00:19.950079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.850 [2024-07-14 04:00:19.950094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.850 [2024-07-14 04:00:19.950108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.850 [2024-07-14 04:00:19.950123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:130664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.850 [2024-07-14 04:00:19.950137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.850 [2024-07-14 
04:00:19.950153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:130696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.851 [2024-07-14 04:00:19.950167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.851 [2024-07-14 04:00:19.950197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:130720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.851 [2024-07-14 04:00:19.950211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.851 [2024-07-14 04:00:19.950226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:130728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.851 [2024-07-14 04:00:19.950239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.851 [2024-07-14 04:00:19.950253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:130752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.851 [2024-07-14 04:00:19.950267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.851 [2024-07-14 04:00:19.950282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:130776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.851 [2024-07-14 04:00:19.950295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.851 [2024-07-14 04:00:19.950310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:130784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.851 [2024-07-14 04:00:19.950323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.851 [2024-07-14 04:00:19.950338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:130792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.851 [2024-07-14 04:00:19.950352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.851 [2024-07-14 04:00:19.950371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.851 [2024-07-14 04:00:19.950385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.851 [2024-07-14 04:00:19.950400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.851 [2024-07-14 04:00:19.950413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.851 [2024-07-14 04:00:19.950429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.851 [2024-07-14 04:00:19.950442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.851 [2024-07-14 04:00:19.950457] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.851 [2024-07-14 04:00:19.950470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.851 [2024-07-14 04:00:19.950485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:130800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.851 [2024-07-14 04:00:19.950498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.851 [2024-07-14 04:00:19.950513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:130808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.851 [2024-07-14 04:00:19.950527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.851 [2024-07-14 04:00:19.950541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:130816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.851 [2024-07-14 04:00:19.950555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.851 [2024-07-14 04:00:19.950570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:130824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.851 [2024-07-14 04:00:19.950583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.851 [2024-07-14 04:00:19.950609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:130832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.851 [2024-07-14 04:00:19.950636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.851 [2024-07-14 04:00:19.950657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:130856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.851 [2024-07-14 04:00:19.950672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.851 [2024-07-14 04:00:19.950687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:130888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.851 [2024-07-14 04:00:19.950700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.851 [2024-07-14 04:00:19.950715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:130904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.851 [2024-07-14 04:00:19.950728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.851 [2024-07-14 04:00:19.950743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.851 [2024-07-14 04:00:19.950789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.851 [2024-07-14 04:00:19.950821] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:93 nsid:1 lba:304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.851 [2024-07-14 04:00:19.950879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.851 [2024-07-14 04:00:19.950898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.851 [2024-07-14 04:00:19.950913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.851 [2024-07-14 04:00:19.950929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.851 [2024-07-14 04:00:19.950949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.851 [2024-07-14 04:00:19.950965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.851 [2024-07-14 04:00:19.950979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.851 [2024-07-14 04:00:19.950994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.851 [2024-07-14 04:00:19.951008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.851 [2024-07-14 04:00:19.951023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.851 [2024-07-14 04:00:19.951037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.851 [2024-07-14 04:00:19.951052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.851 [2024-07-14 04:00:19.951066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.851 [2024-07-14 04:00:19.951081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.851 [2024-07-14 04:00:19.951096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.851 [2024-07-14 04:00:19.951111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.851 [2024-07-14 04:00:19.951125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.851 [2024-07-14 04:00:19.951140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.851 [2024-07-14 04:00:19.951155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.851 [2024-07-14 04:00:19.951170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:416 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:28:07.851 [2024-07-14 04:00:19.951199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.851 [2024-07-14 04:00:19.951215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.851 [2024-07-14 04:00:19.951229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.851 [2024-07-14 04:00:19.951243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.851 [2024-07-14 04:00:19.951267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.851 [2024-07-14 04:00:19.951283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.851 [2024-07-14 04:00:19.951296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.851 [2024-07-14 04:00:19.951311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.851 [2024-07-14 04:00:19.951325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.851 [2024-07-14 04:00:19.951340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.851 [2024-07-14 04:00:19.951354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.851 [2024-07-14 04:00:19.951369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.851 [2024-07-14 04:00:19.951383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.851 [2024-07-14 04:00:19.951398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.851 [2024-07-14 04:00:19.951412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.851 [2024-07-14 04:00:19.951426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.851 [2024-07-14 04:00:19.951445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.851 [2024-07-14 04:00:19.951461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.851 [2024-07-14 04:00:19.951475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.851 [2024-07-14 04:00:19.951490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:130912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.851 [2024-07-14 
04:00:19.951504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.851 [2024-07-14 04:00:19.951519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:130928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.851 [2024-07-14 04:00:19.951533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.851 [2024-07-14 04:00:19.951548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:130944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.851 [2024-07-14 04:00:19.951561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.852 [2024-07-14 04:00:19.951576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:130968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.852 [2024-07-14 04:00:19.951590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.852 [2024-07-14 04:00:19.951605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:130984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.852 [2024-07-14 04:00:19.951634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.852 [2024-07-14 04:00:19.951654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:131024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.852 [2024-07-14 04:00:19.951669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.852 [2024-07-14 04:00:19.951684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:131040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.852 [2024-07-14 04:00:19.951698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.852 [2024-07-14 04:00:19.951714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:131048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.852 [2024-07-14 04:00:19.951728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.852 [2024-07-14 04:00:19.951744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.852 [2024-07-14 04:00:19.951758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.852 [2024-07-14 04:00:19.951774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.852 [2024-07-14 04:00:19.951788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.852 [2024-07-14 04:00:19.951803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.852 [2024-07-14 04:00:19.951817] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.852 [2024-07-14 04:00:19.951832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.852 [2024-07-14 04:00:19.951864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.852 [2024-07-14 04:00:19.951905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.852 [2024-07-14 04:00:19.951920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.852 [2024-07-14 04:00:19.951936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.852 [2024-07-14 04:00:19.951949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.852 [2024-07-14 04:00:19.951964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.852 [2024-07-14 04:00:19.951984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.852 [2024-07-14 04:00:19.952000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.852 [2024-07-14 04:00:19.952014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.852 [2024-07-14 04:00:19.952029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.852 [2024-07-14 04:00:19.952043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.852 [2024-07-14 04:00:19.952058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.852 [2024-07-14 04:00:19.952075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.852 [2024-07-14 04:00:19.952091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.852 [2024-07-14 04:00:19.952105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.852 [2024-07-14 04:00:19.952120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.852 [2024-07-14 04:00:19.952133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.852 [2024-07-14 04:00:19.952157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.852 [2024-07-14 04:00:19.952186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.852 [2024-07-14 04:00:19.952201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.852 [2024-07-14 04:00:19.952219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.852 [2024-07-14 04:00:19.952234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:0 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.852 [2024-07-14 04:00:19.952247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.852 [2024-07-14 04:00:19.952262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:8 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.852 [2024-07-14 04:00:19.952275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.852 [2024-07-14 04:00:19.952290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:16 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.852 [2024-07-14 04:00:19.952303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.852 [2024-07-14 04:00:19.952318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:40 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.852 [2024-07-14 04:00:19.952331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.852 [2024-07-14 04:00:19.952346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:56 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.852 [2024-07-14 04:00:19.952359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.852 [2024-07-14 04:00:19.952374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:88 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.852 [2024-07-14 04:00:19.952387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.852 [2024-07-14 04:00:19.952402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.852 [2024-07-14 04:00:19.952416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.852 [2024-07-14 04:00:19.952430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.852 [2024-07-14 04:00:19.952444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.852 [2024-07-14 04:00:19.952459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.852 [2024-07-14 04:00:19.952480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.852 
[2024-07-14 04:00:19.952496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.852 [2024-07-14 04:00:19.952510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.852 [2024-07-14 04:00:19.952525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.852 [2024-07-14 04:00:19.952538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.852 [2024-07-14 04:00:19.952553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.852 [2024-07-14 04:00:19.952566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.852 [2024-07-14 04:00:19.952581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.852 [2024-07-14 04:00:19.952595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.852 [2024-07-14 04:00:19.952624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.852 [2024-07-14 04:00:19.952639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.852 [2024-07-14 04:00:19.952654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.852 [2024-07-14 04:00:19.952668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.852 [2024-07-14 04:00:19.952695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.852 [2024-07-14 04:00:19.952726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.852 [2024-07-14 04:00:19.952745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.852 [2024-07-14 04:00:19.952760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.852 [2024-07-14 04:00:19.952775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.852 [2024-07-14 04:00:19.952800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.853 [2024-07-14 04:00:19.952821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.853 [2024-07-14 04:00:19.952861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.853 [2024-07-14 04:00:19.952904] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.853 [2024-07-14 04:00:19.952926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.853 [2024-07-14 04:00:19.952943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.853 [2024-07-14 04:00:19.952958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.853 [2024-07-14 04:00:19.952977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.853 [2024-07-14 04:00:19.952992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.853 [2024-07-14 04:00:19.953007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.853 [2024-07-14 04:00:19.953021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.853 [2024-07-14 04:00:19.953036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.853 [2024-07-14 04:00:19.953050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.853 [2024-07-14 04:00:19.953066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.853 [2024-07-14 04:00:19.953085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.853 [2024-07-14 04:00:19.953101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.853 [2024-07-14 04:00:19.953115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.853 [2024-07-14 04:00:19.953129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.853 [2024-07-14 04:00:19.953144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.853 [2024-07-14 04:00:19.953166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.853 [2024-07-14 04:00:19.953180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.853 [2024-07-14 04:00:19.953195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.853 [2024-07-14 04:00:19.953209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.853 [2024-07-14 04:00:19.953226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 
nsid:1 lba:168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.853 [2024-07-14 04:00:19.953240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.853 [2024-07-14 04:00:19.953255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.853 [2024-07-14 04:00:19.953269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.853 [2024-07-14 04:00:19.953291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.853 [2024-07-14 04:00:19.953305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.853 [2024-07-14 04:00:19.953320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.853 [2024-07-14 04:00:19.953334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.853 [2024-07-14 04:00:19.953349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.853 [2024-07-14 04:00:19.953371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.853 [2024-07-14 04:00:19.953389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.853 [2024-07-14 04:00:19.953404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.853 [2024-07-14 04:00:19.953420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.853 [2024-07-14 04:00:19.953434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.853 [2024-07-14 04:00:19.953449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.853 [2024-07-14 04:00:19.953463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.853 [2024-07-14 04:00:19.953479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.853 [2024-07-14 04:00:19.953493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.853 [2024-07-14 04:00:19.953508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.853 [2024-07-14 04:00:19.953524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.853 [2024-07-14 04:00:19.953539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:07.853 [2024-07-14 04:00:19.953554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.853 [2024-07-14 04:00:19.953569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.853 [2024-07-14 04:00:19.953592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.853 [2024-07-14 04:00:19.953621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.853 [2024-07-14 04:00:19.953639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.853 [2024-07-14 04:00:19.953654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.853 [2024-07-14 04:00:19.953669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.853 [2024-07-14 04:00:19.953684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.853 [2024-07-14 04:00:19.953698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.853 [2024-07-14 04:00:19.953713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.853 [2024-07-14 04:00:19.953727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.853 [2024-07-14 04:00:19.953742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.853 [2024-07-14 04:00:19.953756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.853 [2024-07-14 04:00:19.953771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.853 [2024-07-14 04:00:19.953789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.853 [2024-07-14 04:00:19.953804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.853 [2024-07-14 04:00:19.953819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.853 [2024-07-14 04:00:19.953834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.853 [2024-07-14 04:00:19.953847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.853 [2024-07-14 04:00:19.953864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.853 [2024-07-14 04:00:19.953887] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.853 [2024-07-14 04:00:19.953903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.853 [2024-07-14 04:00:19.953917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.853 [2024-07-14 04:00:19.953933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.853 [2024-07-14 04:00:19.953946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.853 [2024-07-14 04:00:19.953962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.853 [2024-07-14 04:00:19.953976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.853 [2024-07-14 04:00:19.953991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.853 [2024-07-14 04:00:19.954005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.853 [2024-07-14 04:00:19.954020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.853 [2024-07-14 04:00:19.954034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.853 [2024-07-14 04:00:19.954057] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1392230 is same with the state(5) to be set 00:28:07.853 [2024-07-14 04:00:19.954075] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:07.853 [2024-07-14 04:00:19.954087] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:07.853 [2024-07-14 04:00:19.954099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:376 len:8 PRP1 0x0 PRP2 0x0 00:28:07.853 [2024-07-14 04:00:19.954112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.853 [2024-07-14 04:00:19.954184] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1392230 was disconnected and freed. reset controller. 
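The wall of notices above is the normal side effect of the failover path tearing down its TCP qpair: every I/O still queued on that qpair is completed with status (00/08), which on the usual sct/sc reading of that field is Status Code Type 0h (generic) and Status Code 08h (Command Aborted due to SQ Deletion), before the controller is reset on another path. A minimal sketch, assuming the try.txt capture file this failover test writes, for sizing that flush after the fact:

# Count how many queued I/Os were flushed as ABORTED - SQ DELETION during the
# qpair teardown; try.txt is the per-test log captured by host/failover.sh.
grep -c 'ABORTED - SQ DELETION' \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt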
00:28:07.853 [2024-07-14 04:00:19.954202] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:28:07.853 [2024-07-14 04:00:19.954236] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:07.853 [2024-07-14 04:00:19.954254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.853 [2024-07-14 04:00:19.954273] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:07.853 [2024-07-14 04:00:19.954288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.853 [2024-07-14 04:00:19.954302] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:07.854 [2024-07-14 04:00:19.954315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.854 [2024-07-14 04:00:19.954329] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:07.854 [2024-07-14 04:00:19.954341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.854 [2024-07-14 04:00:19.954354] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:07.854 [2024-07-14 04:00:19.954410] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x136f790 (9): Bad file descriptor 00:28:07.854 [2024-07-14 04:00:19.956624] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:07.854 [2024-07-14 04:00:19.991630] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
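Once a reset like the one above completes, the path the bdev is actually using can be read back over the bdevperf RPC socket. A minimal sketch, assuming the /var/tmp/bdevperf.sock socket and the NVMe0 controller name used by this test:

# Print the trsvcid (TCP port) of the path NVMe0 is currently attached through.
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
    bdev_nvme_get_controllers -n NVMe0 | jq -r '.[].ctrlrs[].trid.trsvcid'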
00:28:07.854 00:28:07.854 Latency(us) 00:28:07.854 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:07.854 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:28:07.854 Verification LBA range: start 0x0 length 0x4000 00:28:07.854 NVMe0n1 : 15.01 12913.74 50.44 886.37 0.00 9258.54 694.80 14660.65 00:28:07.854 =================================================================================================================== 00:28:07.854 Total : 12913.74 50.44 886.37 0.00 9258.54 694.80 14660.65 00:28:07.854 Received shutdown signal, test time was about 15.000000 seconds 00:28:07.854 00:28:07.854 Latency(us) 00:28:07.854 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:07.854 =================================================================================================================== 00:28:07.854 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:07.854 04:00:25 -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:28:07.854 04:00:25 -- host/failover.sh@65 -- # count=3 00:28:07.854 04:00:25 -- host/failover.sh@67 -- # (( count != 3 )) 00:28:07.854 04:00:25 -- host/failover.sh@73 -- # bdevperf_pid=2489965 00:28:07.854 04:00:25 -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:28:07.854 04:00:25 -- host/failover.sh@75 -- # waitforlisten 2489965 /var/tmp/bdevperf.sock 00:28:07.854 04:00:25 -- common/autotest_common.sh@819 -- # '[' -z 2489965 ']' 00:28:07.854 04:00:25 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:07.854 04:00:25 -- common/autotest_common.sh@824 -- # local max_retries=100 00:28:07.854 04:00:25 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:07.854 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
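The bdevperf invocation traced above runs in the wait-for-RPC style: started with -z it comes up with no work to do, exposes /var/tmp/bdevperf.sock, and only starts I/O when the companion bdevperf.py script sends perform_tests, which lets the test attach and drop NVMe paths in between. A condensed sketch of that pattern with the paths from this job:

BASE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
# Start bdevperf idle on its own RPC socket (same flags as host/failover.sh@72).
$BASE/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f &
# ... attach the NVMe0 paths over /var/tmp/bdevperf.sock (see the rpc.py calls below) ...
# Kick off the configured workload once the bdev exists.
$BASE/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests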
00:28:07.854 04:00:25 -- common/autotest_common.sh@828 -- # xtrace_disable 00:28:07.854 04:00:25 -- common/autotest_common.sh@10 -- # set +x 00:28:08.111 04:00:26 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:28:08.111 04:00:26 -- common/autotest_common.sh@852 -- # return 0 00:28:08.111 04:00:26 -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:28:08.368 [2024-07-14 04:00:27.063664] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:28:08.368 04:00:27 -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:28:08.368 [2024-07-14 04:00:27.300354] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:28:08.625 04:00:27 -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:28:08.883 NVMe0n1 00:28:08.883 04:00:27 -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:28:09.141 00:28:09.141 04:00:28 -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:28:09.401 00:28:09.660 04:00:28 -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:28:09.660 04:00:28 -- host/failover.sh@82 -- # grep -q NVMe0 00:28:09.660 04:00:28 -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:28:09.919 04:00:28 -- host/failover.sh@87 -- # sleep 3 00:28:13.206 04:00:31 -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:28:13.206 04:00:31 -- host/failover.sh@88 -- # grep -q NVMe0 00:28:13.206 04:00:32 -- host/failover.sh@90 -- # run_test_pid=2490658 00:28:13.206 04:00:32 -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:28:13.206 04:00:32 -- host/failover.sh@92 -- # wait 2490658 00:28:14.579 0 00:28:14.579 04:00:33 -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:28:14.579 [2024-07-14 04:00:25.899369] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
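Condensing the xtrace above: the test adds two more listeners on the target subsystem, attaches one NVMe0 path per port through the bdevperf RPC socket, then detaches the active path so bdev_nvme is forced to fail over to a surviving trid. A sketch with the exact NQN and ports from this run:

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
# Extra target listeners alongside the existing one on 4420.
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
# One NVMe0 path per port, all attached via the bdevperf RPC socket.
for port in 4420 4421 4422; do
    $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.2 -s "$port" -f ipv4 -n nqn.2016-06.io.spdk:cnode1
done
# Dropping the 4420 path forces bdev_nvme to fail over to a remaining path.
$RPC -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1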
00:28:14.579 [2024-07-14 04:00:25.899462] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2489965 ] 00:28:14.579 EAL: No free 2048 kB hugepages reported on node 1 00:28:14.579 [2024-07-14 04:00:25.959840] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:14.579 [2024-07-14 04:00:26.041378] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:14.579 [2024-07-14 04:00:28.780314] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:28:14.579 [2024-07-14 04:00:28.780398] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:14.579 [2024-07-14 04:00:28.780422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:14.579 [2024-07-14 04:00:28.780453] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:14.579 [2024-07-14 04:00:28.780467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:14.579 [2024-07-14 04:00:28.780480] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:14.579 [2024-07-14 04:00:28.780494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:14.579 [2024-07-14 04:00:28.780507] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:14.579 [2024-07-14 04:00:28.780521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:14.580 [2024-07-14 04:00:28.780534] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:14.580 [2024-07-14 04:00:28.780571] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:14.580 [2024-07-14 04:00:28.780603] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb03790 (9): Bad file descriptor 00:28:14.580 [2024-07-14 04:00:28.801396] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:28:14.580 Running I/O for 1 seconds... 
00:28:14.580 00:28:14.580 Latency(us) 00:28:14.580 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:14.580 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:28:14.580 Verification LBA range: start 0x0 length 0x4000 00:28:14.580 NVMe0n1 : 1.01 12910.72 50.43 0.00 0.00 9870.01 1110.47 13883.92 00:28:14.580 =================================================================================================================== 00:28:14.580 Total : 12910.72 50.43 0.00 0.00 9870.01 1110.47 13883.92 00:28:14.580 04:00:33 -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:28:14.580 04:00:33 -- host/failover.sh@95 -- # grep -q NVMe0 00:28:14.580 04:00:33 -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:28:14.864 04:00:33 -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:28:14.864 04:00:33 -- host/failover.sh@99 -- # grep -q NVMe0 00:28:15.121 04:00:33 -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:28:15.380 04:00:34 -- host/failover.sh@101 -- # sleep 3 00:28:18.664 04:00:37 -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:28:18.664 04:00:37 -- host/failover.sh@103 -- # grep -q NVMe0 00:28:18.664 04:00:37 -- host/failover.sh@108 -- # killprocess 2489965 00:28:18.664 04:00:37 -- common/autotest_common.sh@926 -- # '[' -z 2489965 ']' 00:28:18.664 04:00:37 -- common/autotest_common.sh@930 -- # kill -0 2489965 00:28:18.664 04:00:37 -- common/autotest_common.sh@931 -- # uname 00:28:18.664 04:00:37 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:28:18.664 04:00:37 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2489965 00:28:18.664 04:00:37 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:28:18.664 04:00:37 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:28:18.664 04:00:37 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2489965' 00:28:18.664 killing process with pid 2489965 00:28:18.664 04:00:37 -- common/autotest_common.sh@945 -- # kill 2489965 00:28:18.664 04:00:37 -- common/autotest_common.sh@950 -- # wait 2489965 00:28:18.922 04:00:37 -- host/failover.sh@110 -- # sync 00:28:18.922 04:00:37 -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:19.181 04:00:37 -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:28:19.181 04:00:37 -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:28:19.181 04:00:37 -- host/failover.sh@116 -- # nvmftestfini 00:28:19.181 04:00:37 -- nvmf/common.sh@476 -- # nvmfcleanup 00:28:19.181 04:00:37 -- nvmf/common.sh@116 -- # sync 00:28:19.181 04:00:37 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:28:19.181 04:00:37 -- nvmf/common.sh@119 -- # set +e 00:28:19.181 04:00:37 -- nvmf/common.sh@120 -- # for i in {1..20} 00:28:19.181 04:00:37 -- nvmf/common.sh@121 -- 
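Each detach traced here should leave one more 'Resetting controller successful' notice in the captured output, the same signal the count check at host/failover.sh@65-67 keyed on earlier in this run. A minimal sketch of that style of verification, assuming the same try.txt capture file:

# Three forced path drops are expected to show up as three successful resets.
count=$(grep -c 'Resetting controller successful' \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt)
(( count == 3 )) || echo "expected 3 successful resets, saw $count"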
# modprobe -v -r nvme-tcp 00:28:19.181 rmmod nvme_tcp 00:28:19.181 rmmod nvme_fabrics 00:28:19.181 rmmod nvme_keyring 00:28:19.181 04:00:38 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:28:19.181 04:00:38 -- nvmf/common.sh@123 -- # set -e 00:28:19.181 04:00:38 -- nvmf/common.sh@124 -- # return 0 00:28:19.181 04:00:38 -- nvmf/common.sh@477 -- # '[' -n 2486983 ']' 00:28:19.181 04:00:38 -- nvmf/common.sh@478 -- # killprocess 2486983 00:28:19.181 04:00:38 -- common/autotest_common.sh@926 -- # '[' -z 2486983 ']' 00:28:19.181 04:00:38 -- common/autotest_common.sh@930 -- # kill -0 2486983 00:28:19.181 04:00:38 -- common/autotest_common.sh@931 -- # uname 00:28:19.181 04:00:38 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:28:19.181 04:00:38 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2486983 00:28:19.181 04:00:38 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:28:19.181 04:00:38 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:28:19.181 04:00:38 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2486983' 00:28:19.181 killing process with pid 2486983 00:28:19.181 04:00:38 -- common/autotest_common.sh@945 -- # kill 2486983 00:28:19.181 04:00:38 -- common/autotest_common.sh@950 -- # wait 2486983 00:28:19.440 04:00:38 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:28:19.440 04:00:38 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:28:19.440 04:00:38 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:28:19.440 04:00:38 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:19.440 04:00:38 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:28:19.440 04:00:38 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:19.440 04:00:38 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:19.440 04:00:38 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:21.975 04:00:40 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:28:21.975 00:28:21.975 real 0m36.547s 00:28:21.975 user 2m6.377s 00:28:21.975 sys 0m7.149s 00:28:21.975 04:00:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:21.975 04:00:40 -- common/autotest_common.sh@10 -- # set +x 00:28:21.975 ************************************ 00:28:21.975 END TEST nvmf_failover 00:28:21.975 ************************************ 00:28:21.975 04:00:40 -- nvmf/nvmf.sh@101 -- # run_test nvmf_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:28:21.975 04:00:40 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:28:21.975 04:00:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:21.975 04:00:40 -- common/autotest_common.sh@10 -- # set +x 00:28:21.975 ************************************ 00:28:21.975 START TEST nvmf_discovery 00:28:21.975 ************************************ 00:28:21.975 04:00:40 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:28:21.975 * Looking for test storage... 
00:28:21.975 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:21.975 04:00:40 -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:21.975 04:00:40 -- nvmf/common.sh@7 -- # uname -s 00:28:21.975 04:00:40 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:21.975 04:00:40 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:21.975 04:00:40 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:21.975 04:00:40 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:21.975 04:00:40 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:21.975 04:00:40 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:21.975 04:00:40 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:21.975 04:00:40 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:21.975 04:00:40 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:21.975 04:00:40 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:21.975 04:00:40 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:21.975 04:00:40 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:21.975 04:00:40 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:21.975 04:00:40 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:21.975 04:00:40 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:21.975 04:00:40 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:21.976 04:00:40 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:21.976 04:00:40 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:21.976 04:00:40 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:21.976 04:00:40 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:21.976 04:00:40 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:21.976 04:00:40 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:21.976 04:00:40 -- paths/export.sh@5 -- # export PATH 00:28:21.976 04:00:40 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:21.976 04:00:40 -- nvmf/common.sh@46 -- # : 0 00:28:21.976 04:00:40 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:28:21.976 04:00:40 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:28:21.976 04:00:40 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:28:21.976 04:00:40 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:21.976 04:00:40 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:21.976 04:00:40 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:28:21.976 04:00:40 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:28:21.976 04:00:40 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:28:21.976 04:00:40 -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:28:21.976 04:00:40 -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:28:21.976 04:00:40 -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:28:21.976 04:00:40 -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:28:21.976 04:00:40 -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:28:21.976 04:00:40 -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:28:21.976 04:00:40 -- host/discovery.sh@25 -- # nvmftestinit 00:28:21.976 04:00:40 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:28:21.976 04:00:40 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:21.976 04:00:40 -- nvmf/common.sh@436 -- # prepare_net_devs 00:28:21.976 04:00:40 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:28:21.976 04:00:40 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:28:21.976 04:00:40 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:21.976 04:00:40 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:21.976 04:00:40 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:21.976 04:00:40 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:28:21.976 04:00:40 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:28:21.976 04:00:40 -- nvmf/common.sh@284 -- # xtrace_disable 00:28:21.976 04:00:40 -- common/autotest_common.sh@10 -- # set +x 00:28:23.876 04:00:42 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:28:23.876 04:00:42 -- nvmf/common.sh@290 -- # pci_devs=() 00:28:23.876 04:00:42 -- nvmf/common.sh@290 -- # local -a pci_devs 00:28:23.876 04:00:42 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:28:23.876 04:00:42 -- 
nvmf/common.sh@291 -- # local -a pci_net_devs 00:28:23.876 04:00:42 -- nvmf/common.sh@292 -- # pci_drivers=() 00:28:23.876 04:00:42 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:28:23.876 04:00:42 -- nvmf/common.sh@294 -- # net_devs=() 00:28:23.876 04:00:42 -- nvmf/common.sh@294 -- # local -ga net_devs 00:28:23.876 04:00:42 -- nvmf/common.sh@295 -- # e810=() 00:28:23.876 04:00:42 -- nvmf/common.sh@295 -- # local -ga e810 00:28:23.876 04:00:42 -- nvmf/common.sh@296 -- # x722=() 00:28:23.876 04:00:42 -- nvmf/common.sh@296 -- # local -ga x722 00:28:23.876 04:00:42 -- nvmf/common.sh@297 -- # mlx=() 00:28:23.876 04:00:42 -- nvmf/common.sh@297 -- # local -ga mlx 00:28:23.876 04:00:42 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:23.876 04:00:42 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:23.876 04:00:42 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:23.876 04:00:42 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:23.876 04:00:42 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:23.876 04:00:42 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:23.876 04:00:42 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:23.876 04:00:42 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:23.876 04:00:42 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:23.876 04:00:42 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:23.876 04:00:42 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:23.876 04:00:42 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:28:23.876 04:00:42 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:28:23.876 04:00:42 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:28:23.876 04:00:42 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:28:23.876 04:00:42 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:28:23.876 04:00:42 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:28:23.876 04:00:42 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:28:23.876 04:00:42 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:23.876 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:23.876 04:00:42 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:28:23.876 04:00:42 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:28:23.876 04:00:42 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:23.876 04:00:42 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:23.876 04:00:42 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:28:23.876 04:00:42 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:28:23.876 04:00:42 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:23.876 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:23.876 04:00:42 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:28:23.876 04:00:42 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:28:23.876 04:00:42 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:23.876 04:00:42 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:23.876 04:00:42 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:28:23.876 04:00:42 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:28:23.876 04:00:42 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:28:23.876 04:00:42 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:28:23.876 04:00:42 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:28:23.876 
04:00:42 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:23.876 04:00:42 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:28:23.876 04:00:42 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:23.876 04:00:42 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:23.876 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:23.876 04:00:42 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:28:23.876 04:00:42 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:28:23.876 04:00:42 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:23.876 04:00:42 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:28:23.876 04:00:42 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:23.876 04:00:42 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:23.876 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:23.876 04:00:42 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:28:23.876 04:00:42 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:28:23.876 04:00:42 -- nvmf/common.sh@402 -- # is_hw=yes 00:28:23.876 04:00:42 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:28:23.876 04:00:42 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:28:23.876 04:00:42 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:28:23.876 04:00:42 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:23.876 04:00:42 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:23.876 04:00:42 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:23.876 04:00:42 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:28:23.876 04:00:42 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:23.876 04:00:42 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:23.876 04:00:42 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:28:23.876 04:00:42 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:23.876 04:00:42 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:23.876 04:00:42 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:28:23.876 04:00:42 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:28:23.876 04:00:42 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:28:23.876 04:00:42 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:23.876 04:00:42 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:23.876 04:00:42 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:23.876 04:00:42 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:28:23.876 04:00:42 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:23.876 04:00:42 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:23.876 04:00:42 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:23.876 04:00:42 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:28:23.876 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:23.876 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.143 ms 00:28:23.876 00:28:23.876 --- 10.0.0.2 ping statistics --- 00:28:23.876 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:23.876 rtt min/avg/max/mdev = 0.143/0.143/0.143/0.000 ms 00:28:23.876 04:00:42 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:23.876 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:23.876 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.159 ms 00:28:23.876 00:28:23.876 --- 10.0.0.1 ping statistics --- 00:28:23.876 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:23.876 rtt min/avg/max/mdev = 0.159/0.159/0.159/0.000 ms 00:28:23.876 04:00:42 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:23.876 04:00:42 -- nvmf/common.sh@410 -- # return 0 00:28:23.876 04:00:42 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:28:23.876 04:00:42 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:23.876 04:00:42 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:28:23.876 04:00:42 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:28:23.876 04:00:42 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:23.876 04:00:42 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:28:23.876 04:00:42 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:28:23.876 04:00:42 -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:28:23.876 04:00:42 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:28:23.876 04:00:42 -- common/autotest_common.sh@712 -- # xtrace_disable 00:28:23.876 04:00:42 -- common/autotest_common.sh@10 -- # set +x 00:28:23.876 04:00:42 -- nvmf/common.sh@469 -- # nvmfpid=2493403 00:28:23.876 04:00:42 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:28:23.876 04:00:42 -- nvmf/common.sh@470 -- # waitforlisten 2493403 00:28:23.876 04:00:42 -- common/autotest_common.sh@819 -- # '[' -z 2493403 ']' 00:28:23.876 04:00:42 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:23.876 04:00:42 -- common/autotest_common.sh@824 -- # local max_retries=100 00:28:23.876 04:00:42 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:23.876 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:23.876 04:00:42 -- common/autotest_common.sh@828 -- # xtrace_disable 00:28:23.876 04:00:42 -- common/autotest_common.sh@10 -- # set +x 00:28:23.876 [2024-07-14 04:00:42.546538] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:28:23.876 [2024-07-14 04:00:42.546605] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:23.876 EAL: No free 2048 kB hugepages reported on node 1 00:28:23.876 [2024-07-14 04:00:42.613436] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:23.876 [2024-07-14 04:00:42.702275] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:28:23.876 [2024-07-14 04:00:42.702417] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:23.876 [2024-07-14 04:00:42.702435] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:23.876 [2024-07-14 04:00:42.702447] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
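The nvmf_tcp_init plumbing traced just above splits the two cvl ports across a network namespace so the initiator (10.0.0.1 on cvl_0_1, root namespace) and the target (10.0.0.2 on cvl_0_0 inside cvl_0_0_ns_spdk) talk over the physical link. Condensed into a sketch with the interface names from this job:

# Target side lives in its own namespace; the initiator stays in the root one.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Let NVMe/TCP traffic in from the initiator port, then sanity-check both directions.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1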
00:28:23.876 [2024-07-14 04:00:42.702483] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:24.809 04:00:43 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:28:24.809 04:00:43 -- common/autotest_common.sh@852 -- # return 0 00:28:24.809 04:00:43 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:28:24.809 04:00:43 -- common/autotest_common.sh@718 -- # xtrace_disable 00:28:24.809 04:00:43 -- common/autotest_common.sh@10 -- # set +x 00:28:24.809 04:00:43 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:24.809 04:00:43 -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:24.809 04:00:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:24.809 04:00:43 -- common/autotest_common.sh@10 -- # set +x 00:28:24.809 [2024-07-14 04:00:43.548958] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:24.809 04:00:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:24.809 04:00:43 -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:28:24.809 04:00:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:24.809 04:00:43 -- common/autotest_common.sh@10 -- # set +x 00:28:24.809 [2024-07-14 04:00:43.557094] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:28:24.809 04:00:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:24.809 04:00:43 -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:28:24.809 04:00:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:24.809 04:00:43 -- common/autotest_common.sh@10 -- # set +x 00:28:24.809 null0 00:28:24.809 04:00:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:24.809 04:00:43 -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:28:24.809 04:00:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:24.809 04:00:43 -- common/autotest_common.sh@10 -- # set +x 00:28:24.809 null1 00:28:24.809 04:00:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:24.809 04:00:43 -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:28:24.809 04:00:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:24.809 04:00:43 -- common/autotest_common.sh@10 -- # set +x 00:28:24.809 04:00:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:24.809 04:00:43 -- host/discovery.sh@45 -- # hostpid=2493498 00:28:24.809 04:00:43 -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:28:24.809 04:00:43 -- host/discovery.sh@46 -- # waitforlisten 2493498 /tmp/host.sock 00:28:24.809 04:00:43 -- common/autotest_common.sh@819 -- # '[' -z 2493498 ']' 00:28:24.809 04:00:43 -- common/autotest_common.sh@823 -- # local rpc_addr=/tmp/host.sock 00:28:24.809 04:00:43 -- common/autotest_common.sh@824 -- # local max_retries=100 00:28:24.810 04:00:43 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:28:24.810 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:28:24.810 04:00:43 -- common/autotest_common.sh@828 -- # xtrace_disable 00:28:24.810 04:00:43 -- common/autotest_common.sh@10 -- # set +x 00:28:24.810 [2024-07-14 04:00:43.626401] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
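At this point the discovery test has its target app up (nvmfpid 2493403) and is bringing up a second, single-core nvmf_tgt that plays the host role on /tmp/host.sock; the target publishes a discovery listener on port 8009 plus two null bdevs, and the host side then follows that discovery service, as the xtrace below shows. A condensed sketch of the topology, written against rpc.py rather than the test's rpc_cmd wrapper:

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
# Target side: TCP transport, a discovery listener on 8009, and null bdevs to export.
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009
$RPC bdev_null_create null0 1000 512
$RPC bdev_null_create null1 1000 512
# Host side: the second nvmf_tgt on /tmp/host.sock follows the discovery service.
$RPC -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 \
    -f ipv4 -q nqn.2021-12.io.spdk:test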
00:28:24.810 [2024-07-14 04:00:43.626478] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2493498 ] 00:28:24.810 EAL: No free 2048 kB hugepages reported on node 1 00:28:24.810 [2024-07-14 04:00:43.690038] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:25.069 [2024-07-14 04:00:43.783511] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:28:25.069 [2024-07-14 04:00:43.783675] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:26.004 04:00:44 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:28:26.004 04:00:44 -- common/autotest_common.sh@852 -- # return 0 00:28:26.004 04:00:44 -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:26.004 04:00:44 -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:28:26.004 04:00:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:26.004 04:00:44 -- common/autotest_common.sh@10 -- # set +x 00:28:26.004 04:00:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:26.004 04:00:44 -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:28:26.004 04:00:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:26.004 04:00:44 -- common/autotest_common.sh@10 -- # set +x 00:28:26.004 04:00:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:26.004 04:00:44 -- host/discovery.sh@72 -- # notify_id=0 00:28:26.004 04:00:44 -- host/discovery.sh@78 -- # get_subsystem_names 00:28:26.004 04:00:44 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:28:26.004 04:00:44 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:28:26.005 04:00:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:26.005 04:00:44 -- common/autotest_common.sh@10 -- # set +x 00:28:26.005 04:00:44 -- host/discovery.sh@59 -- # sort 00:28:26.005 04:00:44 -- host/discovery.sh@59 -- # xargs 00:28:26.005 04:00:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:26.005 04:00:44 -- host/discovery.sh@78 -- # [[ '' == '' ]] 00:28:26.005 04:00:44 -- host/discovery.sh@79 -- # get_bdev_list 00:28:26.005 04:00:44 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:26.005 04:00:44 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:26.005 04:00:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:26.005 04:00:44 -- common/autotest_common.sh@10 -- # set +x 00:28:26.005 04:00:44 -- host/discovery.sh@55 -- # sort 00:28:26.005 04:00:44 -- host/discovery.sh@55 -- # xargs 00:28:26.005 04:00:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:26.005 04:00:44 -- host/discovery.sh@79 -- # [[ '' == '' ]] 00:28:26.005 04:00:44 -- host/discovery.sh@81 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:28:26.005 04:00:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:26.005 04:00:44 -- common/autotest_common.sh@10 -- # set +x 00:28:26.005 04:00:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:26.005 04:00:44 -- host/discovery.sh@82 -- # get_subsystem_names 00:28:26.005 04:00:44 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:28:26.005 04:00:44 -- common/autotest_common.sh@551 -- # 
xtrace_disable 00:28:26.005 04:00:44 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:28:26.005 04:00:44 -- common/autotest_common.sh@10 -- # set +x 00:28:26.005 04:00:44 -- host/discovery.sh@59 -- # sort 00:28:26.005 04:00:44 -- host/discovery.sh@59 -- # xargs 00:28:26.005 04:00:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:26.005 04:00:44 -- host/discovery.sh@82 -- # [[ '' == '' ]] 00:28:26.005 04:00:44 -- host/discovery.sh@83 -- # get_bdev_list 00:28:26.005 04:00:44 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:26.005 04:00:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:26.005 04:00:44 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:26.005 04:00:44 -- common/autotest_common.sh@10 -- # set +x 00:28:26.005 04:00:44 -- host/discovery.sh@55 -- # sort 00:28:26.005 04:00:44 -- host/discovery.sh@55 -- # xargs 00:28:26.005 04:00:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:26.005 04:00:44 -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:28:26.005 04:00:44 -- host/discovery.sh@85 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:28:26.005 04:00:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:26.005 04:00:44 -- common/autotest_common.sh@10 -- # set +x 00:28:26.005 04:00:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:26.005 04:00:44 -- host/discovery.sh@86 -- # get_subsystem_names 00:28:26.005 04:00:44 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:28:26.005 04:00:44 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:28:26.005 04:00:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:26.005 04:00:44 -- host/discovery.sh@59 -- # sort 00:28:26.005 04:00:44 -- common/autotest_common.sh@10 -- # set +x 00:28:26.005 04:00:44 -- host/discovery.sh@59 -- # xargs 00:28:26.005 04:00:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:26.005 04:00:44 -- host/discovery.sh@86 -- # [[ '' == '' ]] 00:28:26.005 04:00:44 -- host/discovery.sh@87 -- # get_bdev_list 00:28:26.005 04:00:44 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:26.005 04:00:44 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:26.005 04:00:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:26.005 04:00:44 -- common/autotest_common.sh@10 -- # set +x 00:28:26.005 04:00:44 -- host/discovery.sh@55 -- # sort 00:28:26.005 04:00:44 -- host/discovery.sh@55 -- # xargs 00:28:26.005 04:00:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:26.005 04:00:44 -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:28:26.005 04:00:44 -- host/discovery.sh@91 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:26.005 04:00:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:26.005 04:00:44 -- common/autotest_common.sh@10 -- # set +x 00:28:26.005 [2024-07-14 04:00:44.908855] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:26.005 04:00:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:26.005 04:00:44 -- host/discovery.sh@92 -- # get_subsystem_names 00:28:26.005 04:00:44 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:28:26.005 04:00:44 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:28:26.005 04:00:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:26.005 04:00:44 -- common/autotest_common.sh@10 -- # set +x 00:28:26.005 04:00:44 -- host/discovery.sh@59 -- # sort 00:28:26.005 04:00:44 -- 
host/discovery.sh@59 -- # xargs 00:28:26.005 04:00:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:26.264 04:00:44 -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:28:26.264 04:00:44 -- host/discovery.sh@93 -- # get_bdev_list 00:28:26.264 04:00:44 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:26.264 04:00:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:26.264 04:00:44 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:26.264 04:00:44 -- common/autotest_common.sh@10 -- # set +x 00:28:26.264 04:00:44 -- host/discovery.sh@55 -- # sort 00:28:26.264 04:00:44 -- host/discovery.sh@55 -- # xargs 00:28:26.264 04:00:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:26.264 04:00:44 -- host/discovery.sh@93 -- # [[ '' == '' ]] 00:28:26.264 04:00:44 -- host/discovery.sh@94 -- # get_notification_count 00:28:26.264 04:00:44 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:28:26.264 04:00:44 -- host/discovery.sh@74 -- # jq '. | length' 00:28:26.264 04:00:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:26.264 04:00:44 -- common/autotest_common.sh@10 -- # set +x 00:28:26.264 04:00:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:26.264 04:00:45 -- host/discovery.sh@74 -- # notification_count=0 00:28:26.264 04:00:45 -- host/discovery.sh@75 -- # notify_id=0 00:28:26.264 04:00:45 -- host/discovery.sh@95 -- # [[ 0 == 0 ]] 00:28:26.264 04:00:45 -- host/discovery.sh@99 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:28:26.264 04:00:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:26.264 04:00:45 -- common/autotest_common.sh@10 -- # set +x 00:28:26.264 04:00:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:26.264 04:00:45 -- host/discovery.sh@100 -- # sleep 1 00:28:26.834 [2024-07-14 04:00:45.689081] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:28:26.834 [2024-07-14 04:00:45.689109] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:28:26.834 [2024-07-14 04:00:45.689133] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:28:27.091 [2024-07-14 04:00:45.775442] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:28:27.091 [2024-07-14 04:00:45.959531] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:28:27.091 [2024-07-14 04:00:45.959560] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:28:27.349 04:00:46 -- host/discovery.sh@101 -- # get_subsystem_names 00:28:27.349 04:00:46 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:28:27.349 04:00:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:27.349 04:00:46 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:28:27.349 04:00:46 -- common/autotest_common.sh@10 -- # set +x 00:28:27.349 04:00:46 -- host/discovery.sh@59 -- # sort 00:28:27.349 04:00:46 -- host/discovery.sh@59 -- # xargs 00:28:27.349 04:00:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:27.349 04:00:46 -- host/discovery.sh@101 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:27.349 04:00:46 -- host/discovery.sh@102 -- # get_bdev_list 00:28:27.349 04:00:46 -- host/discovery.sh@55 -- # 
rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:27.349 04:00:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:27.349 04:00:46 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:27.349 04:00:46 -- common/autotest_common.sh@10 -- # set +x 00:28:27.349 04:00:46 -- host/discovery.sh@55 -- # sort 00:28:27.349 04:00:46 -- host/discovery.sh@55 -- # xargs 00:28:27.349 04:00:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:27.349 04:00:46 -- host/discovery.sh@102 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:28:27.349 04:00:46 -- host/discovery.sh@103 -- # get_subsystem_paths nvme0 00:28:27.349 04:00:46 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:28:27.349 04:00:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:27.349 04:00:46 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:28:27.349 04:00:46 -- common/autotest_common.sh@10 -- # set +x 00:28:27.349 04:00:46 -- host/discovery.sh@63 -- # sort -n 00:28:27.349 04:00:46 -- host/discovery.sh@63 -- # xargs 00:28:27.349 04:00:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:27.349 04:00:46 -- host/discovery.sh@103 -- # [[ 4420 == \4\4\2\0 ]] 00:28:27.349 04:00:46 -- host/discovery.sh@104 -- # get_notification_count 00:28:27.349 04:00:46 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:28:27.349 04:00:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:27.349 04:00:46 -- host/discovery.sh@74 -- # jq '. | length' 00:28:27.349 04:00:46 -- common/autotest_common.sh@10 -- # set +x 00:28:27.349 04:00:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:27.349 04:00:46 -- host/discovery.sh@74 -- # notification_count=1 00:28:27.349 04:00:46 -- host/discovery.sh@75 -- # notify_id=1 00:28:27.349 04:00:46 -- host/discovery.sh@105 -- # [[ 1 == 1 ]] 00:28:27.349 04:00:46 -- host/discovery.sh@108 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:28:27.349 04:00:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:27.349 04:00:46 -- common/autotest_common.sh@10 -- # set +x 00:28:27.349 04:00:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:27.349 04:00:46 -- host/discovery.sh@109 -- # sleep 1 00:28:28.286 04:00:47 -- host/discovery.sh@110 -- # get_bdev_list 00:28:28.286 04:00:47 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:28.286 04:00:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:28.286 04:00:47 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:28.286 04:00:47 -- common/autotest_common.sh@10 -- # set +x 00:28:28.286 04:00:47 -- host/discovery.sh@55 -- # sort 00:28:28.286 04:00:47 -- host/discovery.sh@55 -- # xargs 00:28:28.546 04:00:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:28.546 04:00:47 -- host/discovery.sh@110 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:28:28.546 04:00:47 -- host/discovery.sh@111 -- # get_notification_count 00:28:28.546 04:00:47 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:28:28.546 04:00:47 -- host/discovery.sh@74 -- # jq '. 
| length' 00:28:28.546 04:00:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:28.546 04:00:47 -- common/autotest_common.sh@10 -- # set +x 00:28:28.546 04:00:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:28.546 04:00:47 -- host/discovery.sh@74 -- # notification_count=1 00:28:28.546 04:00:47 -- host/discovery.sh@75 -- # notify_id=2 00:28:28.546 04:00:47 -- host/discovery.sh@112 -- # [[ 1 == 1 ]] 00:28:28.546 04:00:47 -- host/discovery.sh@116 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:28:28.546 04:00:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:28.546 04:00:47 -- common/autotest_common.sh@10 -- # set +x 00:28:28.546 [2024-07-14 04:00:47.307933] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:28:28.546 [2024-07-14 04:00:47.309074] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:28:28.546 [2024-07-14 04:00:47.309124] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:28:28.546 04:00:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:28.546 04:00:47 -- host/discovery.sh@117 -- # sleep 1 00:28:28.546 [2024-07-14 04:00:47.435565] bdev_nvme.c:6683:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:28:28.805 [2024-07-14 04:00:47.574414] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:28:28.805 [2024-07-14 04:00:47.574441] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:28:28.805 [2024-07-14 04:00:47.574452] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:28:29.742 04:00:48 -- host/discovery.sh@118 -- # get_subsystem_names 00:28:29.742 04:00:48 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:28:29.742 04:00:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:29.742 04:00:48 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:28:29.742 04:00:48 -- common/autotest_common.sh@10 -- # set +x 00:28:29.742 04:00:48 -- host/discovery.sh@59 -- # sort 00:28:29.742 04:00:48 -- host/discovery.sh@59 -- # xargs 00:28:29.742 04:00:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:29.742 04:00:48 -- host/discovery.sh@118 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:29.742 04:00:48 -- host/discovery.sh@119 -- # get_bdev_list 00:28:29.742 04:00:48 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:29.742 04:00:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:29.742 04:00:48 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:29.742 04:00:48 -- common/autotest_common.sh@10 -- # set +x 00:28:29.742 04:00:48 -- host/discovery.sh@55 -- # sort 00:28:29.742 04:00:48 -- host/discovery.sh@55 -- # xargs 00:28:29.742 04:00:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:29.742 04:00:48 -- host/discovery.sh@119 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:28:29.742 04:00:48 -- host/discovery.sh@120 -- # get_subsystem_paths nvme0 00:28:29.742 04:00:48 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:28:29.742 04:00:48 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:28:29.742 04:00:48 -- 
common/autotest_common.sh@551 -- # xtrace_disable 00:28:29.742 04:00:48 -- common/autotest_common.sh@10 -- # set +x 00:28:29.742 04:00:48 -- host/discovery.sh@63 -- # sort -n 00:28:29.742 04:00:48 -- host/discovery.sh@63 -- # xargs 00:28:29.742 04:00:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:29.742 04:00:48 -- host/discovery.sh@120 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:28:29.742 04:00:48 -- host/discovery.sh@121 -- # get_notification_count 00:28:29.742 04:00:48 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:28:29.742 04:00:48 -- host/discovery.sh@74 -- # jq '. | length' 00:28:29.742 04:00:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:29.742 04:00:48 -- common/autotest_common.sh@10 -- # set +x 00:28:29.742 04:00:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:29.742 04:00:48 -- host/discovery.sh@74 -- # notification_count=0 00:28:29.742 04:00:48 -- host/discovery.sh@75 -- # notify_id=2 00:28:29.742 04:00:48 -- host/discovery.sh@122 -- # [[ 0 == 0 ]] 00:28:29.742 04:00:48 -- host/discovery.sh@126 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:29.742 04:00:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:29.742 04:00:48 -- common/autotest_common.sh@10 -- # set +x 00:28:29.742 [2024-07-14 04:00:48.480356] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:28:29.742 [2024-07-14 04:00:48.480390] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:28:29.742 [2024-07-14 04:00:48.482943] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:29.742 [2024-07-14 04:00:48.482973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.742 [2024-07-14 04:00:48.482990] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:29.742 [2024-07-14 04:00:48.483003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.742 [2024-07-14 04:00:48.483017] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:29.742 [2024-07-14 04:00:48.483030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.742 [2024-07-14 04:00:48.483044] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:29.742 [2024-07-14 04:00:48.483057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.742 [2024-07-14 04:00:48.483071] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2119b60 is same with the state(5) to be set 00:28:29.742 04:00:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:29.742 04:00:48 -- host/discovery.sh@127 -- # sleep 1 00:28:29.742 [2024-07-14 04:00:48.492931] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2119b60 (9): Bad file descriptor 00:28:29.742 [2024-07-14 04:00:48.502979] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: 
[nqn.2016-06.io.spdk:cnode0] resetting controller 00:28:29.742 [2024-07-14 04:00:48.503223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.742 [2024-07-14 04:00:48.503468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.742 [2024-07-14 04:00:48.503497] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2119b60 with addr=10.0.0.2, port=4420 00:28:29.742 [2024-07-14 04:00:48.503516] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2119b60 is same with the state(5) to be set 00:28:29.742 [2024-07-14 04:00:48.503541] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2119b60 (9): Bad file descriptor 00:28:29.742 [2024-07-14 04:00:48.503592] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:28:29.743 [2024-07-14 04:00:48.503614] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:28:29.743 [2024-07-14 04:00:48.503631] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:28:29.743 [2024-07-14 04:00:48.503655] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:29.743 [2024-07-14 04:00:48.513064] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:28:29.743 [2024-07-14 04:00:48.513316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.743 [2024-07-14 04:00:48.513520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.743 [2024-07-14 04:00:48.513548] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2119b60 with addr=10.0.0.2, port=4420 00:28:29.743 [2024-07-14 04:00:48.513572] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2119b60 is same with the state(5) to be set 00:28:29.743 [2024-07-14 04:00:48.513597] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2119b60 (9): Bad file descriptor 00:28:29.743 [2024-07-14 04:00:48.513634] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:28:29.743 [2024-07-14 04:00:48.513653] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:28:29.743 [2024-07-14 04:00:48.513668] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:28:29.743 [2024-07-14 04:00:48.513690] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:29.743 [2024-07-14 04:00:48.523148] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:28:29.743 [2024-07-14 04:00:48.523412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.743 [2024-07-14 04:00:48.523597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.743 [2024-07-14 04:00:48.523622] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2119b60 with addr=10.0.0.2, port=4420 00:28:29.743 [2024-07-14 04:00:48.523637] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2119b60 is same with the state(5) to be set 00:28:29.743 [2024-07-14 04:00:48.523659] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2119b60 (9): Bad file descriptor 00:28:29.743 [2024-07-14 04:00:48.523729] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:28:29.743 [2024-07-14 04:00:48.523750] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:28:29.743 [2024-07-14 04:00:48.523764] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:28:29.743 [2024-07-14 04:00:48.523797] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:29.743 [2024-07-14 04:00:48.533222] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:28:29.743 [2024-07-14 04:00:48.533484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.743 [2024-07-14 04:00:48.533694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.743 [2024-07-14 04:00:48.533720] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2119b60 with addr=10.0.0.2, port=4420 00:28:29.743 [2024-07-14 04:00:48.533735] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2119b60 is same with the state(5) to be set 00:28:29.743 [2024-07-14 04:00:48.533757] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2119b60 (9): Bad file descriptor 00:28:29.743 [2024-07-14 04:00:48.533790] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:28:29.743 [2024-07-14 04:00:48.533807] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:28:29.743 [2024-07-14 04:00:48.533820] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:28:29.743 [2024-07-14 04:00:48.533839] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:29.743 [2024-07-14 04:00:48.543302] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:28:29.743 [2024-07-14 04:00:48.543564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.743 [2024-07-14 04:00:48.543768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.743 [2024-07-14 04:00:48.543796] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2119b60 with addr=10.0.0.2, port=4420 00:28:29.743 [2024-07-14 04:00:48.543814] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2119b60 is same with the state(5) to be set 00:28:29.743 [2024-07-14 04:00:48.543844] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2119b60 (9): Bad file descriptor 00:28:29.743 [2024-07-14 04:00:48.543920] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:28:29.743 [2024-07-14 04:00:48.543940] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:28:29.743 [2024-07-14 04:00:48.543954] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:28:29.743 [2024-07-14 04:00:48.543973] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:29.743 [2024-07-14 04:00:48.553380] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:28:29.743 [2024-07-14 04:00:48.553599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.743 [2024-07-14 04:00:48.553827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.743 [2024-07-14 04:00:48.553856] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2119b60 with addr=10.0.0.2, port=4420 00:28:29.743 [2024-07-14 04:00:48.553884] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2119b60 is same with the state(5) to be set 00:28:29.743 [2024-07-14 04:00:48.553923] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2119b60 (9): Bad file descriptor 00:28:29.743 [2024-07-14 04:00:48.553956] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:28:29.743 [2024-07-14 04:00:48.553973] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:28:29.743 [2024-07-14 04:00:48.553987] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:28:29.743 [2024-07-14 04:00:48.554006] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
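The connect() failures with errno = 111 above, and the further retries that follow, are ECONNREFUSED responses and are expected at this point: the 4420 listener was just removed while the attached controller still had 10.0.0.2:4420 as a path, so reconnect attempts to that port are refused until the discovery poller drops the 4420 path and only 4421 remains. A minimal way to watch the surviving paths from the host socket, using the same RPC and jq filter the test's get_subsystem_paths helper wraps (the socket path and controller name are specific to this run):
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 \
        | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n    # expected to report only 4421 once 4420 is dropped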
00:28:29.743 [2024-07-14 04:00:48.563459] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:28:29.743 [2024-07-14 04:00:48.563707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.743 [2024-07-14 04:00:48.563914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.743 [2024-07-14 04:00:48.563949] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2119b60 with addr=10.0.0.2, port=4420 00:28:29.743 [2024-07-14 04:00:48.563966] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2119b60 is same with the state(5) to be set 00:28:29.743 [2024-07-14 04:00:48.563988] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2119b60 (9): Bad file descriptor 00:28:29.743 [2024-07-14 04:00:48.564031] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:28:29.743 [2024-07-14 04:00:48.564050] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:28:29.743 [2024-07-14 04:00:48.564064] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:28:29.743 [2024-07-14 04:00:48.564083] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:29.743 [2024-07-14 04:00:48.569037] bdev_nvme.c:6546:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:28:29.743 [2024-07-14 04:00:48.569065] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:28:30.692 04:00:49 -- host/discovery.sh@128 -- # get_subsystem_names 00:28:30.692 04:00:49 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:28:30.692 04:00:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:30.692 04:00:49 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:28:30.692 04:00:49 -- common/autotest_common.sh@10 -- # set +x 00:28:30.692 04:00:49 -- host/discovery.sh@59 -- # sort 00:28:30.692 04:00:49 -- host/discovery.sh@59 -- # xargs 00:28:30.692 04:00:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:30.692 04:00:49 -- host/discovery.sh@128 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:30.692 04:00:49 -- host/discovery.sh@129 -- # get_bdev_list 00:28:30.692 04:00:49 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:30.692 04:00:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:30.692 04:00:49 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:30.692 04:00:49 -- common/autotest_common.sh@10 -- # set +x 00:28:30.692 04:00:49 -- host/discovery.sh@55 -- # sort 00:28:30.692 04:00:49 -- host/discovery.sh@55 -- # xargs 00:28:30.692 04:00:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:30.692 04:00:49 -- host/discovery.sh@129 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:28:30.692 04:00:49 -- host/discovery.sh@130 -- # get_subsystem_paths nvme0 00:28:30.692 04:00:49 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:28:30.692 04:00:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:30.692 04:00:49 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:28:30.692 04:00:49 -- common/autotest_common.sh@10 -- # set +x 00:28:30.692 04:00:49 -- 
host/discovery.sh@63 -- # sort -n 00:28:30.692 04:00:49 -- host/discovery.sh@63 -- # xargs 00:28:30.692 04:00:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:30.692 04:00:49 -- host/discovery.sh@130 -- # [[ 4421 == \4\4\2\1 ]] 00:28:30.692 04:00:49 -- host/discovery.sh@131 -- # get_notification_count 00:28:30.692 04:00:49 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:28:30.692 04:00:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:30.692 04:00:49 -- host/discovery.sh@74 -- # jq '. | length' 00:28:30.692 04:00:49 -- common/autotest_common.sh@10 -- # set +x 00:28:30.692 04:00:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:30.951 04:00:49 -- host/discovery.sh@74 -- # notification_count=0 00:28:30.951 04:00:49 -- host/discovery.sh@75 -- # notify_id=2 00:28:30.951 04:00:49 -- host/discovery.sh@132 -- # [[ 0 == 0 ]] 00:28:30.951 04:00:49 -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:28:30.951 04:00:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:30.951 04:00:49 -- common/autotest_common.sh@10 -- # set +x 00:28:30.951 04:00:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:30.951 04:00:49 -- host/discovery.sh@135 -- # sleep 1 00:28:31.948 04:00:50 -- host/discovery.sh@136 -- # get_subsystem_names 00:28:31.948 04:00:50 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:28:31.948 04:00:50 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:28:31.948 04:00:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:31.948 04:00:50 -- common/autotest_common.sh@10 -- # set +x 00:28:31.948 04:00:50 -- host/discovery.sh@59 -- # sort 00:28:31.948 04:00:50 -- host/discovery.sh@59 -- # xargs 00:28:31.948 04:00:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:31.948 04:00:50 -- host/discovery.sh@136 -- # [[ '' == '' ]] 00:28:31.948 04:00:50 -- host/discovery.sh@137 -- # get_bdev_list 00:28:31.948 04:00:50 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:31.948 04:00:50 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:31.948 04:00:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:31.948 04:00:50 -- common/autotest_common.sh@10 -- # set +x 00:28:31.948 04:00:50 -- host/discovery.sh@55 -- # sort 00:28:31.948 04:00:50 -- host/discovery.sh@55 -- # xargs 00:28:31.948 04:00:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:31.948 04:00:50 -- host/discovery.sh@137 -- # [[ '' == '' ]] 00:28:31.948 04:00:50 -- host/discovery.sh@138 -- # get_notification_count 00:28:31.948 04:00:50 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:28:31.948 04:00:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:31.948 04:00:50 -- host/discovery.sh@74 -- # jq '. 
| length' 00:28:31.948 04:00:50 -- common/autotest_common.sh@10 -- # set +x 00:28:31.948 04:00:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:31.948 04:00:50 -- host/discovery.sh@74 -- # notification_count=2 00:28:31.948 04:00:50 -- host/discovery.sh@75 -- # notify_id=4 00:28:31.948 04:00:50 -- host/discovery.sh@139 -- # [[ 2 == 2 ]] 00:28:31.949 04:00:50 -- host/discovery.sh@142 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:28:31.949 04:00:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:31.949 04:00:50 -- common/autotest_common.sh@10 -- # set +x 00:28:33.325 [2024-07-14 04:00:51.846249] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:28:33.325 [2024-07-14 04:00:51.846279] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:28:33.325 [2024-07-14 04:00:51.846305] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:28:33.325 [2024-07-14 04:00:51.972734] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:28:33.325 [2024-07-14 04:00:52.079051] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:28:33.325 [2024-07-14 04:00:52.079086] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:28:33.325 04:00:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:33.325 04:00:52 -- host/discovery.sh@144 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:28:33.325 04:00:52 -- common/autotest_common.sh@640 -- # local es=0 00:28:33.325 04:00:52 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:28:33.325 04:00:52 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:28:33.325 04:00:52 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:28:33.325 04:00:52 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:28:33.325 04:00:52 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:28:33.325 04:00:52 -- common/autotest_common.sh@643 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:28:33.325 04:00:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:33.325 04:00:52 -- common/autotest_common.sh@10 -- # set +x 00:28:33.325 request: 00:28:33.325 { 00:28:33.325 "name": "nvme", 00:28:33.325 "trtype": "tcp", 00:28:33.325 "traddr": "10.0.0.2", 00:28:33.325 "hostnqn": "nqn.2021-12.io.spdk:test", 00:28:33.325 "adrfam": "ipv4", 00:28:33.325 "trsvcid": "8009", 00:28:33.325 "wait_for_attach": true, 00:28:33.325 "method": "bdev_nvme_start_discovery", 00:28:33.325 "req_id": 1 00:28:33.326 } 00:28:33.326 Got JSON-RPC error response 00:28:33.326 response: 00:28:33.326 { 00:28:33.326 "code": -17, 00:28:33.326 "message": "File exists" 00:28:33.326 } 00:28:33.326 04:00:52 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:28:33.326 04:00:52 -- common/autotest_common.sh@643 -- # es=1 00:28:33.326 04:00:52 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:28:33.326 04:00:52 -- 
common/autotest_common.sh@662 -- # [[ -n '' ]] 00:28:33.326 04:00:52 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:28:33.326 04:00:52 -- host/discovery.sh@146 -- # get_discovery_ctrlrs 00:28:33.326 04:00:52 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:28:33.326 04:00:52 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:28:33.326 04:00:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:33.326 04:00:52 -- common/autotest_common.sh@10 -- # set +x 00:28:33.326 04:00:52 -- host/discovery.sh@67 -- # sort 00:28:33.326 04:00:52 -- host/discovery.sh@67 -- # xargs 00:28:33.326 04:00:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:33.326 04:00:52 -- host/discovery.sh@146 -- # [[ nvme == \n\v\m\e ]] 00:28:33.326 04:00:52 -- host/discovery.sh@147 -- # get_bdev_list 00:28:33.326 04:00:52 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:33.326 04:00:52 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:33.326 04:00:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:33.326 04:00:52 -- common/autotest_common.sh@10 -- # set +x 00:28:33.326 04:00:52 -- host/discovery.sh@55 -- # sort 00:28:33.326 04:00:52 -- host/discovery.sh@55 -- # xargs 00:28:33.326 04:00:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:33.326 04:00:52 -- host/discovery.sh@147 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:28:33.326 04:00:52 -- host/discovery.sh@150 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:28:33.326 04:00:52 -- common/autotest_common.sh@640 -- # local es=0 00:28:33.326 04:00:52 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:28:33.326 04:00:52 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:28:33.326 04:00:52 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:28:33.326 04:00:52 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:28:33.326 04:00:52 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:28:33.326 04:00:52 -- common/autotest_common.sh@643 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:28:33.326 04:00:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:33.326 04:00:52 -- common/autotest_common.sh@10 -- # set +x 00:28:33.326 request: 00:28:33.326 { 00:28:33.326 "name": "nvme_second", 00:28:33.326 "trtype": "tcp", 00:28:33.326 "traddr": "10.0.0.2", 00:28:33.326 "hostnqn": "nqn.2021-12.io.spdk:test", 00:28:33.326 "adrfam": "ipv4", 00:28:33.326 "trsvcid": "8009", 00:28:33.326 "wait_for_attach": true, 00:28:33.326 "method": "bdev_nvme_start_discovery", 00:28:33.326 "req_id": 1 00:28:33.326 } 00:28:33.326 Got JSON-RPC error response 00:28:33.326 response: 00:28:33.326 { 00:28:33.326 "code": -17, 00:28:33.326 "message": "File exists" 00:28:33.326 } 00:28:33.326 04:00:52 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:28:33.326 04:00:52 -- common/autotest_common.sh@643 -- # es=1 00:28:33.326 04:00:52 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:28:33.326 04:00:52 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:28:33.326 04:00:52 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:28:33.326 04:00:52 -- host/discovery.sh@152 -- # get_discovery_ctrlrs 00:28:33.326 
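Both "File exists" responses above are the expected negative results: once a discovery service is already attached to 10.0.0.2:8009 on this host socket, bdev_nvme_start_discovery rejects a second registration for the same discovery endpoint (JSON-RPC error -17), whichever -b name is used. A rough standalone version of the same check, with the socket path, address and host NQN copied from this run; the non-zero exit status is exactly what the NOT wrapper asserts:
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second \
        -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w \
        || echo 'duplicate discovery registration rejected as expected'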
04:00:52 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:28:33.326 04:00:52 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:28:33.326 04:00:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:33.326 04:00:52 -- common/autotest_common.sh@10 -- # set +x 00:28:33.326 04:00:52 -- host/discovery.sh@67 -- # sort 00:28:33.326 04:00:52 -- host/discovery.sh@67 -- # xargs 00:28:33.326 04:00:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:33.326 04:00:52 -- host/discovery.sh@152 -- # [[ nvme == \n\v\m\e ]] 00:28:33.326 04:00:52 -- host/discovery.sh@153 -- # get_bdev_list 00:28:33.326 04:00:52 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:33.326 04:00:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:33.326 04:00:52 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:33.326 04:00:52 -- common/autotest_common.sh@10 -- # set +x 00:28:33.326 04:00:52 -- host/discovery.sh@55 -- # sort 00:28:33.326 04:00:52 -- host/discovery.sh@55 -- # xargs 00:28:33.326 04:00:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:33.586 04:00:52 -- host/discovery.sh@153 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:28:33.586 04:00:52 -- host/discovery.sh@156 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:28:33.586 04:00:52 -- common/autotest_common.sh@640 -- # local es=0 00:28:33.586 04:00:52 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:28:33.586 04:00:52 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:28:33.586 04:00:52 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:28:33.586 04:00:52 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:28:33.586 04:00:52 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:28:33.586 04:00:52 -- common/autotest_common.sh@643 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:28:33.586 04:00:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:33.586 04:00:52 -- common/autotest_common.sh@10 -- # set +x 00:28:34.533 [2024-07-14 04:00:53.282555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.533 [2024-07-14 04:00:53.282770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.533 [2024-07-14 04:00:53.282796] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22ed790 with addr=10.0.0.2, port=8010 00:28:34.533 [2024-07-14 04:00:53.282826] nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:28:34.533 [2024-07-14 04:00:53.282842] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:28:34.533 [2024-07-14 04:00:53.282855] bdev_nvme.c:6821:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:28:35.470 [2024-07-14 04:00:54.285005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.470 [2024-07-14 04:00:54.285297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.470 [2024-07-14 04:00:54.285334] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22ed120 with addr=10.0.0.2, port=8010 00:28:35.470 
[2024-07-14 04:00:54.285367] nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:28:35.470 [2024-07-14 04:00:54.285384] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:28:35.470 [2024-07-14 04:00:54.285399] bdev_nvme.c:6821:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:28:36.406 [2024-07-14 04:00:55.287127] bdev_nvme.c:6802:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:28:36.406 request: 00:28:36.406 { 00:28:36.406 "name": "nvme_second", 00:28:36.406 "trtype": "tcp", 00:28:36.406 "traddr": "10.0.0.2", 00:28:36.406 "hostnqn": "nqn.2021-12.io.spdk:test", 00:28:36.406 "adrfam": "ipv4", 00:28:36.406 "trsvcid": "8010", 00:28:36.406 "attach_timeout_ms": 3000, 00:28:36.406 "method": "bdev_nvme_start_discovery", 00:28:36.406 "req_id": 1 00:28:36.406 } 00:28:36.406 Got JSON-RPC error response 00:28:36.406 response: 00:28:36.406 { 00:28:36.406 "code": -110, 00:28:36.406 "message": "Connection timed out" 00:28:36.406 } 00:28:36.406 04:00:55 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:28:36.406 04:00:55 -- common/autotest_common.sh@643 -- # es=1 00:28:36.406 04:00:55 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:28:36.406 04:00:55 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:28:36.406 04:00:55 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:28:36.406 04:00:55 -- host/discovery.sh@158 -- # get_discovery_ctrlrs 00:28:36.406 04:00:55 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:28:36.406 04:00:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:36.406 04:00:55 -- common/autotest_common.sh@10 -- # set +x 00:28:36.406 04:00:55 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:28:36.406 04:00:55 -- host/discovery.sh@67 -- # sort 00:28:36.406 04:00:55 -- host/discovery.sh@67 -- # xargs 00:28:36.406 04:00:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:36.406 04:00:55 -- host/discovery.sh@158 -- # [[ nvme == \n\v\m\e ]] 00:28:36.406 04:00:55 -- host/discovery.sh@160 -- # trap - SIGINT SIGTERM EXIT 00:28:36.406 04:00:55 -- host/discovery.sh@162 -- # kill 2493498 00:28:36.406 04:00:55 -- host/discovery.sh@163 -- # nvmftestfini 00:28:36.406 04:00:55 -- nvmf/common.sh@476 -- # nvmfcleanup 00:28:36.406 04:00:55 -- nvmf/common.sh@116 -- # sync 00:28:36.406 04:00:55 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:28:36.406 04:00:55 -- nvmf/common.sh@119 -- # set +e 00:28:36.406 04:00:55 -- nvmf/common.sh@120 -- # for i in {1..20} 00:28:36.406 04:00:55 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:28:36.406 rmmod nvme_tcp 00:28:36.664 rmmod nvme_fabrics 00:28:36.664 rmmod nvme_keyring 00:28:36.664 04:00:55 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:28:36.664 04:00:55 -- nvmf/common.sh@123 -- # set -e 00:28:36.664 04:00:55 -- nvmf/common.sh@124 -- # return 0 00:28:36.664 04:00:55 -- nvmf/common.sh@477 -- # '[' -n 2493403 ']' 00:28:36.664 04:00:55 -- nvmf/common.sh@478 -- # killprocess 2493403 00:28:36.664 04:00:55 -- common/autotest_common.sh@926 -- # '[' -z 2493403 ']' 00:28:36.664 04:00:55 -- common/autotest_common.sh@930 -- # kill -0 2493403 00:28:36.664 04:00:55 -- common/autotest_common.sh@931 -- # uname 00:28:36.664 04:00:55 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:28:36.664 04:00:55 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2493403 00:28:36.664 04:00:55 -- common/autotest_common.sh@932 -- # 
process_name=reactor_1 00:28:36.664 04:00:55 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:28:36.664 04:00:55 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2493403' 00:28:36.664 killing process with pid 2493403 00:28:36.664 04:00:55 -- common/autotest_common.sh@945 -- # kill 2493403 00:28:36.664 04:00:55 -- common/autotest_common.sh@950 -- # wait 2493403 00:28:36.922 04:00:55 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:28:36.922 04:00:55 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:28:36.922 04:00:55 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:28:36.922 04:00:55 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:36.922 04:00:55 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:28:36.922 04:00:55 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:36.922 04:00:55 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:36.922 04:00:55 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:38.828 04:00:57 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:28:38.828 00:28:38.828 real 0m17.327s 00:28:38.828 user 0m26.977s 00:28:38.828 sys 0m2.877s 00:28:38.828 04:00:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:38.828 04:00:57 -- common/autotest_common.sh@10 -- # set +x 00:28:38.828 ************************************ 00:28:38.828 END TEST nvmf_discovery 00:28:38.828 ************************************ 00:28:38.828 04:00:57 -- nvmf/nvmf.sh@102 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:28:38.828 04:00:57 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:28:38.828 04:00:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:38.828 04:00:57 -- common/autotest_common.sh@10 -- # set +x 00:28:38.828 ************************************ 00:28:38.828 START TEST nvmf_discovery_remove_ifc 00:28:38.828 ************************************ 00:28:38.828 04:00:57 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:28:38.828 * Looking for test storage... 
00:28:39.087 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:39.087 04:00:57 -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:39.087 04:00:57 -- nvmf/common.sh@7 -- # uname -s 00:28:39.087 04:00:57 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:39.087 04:00:57 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:39.087 04:00:57 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:39.087 04:00:57 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:39.088 04:00:57 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:39.088 04:00:57 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:39.088 04:00:57 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:39.088 04:00:57 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:39.088 04:00:57 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:39.088 04:00:57 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:39.088 04:00:57 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:39.088 04:00:57 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:39.088 04:00:57 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:39.088 04:00:57 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:39.088 04:00:57 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:39.088 04:00:57 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:39.088 04:00:57 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:39.088 04:00:57 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:39.088 04:00:57 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:39.088 04:00:57 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:39.088 04:00:57 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:39.088 04:00:57 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:39.088 04:00:57 -- paths/export.sh@5 -- # export PATH 00:28:39.088 04:00:57 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:39.088 04:00:57 -- nvmf/common.sh@46 -- # : 0 00:28:39.088 04:00:57 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:28:39.088 04:00:57 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:28:39.088 04:00:57 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:28:39.088 04:00:57 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:39.088 04:00:57 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:39.088 04:00:57 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:28:39.088 04:00:57 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:28:39.088 04:00:57 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:28:39.088 04:00:57 -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:28:39.088 04:00:57 -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:28:39.088 04:00:57 -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:28:39.088 04:00:57 -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:28:39.088 04:00:57 -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:28:39.088 04:00:57 -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:28:39.088 04:00:57 -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:28:39.088 04:00:57 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:28:39.088 04:00:57 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:39.088 04:00:57 -- nvmf/common.sh@436 -- # prepare_net_devs 00:28:39.088 04:00:57 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:28:39.088 04:00:57 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:28:39.088 04:00:57 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:39.088 04:00:57 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:39.088 04:00:57 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:39.088 04:00:57 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:28:39.088 04:00:57 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:28:39.088 04:00:57 -- nvmf/common.sh@284 -- # xtrace_disable 00:28:39.088 04:00:57 -- common/autotest_common.sh@10 -- # set +x 00:28:40.993 04:00:59 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:28:40.993 04:00:59 -- nvmf/common.sh@290 -- # pci_devs=() 00:28:40.993 04:00:59 -- nvmf/common.sh@290 -- # local -a pci_devs 00:28:40.993 04:00:59 
-- nvmf/common.sh@291 -- # pci_net_devs=() 00:28:40.993 04:00:59 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:28:40.993 04:00:59 -- nvmf/common.sh@292 -- # pci_drivers=() 00:28:40.993 04:00:59 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:28:40.993 04:00:59 -- nvmf/common.sh@294 -- # net_devs=() 00:28:40.993 04:00:59 -- nvmf/common.sh@294 -- # local -ga net_devs 00:28:40.993 04:00:59 -- nvmf/common.sh@295 -- # e810=() 00:28:40.993 04:00:59 -- nvmf/common.sh@295 -- # local -ga e810 00:28:40.993 04:00:59 -- nvmf/common.sh@296 -- # x722=() 00:28:40.993 04:00:59 -- nvmf/common.sh@296 -- # local -ga x722 00:28:40.993 04:00:59 -- nvmf/common.sh@297 -- # mlx=() 00:28:40.993 04:00:59 -- nvmf/common.sh@297 -- # local -ga mlx 00:28:40.993 04:00:59 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:40.993 04:00:59 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:40.993 04:00:59 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:40.993 04:00:59 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:40.993 04:00:59 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:40.993 04:00:59 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:40.993 04:00:59 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:40.993 04:00:59 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:40.993 04:00:59 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:40.993 04:00:59 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:40.993 04:00:59 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:40.993 04:00:59 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:28:40.993 04:00:59 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:28:40.993 04:00:59 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:28:40.993 04:00:59 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:28:40.993 04:00:59 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:28:40.993 04:00:59 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:28:40.993 04:00:59 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:28:40.993 04:00:59 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:40.993 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:40.993 04:00:59 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:28:40.993 04:00:59 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:28:40.993 04:00:59 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:40.993 04:00:59 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:40.993 04:00:59 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:28:40.993 04:00:59 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:28:40.993 04:00:59 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:40.993 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:40.993 04:00:59 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:28:40.993 04:00:59 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:28:40.993 04:00:59 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:40.993 04:00:59 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:40.993 04:00:59 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:28:40.993 04:00:59 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:28:40.993 04:00:59 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:28:40.993 04:00:59 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:28:40.993 04:00:59 -- 
nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:28:40.993 04:00:59 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:40.993 04:00:59 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:28:40.993 04:00:59 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:40.993 04:00:59 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:40.993 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:40.993 04:00:59 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:28:40.993 04:00:59 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:28:40.993 04:00:59 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:40.993 04:00:59 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:28:40.993 04:00:59 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:40.993 04:00:59 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:40.993 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:40.993 04:00:59 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:28:40.993 04:00:59 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:28:40.993 04:00:59 -- nvmf/common.sh@402 -- # is_hw=yes 00:28:40.993 04:00:59 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:28:40.993 04:00:59 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:28:40.993 04:00:59 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:28:40.993 04:00:59 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:40.993 04:00:59 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:40.993 04:00:59 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:40.993 04:00:59 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:28:40.993 04:00:59 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:40.993 04:00:59 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:40.993 04:00:59 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:28:40.993 04:00:59 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:40.993 04:00:59 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:40.993 04:00:59 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:28:40.993 04:00:59 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:28:40.993 04:00:59 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:28:40.993 04:00:59 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:40.993 04:00:59 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:40.993 04:00:59 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:40.993 04:00:59 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:28:40.993 04:00:59 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:41.251 04:00:59 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:41.251 04:00:59 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:41.251 04:00:59 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:28:41.251 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:28:41.251 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.198 ms 00:28:41.251 00:28:41.251 --- 10.0.0.2 ping statistics --- 00:28:41.251 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:41.251 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:28:41.251 04:00:59 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:41.251 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:41.251 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.237 ms 00:28:41.251 00:28:41.251 --- 10.0.0.1 ping statistics --- 00:28:41.251 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:41.251 rtt min/avg/max/mdev = 0.237/0.237/0.237/0.000 ms 00:28:41.251 04:00:59 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:41.251 04:00:59 -- nvmf/common.sh@410 -- # return 0 00:28:41.251 04:00:59 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:28:41.251 04:00:59 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:41.251 04:00:59 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:28:41.251 04:00:59 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:28:41.251 04:00:59 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:41.251 04:00:59 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:28:41.251 04:00:59 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:28:41.251 04:01:00 -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:28:41.251 04:01:00 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:28:41.251 04:01:00 -- common/autotest_common.sh@712 -- # xtrace_disable 00:28:41.251 04:01:00 -- common/autotest_common.sh@10 -- # set +x 00:28:41.251 04:01:00 -- nvmf/common.sh@469 -- # nvmfpid=2497044 00:28:41.252 04:01:00 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:28:41.252 04:01:00 -- nvmf/common.sh@470 -- # waitforlisten 2497044 00:28:41.252 04:01:00 -- common/autotest_common.sh@819 -- # '[' -z 2497044 ']' 00:28:41.252 04:01:00 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:41.252 04:01:00 -- common/autotest_common.sh@824 -- # local max_retries=100 00:28:41.252 04:01:00 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:41.252 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:41.252 04:01:00 -- common/autotest_common.sh@828 -- # xtrace_disable 00:28:41.252 04:01:00 -- common/autotest_common.sh@10 -- # set +x 00:28:41.252 [2024-07-14 04:01:00.071301] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:28:41.252 [2024-07-14 04:01:00.071391] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:41.252 EAL: No free 2048 kB hugepages reported on node 1 00:28:41.252 [2024-07-14 04:01:00.143915] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:41.510 [2024-07-14 04:01:00.232416] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:28:41.510 [2024-07-14 04:01:00.232596] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
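At this point nvmf_tcp_init has finished wiring the physical ports for the test: the target-side NIC (cvl_0_0) was moved into its own network namespace with 10.0.0.2/24, the initiator-side NIC (cvl_0_1) kept 10.0.0.1/24 in the default namespace, and connectivity was verified with a ping in each direction. A consolidated sketch of that setup as it appears in the trace; the cvl_0_* device names are specific to this machine:
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                 # target NIC into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator side, default namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                        # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1          # target -> initiator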
00:28:41.510 [2024-07-14 04:01:00.232615] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:41.510 [2024-07-14 04:01:00.232630] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:41.510 [2024-07-14 04:01:00.232662] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:42.076 04:01:00 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:28:42.077 04:01:00 -- common/autotest_common.sh@852 -- # return 0 00:28:42.077 04:01:00 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:28:42.077 04:01:00 -- common/autotest_common.sh@718 -- # xtrace_disable 00:28:42.077 04:01:00 -- common/autotest_common.sh@10 -- # set +x 00:28:42.077 04:01:01 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:42.077 04:01:01 -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:28:42.077 04:01:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:42.077 04:01:01 -- common/autotest_common.sh@10 -- # set +x 00:28:42.335 [2024-07-14 04:01:01.019545] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:42.335 [2024-07-14 04:01:01.027682] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:28:42.335 null0 00:28:42.335 [2024-07-14 04:01:01.059680] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:42.335 04:01:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:42.335 04:01:01 -- host/discovery_remove_ifc.sh@59 -- # hostpid=2497199 00:28:42.335 04:01:01 -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:28:42.335 04:01:01 -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 2497199 /tmp/host.sock 00:28:42.335 04:01:01 -- common/autotest_common.sh@819 -- # '[' -z 2497199 ']' 00:28:42.335 04:01:01 -- common/autotest_common.sh@823 -- # local rpc_addr=/tmp/host.sock 00:28:42.335 04:01:01 -- common/autotest_common.sh@824 -- # local max_retries=100 00:28:42.335 04:01:01 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:28:42.335 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:28:42.335 04:01:01 -- common/autotest_common.sh@828 -- # xtrace_disable 00:28:42.335 04:01:01 -- common/autotest_common.sh@10 -- # set +x 00:28:42.335 [2024-07-14 04:01:01.121041] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:28:42.335 [2024-07-14 04:01:01.121108] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2497199 ] 00:28:42.335 EAL: No free 2048 kB hugepages reported on node 1 00:28:42.335 [2024-07-14 04:01:01.181509] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:42.335 [2024-07-14 04:01:01.273052] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:28:42.335 [2024-07-14 04:01:01.273222] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:42.595 04:01:01 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:28:42.595 04:01:01 -- common/autotest_common.sh@852 -- # return 0 00:28:42.595 04:01:01 -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:42.595 04:01:01 -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:28:42.595 04:01:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:42.595 04:01:01 -- common/autotest_common.sh@10 -- # set +x 00:28:42.595 04:01:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:42.595 04:01:01 -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:28:42.595 04:01:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:42.595 04:01:01 -- common/autotest_common.sh@10 -- # set +x 00:28:42.595 04:01:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:42.595 04:01:01 -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:28:42.595 04:01:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:42.595 04:01:01 -- common/autotest_common.sh@10 -- # set +x 00:28:43.971 [2024-07-14 04:01:02.479067] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:28:43.971 [2024-07-14 04:01:02.479104] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:28:43.971 [2024-07-14 04:01:02.479130] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:28:43.971 [2024-07-14 04:01:02.565414] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:28:43.971 [2024-07-14 04:01:02.628027] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:28:43.971 [2024-07-14 04:01:02.628082] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:28:43.971 [2024-07-14 04:01:02.628121] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:28:43.971 [2024-07-14 04:01:02.628166] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:28:43.971 [2024-07-14 04:01:02.628214] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:28:43.971 04:01:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:43.971 04:01:02 -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:28:43.971 04:01:02 -- host/discovery_remove_ifc.sh@33 -- # 
get_bdev_list 00:28:43.971 04:01:02 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:43.971 04:01:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:43.971 04:01:02 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:43.971 04:01:02 -- common/autotest_common.sh@10 -- # set +x 00:28:43.971 04:01:02 -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:43.971 04:01:02 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:43.971 [2024-07-14 04:01:02.635882] bdev_nvme.c:1595:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1d983f0 was disconnected and freed. delete nvme_qpair. 00:28:43.971 04:01:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:43.971 04:01:02 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:28:43.971 04:01:02 -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:28:43.971 04:01:02 -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:28:43.971 04:01:02 -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:28:43.971 04:01:02 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:43.971 04:01:02 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:43.971 04:01:02 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:43.971 04:01:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:43.971 04:01:02 -- common/autotest_common.sh@10 -- # set +x 00:28:43.971 04:01:02 -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:43.972 04:01:02 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:43.972 04:01:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:43.972 04:01:02 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:28:43.972 04:01:02 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:44.906 04:01:03 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:44.906 04:01:03 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:44.906 04:01:03 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:44.906 04:01:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:44.906 04:01:03 -- common/autotest_common.sh@10 -- # set +x 00:28:44.906 04:01:03 -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:44.906 04:01:03 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:44.906 04:01:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:44.906 04:01:03 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:28:44.906 04:01:03 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:46.279 04:01:04 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:46.279 04:01:04 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:46.279 04:01:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:46.279 04:01:04 -- common/autotest_common.sh@10 -- # set +x 00:28:46.279 04:01:04 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:46.279 04:01:04 -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:46.279 04:01:04 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:46.279 04:01:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:46.279 04:01:04 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:28:46.279 04:01:04 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:47.216 04:01:05 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:47.216 04:01:05 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s 
/tmp/host.sock bdev_get_bdevs 00:28:47.216 04:01:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:47.216 04:01:05 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:47.216 04:01:05 -- common/autotest_common.sh@10 -- # set +x 00:28:47.216 04:01:05 -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:47.216 04:01:05 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:47.216 04:01:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:47.216 04:01:05 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:28:47.216 04:01:05 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:48.181 04:01:06 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:48.181 04:01:06 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:48.181 04:01:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:48.181 04:01:06 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:48.181 04:01:06 -- common/autotest_common.sh@10 -- # set +x 00:28:48.181 04:01:06 -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:48.181 04:01:06 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:48.181 04:01:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:48.181 04:01:06 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:28:48.181 04:01:06 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:49.122 04:01:07 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:49.122 04:01:07 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:49.122 04:01:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:49.122 04:01:07 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:49.122 04:01:07 -- common/autotest_common.sh@10 -- # set +x 00:28:49.122 04:01:07 -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:49.122 04:01:07 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:49.122 04:01:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:49.122 04:01:07 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:28:49.122 04:01:07 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:49.382 [2024-07-14 04:01:08.069130] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:28:49.382 [2024-07-14 04:01:08.069193] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:49.382 [2024-07-14 04:01:08.069229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.382 [2024-07-14 04:01:08.069246] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:49.382 [2024-07-14 04:01:08.069260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.382 [2024-07-14 04:01:08.069290] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:49.382 [2024-07-14 04:01:08.069307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.382 [2024-07-14 04:01:08.069323] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 
cdw11:00000000 00:28:49.382 [2024-07-14 04:01:08.069339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.382 [2024-07-14 04:01:08.069355] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:28:49.382 [2024-07-14 04:01:08.069371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.382 [2024-07-14 04:01:08.069387] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d5e850 is same with the state(5) to be set 00:28:49.382 [2024-07-14 04:01:08.079162] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d5e850 (9): Bad file descriptor 00:28:49.382 [2024-07-14 04:01:08.089198] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:28:50.319 04:01:08 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:50.319 04:01:08 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:50.319 04:01:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:50.319 04:01:08 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:50.319 04:01:08 -- common/autotest_common.sh@10 -- # set +x 00:28:50.319 04:01:08 -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:50.319 04:01:08 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:50.319 [2024-07-14 04:01:09.149904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:28:51.256 [2024-07-14 04:01:10.173905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:28:51.256 [2024-07-14 04:01:10.173977] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d5e850 with addr=10.0.0.2, port=4420 00:28:51.256 [2024-07-14 04:01:10.174017] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d5e850 is same with the state(5) to be set 00:28:51.256 [2024-07-14 04:01:10.174058] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:28:51.256 [2024-07-14 04:01:10.174079] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:28:51.256 [2024-07-14 04:01:10.174094] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:28:51.256 [2024-07-14 04:01:10.174113] nvme_ctrlr.c:1017:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:28:51.256 [2024-07-14 04:01:10.174574] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d5e850 (9): Bad file descriptor 00:28:51.256 [2024-07-14 04:01:10.174621] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
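The get_bdev_list calls traced throughout this test are a thin wrapper around the bdev_get_bdevs RPC on the host app's /tmp/host.sock, and wait_for_bdev simply polls it once per second. A rough reconstruction of that polling, under the assumption that the real helpers in host/discovery_remove_ifc.sh do little beyond what the trace shows:

    # Reconstruction of the polling seen in the trace, not the exact helpers.
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    get_bdev_list() {
        # bdev_get_bdevs returns JSON; keep only the names, sorted, on one line.
        "$RPC" -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

    wait_for_bdev() {
        # Expected list is "nvme0n1" while the path is healthy and the empty
        # string once the interface removal has torn the controller down.
        local expected=$1
        while [[ "$(get_bdev_list)" != "$expected" ]]; do
            sleep 1
        done
    }

The one-second sleeps between iterations are what produce the repeated bdev_get_bdevs/jq/sort/xargs blocks above.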
00:28:51.256 [2024-07-14 04:01:10.174668] bdev_nvme.c:6510:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:28:51.256 [2024-07-14 04:01:10.174711] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:51.256 [2024-07-14 04:01:10.174735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.256 [2024-07-14 04:01:10.174758] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:51.256 [2024-07-14 04:01:10.174774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.256 [2024-07-14 04:01:10.174790] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:51.256 [2024-07-14 04:01:10.174805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.256 [2024-07-14 04:01:10.174821] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:51.256 [2024-07-14 04:01:10.174837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.256 [2024-07-14 04:01:10.174853] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:28:51.256 [2024-07-14 04:01:10.174889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.256 [2024-07-14 04:01:10.174925] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
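This failure burst is the behaviour under test: discovery was started with deliberately short timeouts, so once cvl_0_0 is detached the reconnect attempt above is expected to fail, the controller goes to the failed state, and nvme0n1 drops out of the bdev list. The RPC that configured this, issued earlier via rpc_cmd against /tmp/host.sock, in plain rpc.py form:

    # Attach through the discovery service on port 8009; give a lost path
    # roughly two seconds before the controller is abandoned.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /tmp/host.sock \
        bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 \
        -q nqn.2021-12.io.spdk:test \
        --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
        --fast-io-fail-timeout-sec 1 --wait-for-attach

Once the address is re-added and cvl_0_0 is brought back up, the still-running discovery service re-attaches the subsystem as a new controller, which is why the later wait is for nvme1n1 rather than nvme0n1.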
00:28:51.256 [2024-07-14 04:01:10.175054] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d5ec60 (9): Bad file descriptor 00:28:51.256 [2024-07-14 04:01:10.176074] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:28:51.256 [2024-07-14 04:01:10.176097] nvme_ctrlr.c:1136:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:28:51.256 04:01:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:51.256 04:01:10 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:28:51.256 04:01:10 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:52.633 04:01:11 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:52.633 04:01:11 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:52.633 04:01:11 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:52.633 04:01:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:52.633 04:01:11 -- common/autotest_common.sh@10 -- # set +x 00:28:52.633 04:01:11 -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:52.633 04:01:11 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:52.633 04:01:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:52.633 04:01:11 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:28:52.633 04:01:11 -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:52.633 04:01:11 -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:52.633 04:01:11 -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:28:52.633 04:01:11 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:52.633 04:01:11 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:52.633 04:01:11 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:52.633 04:01:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:52.633 04:01:11 -- common/autotest_common.sh@10 -- # set +x 00:28:52.633 04:01:11 -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:52.633 04:01:11 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:52.633 04:01:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:52.633 04:01:11 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:28:52.633 04:01:11 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:53.567 [2024-07-14 04:01:12.234817] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:28:53.567 [2024-07-14 04:01:12.234845] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:28:53.567 [2024-07-14 04:01:12.234878] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:28:53.567 04:01:12 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:53.567 04:01:12 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:53.567 04:01:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:53.567 04:01:12 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:53.567 04:01:12 -- common/autotest_common.sh@10 -- # set +x 00:28:53.567 04:01:12 -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:53.567 04:01:12 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:53.567 04:01:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:53.567 [2024-07-14 04:01:12.361325] 
bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:28:53.567 04:01:12 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:28:53.567 04:01:12 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:53.567 [2024-07-14 04:01:12.423136] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:28:53.567 [2024-07-14 04:01:12.423194] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:28:53.567 [2024-07-14 04:01:12.423243] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:28:53.567 [2024-07-14 04:01:12.423268] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:28:53.567 [2024-07-14 04:01:12.423283] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:28:53.567 [2024-07-14 04:01:12.432097] bdev_nvme.c:1595:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1da2aa0 was disconnected and freed. delete nvme_qpair. 00:28:54.500 04:01:13 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:54.500 04:01:13 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:54.500 04:01:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:54.500 04:01:13 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:54.500 04:01:13 -- common/autotest_common.sh@10 -- # set +x 00:28:54.500 04:01:13 -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:54.500 04:01:13 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:54.500 04:01:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:54.500 04:01:13 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:28:54.500 04:01:13 -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:28:54.500 04:01:13 -- host/discovery_remove_ifc.sh@90 -- # killprocess 2497199 00:28:54.500 04:01:13 -- common/autotest_common.sh@926 -- # '[' -z 2497199 ']' 00:28:54.500 04:01:13 -- common/autotest_common.sh@930 -- # kill -0 2497199 00:28:54.500 04:01:13 -- common/autotest_common.sh@931 -- # uname 00:28:54.500 04:01:13 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:28:54.500 04:01:13 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2497199 00:28:54.757 04:01:13 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:28:54.757 04:01:13 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:28:54.757 04:01:13 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2497199' 00:28:54.757 killing process with pid 2497199 00:28:54.757 04:01:13 -- common/autotest_common.sh@945 -- # kill 2497199 00:28:54.757 04:01:13 -- common/autotest_common.sh@950 -- # wait 2497199 00:28:54.757 04:01:13 -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:28:54.757 04:01:13 -- nvmf/common.sh@476 -- # nvmfcleanup 00:28:54.757 04:01:13 -- nvmf/common.sh@116 -- # sync 00:28:54.757 04:01:13 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:28:54.757 04:01:13 -- nvmf/common.sh@119 -- # set +e 00:28:54.757 04:01:13 -- nvmf/common.sh@120 -- # for i in {1..20} 00:28:54.757 04:01:13 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:28:54.757 rmmod nvme_tcp 00:28:54.757 rmmod nvme_fabrics 00:28:54.757 rmmod nvme_keyring 00:28:55.015 04:01:13 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:28:55.015 04:01:13 -- nvmf/common.sh@123 -- # set -e 00:28:55.015 04:01:13 -- 
nvmf/common.sh@124 -- # return 0 00:28:55.015 04:01:13 -- nvmf/common.sh@477 -- # '[' -n 2497044 ']' 00:28:55.015 04:01:13 -- nvmf/common.sh@478 -- # killprocess 2497044 00:28:55.015 04:01:13 -- common/autotest_common.sh@926 -- # '[' -z 2497044 ']' 00:28:55.015 04:01:13 -- common/autotest_common.sh@930 -- # kill -0 2497044 00:28:55.015 04:01:13 -- common/autotest_common.sh@931 -- # uname 00:28:55.015 04:01:13 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:28:55.015 04:01:13 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2497044 00:28:55.015 04:01:13 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:28:55.015 04:01:13 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:28:55.015 04:01:13 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2497044' 00:28:55.015 killing process with pid 2497044 00:28:55.015 04:01:13 -- common/autotest_common.sh@945 -- # kill 2497044 00:28:55.015 04:01:13 -- common/autotest_common.sh@950 -- # wait 2497044 00:28:55.272 04:01:13 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:28:55.272 04:01:13 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:28:55.272 04:01:13 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:28:55.272 04:01:13 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:55.272 04:01:13 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:28:55.272 04:01:13 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:55.272 04:01:13 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:55.272 04:01:13 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:57.177 04:01:16 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:28:57.177 00:28:57.177 real 0m18.289s 00:28:57.177 user 0m25.219s 00:28:57.177 sys 0m3.003s 00:28:57.177 04:01:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:57.177 04:01:16 -- common/autotest_common.sh@10 -- # set +x 00:28:57.177 ************************************ 00:28:57.177 END TEST nvmf_discovery_remove_ifc 00:28:57.177 ************************************ 00:28:57.177 04:01:16 -- nvmf/nvmf.sh@106 -- # [[ tcp == \t\c\p ]] 00:28:57.177 04:01:16 -- nvmf/nvmf.sh@107 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:28:57.177 04:01:16 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:28:57.177 04:01:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:57.177 04:01:16 -- common/autotest_common.sh@10 -- # set +x 00:28:57.177 ************************************ 00:28:57.177 START TEST nvmf_digest 00:28:57.177 ************************************ 00:28:57.177 04:01:16 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:28:57.177 * Looking for test storage... 
00:28:57.177 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:57.177 04:01:16 -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:57.177 04:01:16 -- nvmf/common.sh@7 -- # uname -s 00:28:57.177 04:01:16 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:57.177 04:01:16 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:57.177 04:01:16 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:57.177 04:01:16 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:57.177 04:01:16 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:57.177 04:01:16 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:57.177 04:01:16 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:57.177 04:01:16 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:57.177 04:01:16 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:57.177 04:01:16 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:57.177 04:01:16 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:57.177 04:01:16 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:57.177 04:01:16 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:57.177 04:01:16 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:57.177 04:01:16 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:57.177 04:01:16 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:57.177 04:01:16 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:57.177 04:01:16 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:57.177 04:01:16 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:57.177 04:01:16 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:57.177 04:01:16 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:57.177 04:01:16 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:57.177 04:01:16 -- paths/export.sh@5 -- # export PATH 00:28:57.177 04:01:16 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:57.177 04:01:16 -- nvmf/common.sh@46 -- # : 0 00:28:57.177 04:01:16 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:28:57.177 04:01:16 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:28:57.177 04:01:16 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:28:57.177 04:01:16 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:57.177 04:01:16 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:57.177 04:01:16 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:28:57.177 04:01:16 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:28:57.177 04:01:16 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:28:57.177 04:01:16 -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:28:57.177 04:01:16 -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:28:57.177 04:01:16 -- host/digest.sh@16 -- # runtime=2 00:28:57.177 04:01:16 -- host/digest.sh@130 -- # [[ tcp != \t\c\p ]] 00:28:57.177 04:01:16 -- host/digest.sh@132 -- # nvmftestinit 00:28:57.177 04:01:16 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:28:57.177 04:01:16 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:57.177 04:01:16 -- nvmf/common.sh@436 -- # prepare_net_devs 00:28:57.177 04:01:16 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:28:57.177 04:01:16 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:28:57.177 04:01:16 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:57.177 04:01:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:57.177 04:01:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:57.177 04:01:16 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:28:57.177 04:01:16 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:28:57.177 04:01:16 -- nvmf/common.sh@284 -- # xtrace_disable 00:28:57.177 04:01:16 -- common/autotest_common.sh@10 -- # set +x 00:28:59.084 04:01:17 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:28:59.084 04:01:17 -- nvmf/common.sh@290 -- # pci_devs=() 00:28:59.084 04:01:17 -- nvmf/common.sh@290 -- # local -a pci_devs 00:28:59.084 04:01:17 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:28:59.084 04:01:17 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:28:59.084 04:01:17 -- nvmf/common.sh@292 -- # pci_drivers=() 00:28:59.084 04:01:17 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:28:59.084 04:01:17 -- 
nvmf/common.sh@294 -- # net_devs=() 00:28:59.084 04:01:17 -- nvmf/common.sh@294 -- # local -ga net_devs 00:28:59.084 04:01:17 -- nvmf/common.sh@295 -- # e810=() 00:28:59.084 04:01:17 -- nvmf/common.sh@295 -- # local -ga e810 00:28:59.084 04:01:17 -- nvmf/common.sh@296 -- # x722=() 00:28:59.084 04:01:17 -- nvmf/common.sh@296 -- # local -ga x722 00:28:59.084 04:01:17 -- nvmf/common.sh@297 -- # mlx=() 00:28:59.084 04:01:17 -- nvmf/common.sh@297 -- # local -ga mlx 00:28:59.084 04:01:17 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:59.084 04:01:17 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:59.084 04:01:17 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:59.084 04:01:17 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:59.084 04:01:17 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:59.084 04:01:17 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:59.084 04:01:17 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:59.084 04:01:17 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:59.084 04:01:17 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:59.084 04:01:17 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:59.084 04:01:17 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:59.084 04:01:17 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:28:59.084 04:01:17 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:28:59.084 04:01:17 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:28:59.084 04:01:17 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:28:59.084 04:01:17 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:28:59.084 04:01:17 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:28:59.084 04:01:17 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:28:59.084 04:01:17 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:59.084 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:59.085 04:01:17 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:28:59.085 04:01:17 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:28:59.085 04:01:17 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:59.085 04:01:17 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:59.085 04:01:17 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:28:59.085 04:01:17 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:28:59.085 04:01:17 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:59.085 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:59.085 04:01:17 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:28:59.085 04:01:17 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:28:59.085 04:01:17 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:59.085 04:01:17 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:59.085 04:01:17 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:28:59.085 04:01:17 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:28:59.085 04:01:17 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:28:59.085 04:01:17 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:28:59.085 04:01:17 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:28:59.085 04:01:17 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:59.085 04:01:17 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:28:59.085 04:01:17 -- nvmf/common.sh@387 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:59.085 04:01:17 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:59.085 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:59.085 04:01:17 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:28:59.085 04:01:17 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:28:59.085 04:01:17 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:59.085 04:01:17 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:28:59.085 04:01:17 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:59.085 04:01:17 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:59.085 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:59.085 04:01:17 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:28:59.085 04:01:17 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:28:59.085 04:01:17 -- nvmf/common.sh@402 -- # is_hw=yes 00:28:59.085 04:01:17 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:28:59.085 04:01:17 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:28:59.085 04:01:17 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:28:59.085 04:01:17 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:59.085 04:01:17 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:59.085 04:01:17 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:59.085 04:01:17 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:28:59.085 04:01:17 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:59.085 04:01:17 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:59.085 04:01:17 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:28:59.085 04:01:17 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:59.085 04:01:17 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:59.085 04:01:17 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:28:59.085 04:01:17 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:28:59.085 04:01:17 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:28:59.085 04:01:17 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:59.343 04:01:18 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:59.343 04:01:18 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:59.343 04:01:18 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:28:59.343 04:01:18 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:59.343 04:01:18 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:59.343 04:01:18 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:59.343 04:01:18 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:28:59.343 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:59.343 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.117 ms 00:28:59.343 00:28:59.343 --- 10.0.0.2 ping statistics --- 00:28:59.343 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:59.343 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:28:59.343 04:01:18 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:59.343 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:59.343 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.094 ms 00:28:59.343 00:28:59.344 --- 10.0.0.1 ping statistics --- 00:28:59.344 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:59.344 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms 00:28:59.344 04:01:18 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:59.344 04:01:18 -- nvmf/common.sh@410 -- # return 0 00:28:59.344 04:01:18 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:28:59.344 04:01:18 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:59.344 04:01:18 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:28:59.344 04:01:18 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:28:59.344 04:01:18 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:59.344 04:01:18 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:28:59.344 04:01:18 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:28:59.344 04:01:18 -- host/digest.sh@134 -- # trap cleanup SIGINT SIGTERM EXIT 00:28:59.344 04:01:18 -- host/digest.sh@135 -- # run_test nvmf_digest_clean run_digest 00:28:59.344 04:01:18 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:28:59.344 04:01:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:59.344 04:01:18 -- common/autotest_common.sh@10 -- # set +x 00:28:59.344 ************************************ 00:28:59.344 START TEST nvmf_digest_clean 00:28:59.344 ************************************ 00:28:59.344 04:01:18 -- common/autotest_common.sh@1104 -- # run_digest 00:28:59.344 04:01:18 -- host/digest.sh@119 -- # nvmfappstart --wait-for-rpc 00:28:59.344 04:01:18 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:28:59.344 04:01:18 -- common/autotest_common.sh@712 -- # xtrace_disable 00:28:59.344 04:01:18 -- common/autotest_common.sh@10 -- # set +x 00:28:59.344 04:01:18 -- nvmf/common.sh@469 -- # nvmfpid=2500719 00:28:59.344 04:01:18 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:28:59.344 04:01:18 -- nvmf/common.sh@470 -- # waitforlisten 2500719 00:28:59.344 04:01:18 -- common/autotest_common.sh@819 -- # '[' -z 2500719 ']' 00:28:59.344 04:01:18 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:59.344 04:01:18 -- common/autotest_common.sh@824 -- # local max_retries=100 00:28:59.344 04:01:18 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:59.344 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:59.344 04:01:18 -- common/autotest_common.sh@828 -- # xtrace_disable 00:28:59.344 04:01:18 -- common/autotest_common.sh@10 -- # set +x 00:28:59.344 [2024-07-14 04:01:18.175930] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:28:59.344 [2024-07-14 04:01:18.176014] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:59.344 EAL: No free 2048 kB hugepages reported on node 1 00:28:59.344 [2024-07-14 04:01:18.239181] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:59.602 [2024-07-14 04:01:18.326684] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:28:59.602 [2024-07-14 04:01:18.326858] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:59.602 [2024-07-14 04:01:18.326889] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:59.602 [2024-07-14 04:01:18.326905] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:59.602 [2024-07-14 04:01:18.326949] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:59.602 04:01:18 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:28:59.602 04:01:18 -- common/autotest_common.sh@852 -- # return 0 00:28:59.602 04:01:18 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:28:59.602 04:01:18 -- common/autotest_common.sh@718 -- # xtrace_disable 00:28:59.602 04:01:18 -- common/autotest_common.sh@10 -- # set +x 00:28:59.602 04:01:18 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:59.602 04:01:18 -- host/digest.sh@120 -- # common_target_config 00:28:59.602 04:01:18 -- host/digest.sh@43 -- # rpc_cmd 00:28:59.602 04:01:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:59.602 04:01:18 -- common/autotest_common.sh@10 -- # set +x 00:28:59.602 null0 00:28:59.602 [2024-07-14 04:01:18.507057] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:59.602 [2024-07-14 04:01:18.531286] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:59.602 04:01:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:59.602 04:01:18 -- host/digest.sh@122 -- # run_bperf randread 4096 128 00:28:59.602 04:01:18 -- host/digest.sh@77 -- # local rw bs qd 00:28:59.602 04:01:18 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:59.602 04:01:18 -- host/digest.sh@80 -- # rw=randread 00:28:59.602 04:01:18 -- host/digest.sh@80 -- # bs=4096 00:28:59.602 04:01:18 -- host/digest.sh@80 -- # qd=128 00:28:59.602 04:01:18 -- host/digest.sh@82 -- # bperfpid=2500744 00:28:59.602 04:01:18 -- host/digest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:28:59.602 04:01:18 -- host/digest.sh@83 -- # waitforlisten 2500744 /var/tmp/bperf.sock 00:28:59.602 04:01:18 -- common/autotest_common.sh@819 -- # '[' -z 2500744 ']' 00:28:59.602 04:01:18 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:59.602 04:01:18 -- common/autotest_common.sh@824 -- # local max_retries=100 00:28:59.602 04:01:18 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:59.603 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
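Each run_bperf invocation in this digest test follows the same pattern: start bdevperf idle and paused, finish its framework init over RPC, attach the in-namespace target with data digest enabled so every transfer exercises crc32c, drive I/O for two seconds, then read back the accel statistics. An outline of one run, assembled from the commands in this trace rather than from host/digest.sh itself (the socket-wait loop stands in for the test's waitforlisten helper):

    #!/usr/bin/env bash
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    SOCK=/var/tmp/bperf.sock

    # 1. bdevperf: idle (-z) and paused until framework_start_init (--wait-for-rpc).
    "$SPDK/build/examples/bdevperf" -m 2 -r "$SOCK" \
        -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
    bperfpid=$!

    # 2. Wait for the RPC socket, then let the framework come up.
    while [ ! -S "$SOCK" ]; do sleep 0.1; done
    "$SPDK/scripts/rpc.py" -s "$SOCK" framework_start_init

    # 3. Attach over NVMe/TCP with data digest enabled (--ddgst).
    "$SPDK/scripts/rpc.py" -s "$SOCK" bdev_nvme_attach_controller --ddgst \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # 4. Run the workload, then see which accel module executed crc32c.
    "$SPDK/examples/bdev/bdevperf/bdevperf.py" -s "$SOCK" perform_tests
    "$SPDK/scripts/rpc.py" -s "$SOCK" accel_get_stats \
        | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'

    kill "$bperfpid"

The later runs in this log only vary the workload knobs (-w randread/randwrite, -o 4096/131072, -q 128/16); the acceptance check after each one is that the crc32c executed count is non-zero and that the reporting module matches the expected software path.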
00:28:59.603 04:01:18 -- common/autotest_common.sh@828 -- # xtrace_disable 00:28:59.603 04:01:18 -- common/autotest_common.sh@10 -- # set +x 00:28:59.861 [2024-07-14 04:01:18.575638] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:28:59.861 [2024-07-14 04:01:18.575714] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2500744 ] 00:28:59.861 EAL: No free 2048 kB hugepages reported on node 1 00:28:59.861 [2024-07-14 04:01:18.638126] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:59.861 [2024-07-14 04:01:18.726814] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:00.120 04:01:18 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:29:00.120 04:01:18 -- common/autotest_common.sh@852 -- # return 0 00:29:00.120 04:01:18 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:29:00.120 04:01:18 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:29:00.120 04:01:18 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:29:00.379 04:01:19 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:00.379 04:01:19 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:00.635 nvme0n1 00:29:00.635 04:01:19 -- host/digest.sh@91 -- # bperf_py perform_tests 00:29:00.635 04:01:19 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:00.894 Running I/O for 2 seconds... 
00:29:02.797 00:29:02.797 Latency(us) 00:29:02.797 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:02.797 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:29:02.797 nvme0n1 : 2.04 19101.59 74.62 0.00 0.00 6587.87 2475.80 45049.93 00:29:02.797 =================================================================================================================== 00:29:02.797 Total : 19101.59 74.62 0.00 0.00 6587.87 2475.80 45049.93 00:29:02.797 0 00:29:02.797 04:01:21 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:29:02.797 04:01:21 -- host/digest.sh@92 -- # get_accel_stats 00:29:02.797 04:01:21 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:29:02.797 04:01:21 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:29:02.797 04:01:21 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:29:02.797 | select(.opcode=="crc32c") 00:29:02.797 | "\(.module_name) \(.executed)"' 00:29:03.056 04:01:21 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:29:03.056 04:01:21 -- host/digest.sh@93 -- # exp_module=software 00:29:03.056 04:01:21 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:29:03.056 04:01:21 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:29:03.056 04:01:21 -- host/digest.sh@97 -- # killprocess 2500744 00:29:03.056 04:01:21 -- common/autotest_common.sh@926 -- # '[' -z 2500744 ']' 00:29:03.056 04:01:21 -- common/autotest_common.sh@930 -- # kill -0 2500744 00:29:03.056 04:01:21 -- common/autotest_common.sh@931 -- # uname 00:29:03.056 04:01:21 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:29:03.056 04:01:21 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2500744 00:29:03.056 04:01:21 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:29:03.056 04:01:21 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:29:03.056 04:01:21 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2500744' 00:29:03.056 killing process with pid 2500744 00:29:03.056 04:01:21 -- common/autotest_common.sh@945 -- # kill 2500744 00:29:03.056 Received shutdown signal, test time was about 2.000000 seconds 00:29:03.056 00:29:03.056 Latency(us) 00:29:03.056 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:03.056 =================================================================================================================== 00:29:03.056 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:03.056 04:01:21 -- common/autotest_common.sh@950 -- # wait 2500744 00:29:03.315 04:01:22 -- host/digest.sh@123 -- # run_bperf randread 131072 16 00:29:03.315 04:01:22 -- host/digest.sh@77 -- # local rw bs qd 00:29:03.315 04:01:22 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:29:03.315 04:01:22 -- host/digest.sh@80 -- # rw=randread 00:29:03.315 04:01:22 -- host/digest.sh@80 -- # bs=131072 00:29:03.315 04:01:22 -- host/digest.sh@80 -- # qd=16 00:29:03.315 04:01:22 -- host/digest.sh@82 -- # bperfpid=2501167 00:29:03.315 04:01:22 -- host/digest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:29:03.315 04:01:22 -- host/digest.sh@83 -- # waitforlisten 2501167 /var/tmp/bperf.sock 00:29:03.315 04:01:22 -- common/autotest_common.sh@819 -- # '[' -z 2501167 ']' 00:29:03.315 04:01:22 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 
00:29:03.315 04:01:22 -- common/autotest_common.sh@824 -- # local max_retries=100 00:29:03.315 04:01:22 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:03.315 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:03.315 04:01:22 -- common/autotest_common.sh@828 -- # xtrace_disable 00:29:03.315 04:01:22 -- common/autotest_common.sh@10 -- # set +x 00:29:03.606 [2024-07-14 04:01:22.259451] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:29:03.606 [2024-07-14 04:01:22.259522] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2501167 ] 00:29:03.606 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:03.606 Zero copy mechanism will not be used. 00:29:03.606 EAL: No free 2048 kB hugepages reported on node 1 00:29:03.606 [2024-07-14 04:01:22.323842] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:03.606 [2024-07-14 04:01:22.414055] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:03.606 04:01:22 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:29:03.606 04:01:22 -- common/autotest_common.sh@852 -- # return 0 00:29:03.606 04:01:22 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:29:03.606 04:01:22 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:29:03.606 04:01:22 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:29:03.864 04:01:22 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:03.864 04:01:22 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:04.430 nvme0n1 00:29:04.430 04:01:23 -- host/digest.sh@91 -- # bperf_py perform_tests 00:29:04.430 04:01:23 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:04.430 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:04.430 Zero copy mechanism will not be used. 00:29:04.430 Running I/O for 2 seconds... 
00:29:06.973 00:29:06.973 Latency(us) 00:29:06.973 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:06.973 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:29:06.973 nvme0n1 : 2.01 2576.75 322.09 0.00 0.00 6205.07 5801.15 9514.86 00:29:06.973 =================================================================================================================== 00:29:06.973 Total : 2576.75 322.09 0.00 0.00 6205.07 5801.15 9514.86 00:29:06.973 0 00:29:06.973 04:01:25 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:29:06.973 04:01:25 -- host/digest.sh@92 -- # get_accel_stats 00:29:06.973 04:01:25 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:29:06.973 04:01:25 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:29:06.973 04:01:25 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:29:06.973 | select(.opcode=="crc32c") 00:29:06.973 | "\(.module_name) \(.executed)"' 00:29:06.973 04:01:25 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:29:06.974 04:01:25 -- host/digest.sh@93 -- # exp_module=software 00:29:06.974 04:01:25 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:29:06.974 04:01:25 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:29:06.974 04:01:25 -- host/digest.sh@97 -- # killprocess 2501167 00:29:06.974 04:01:25 -- common/autotest_common.sh@926 -- # '[' -z 2501167 ']' 00:29:06.974 04:01:25 -- common/autotest_common.sh@930 -- # kill -0 2501167 00:29:06.974 04:01:25 -- common/autotest_common.sh@931 -- # uname 00:29:06.974 04:01:25 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:29:06.974 04:01:25 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2501167 00:29:06.974 04:01:25 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:29:06.974 04:01:25 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:29:06.974 04:01:25 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2501167' 00:29:06.974 killing process with pid 2501167 00:29:06.974 04:01:25 -- common/autotest_common.sh@945 -- # kill 2501167 00:29:06.974 Received shutdown signal, test time was about 2.000000 seconds 00:29:06.974 00:29:06.974 Latency(us) 00:29:06.974 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:06.974 =================================================================================================================== 00:29:06.974 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:06.974 04:01:25 -- common/autotest_common.sh@950 -- # wait 2501167 00:29:06.974 04:01:25 -- host/digest.sh@124 -- # run_bperf randwrite 4096 128 00:29:06.974 04:01:25 -- host/digest.sh@77 -- # local rw bs qd 00:29:06.974 04:01:25 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:29:06.974 04:01:25 -- host/digest.sh@80 -- # rw=randwrite 00:29:06.974 04:01:25 -- host/digest.sh@80 -- # bs=4096 00:29:06.974 04:01:25 -- host/digest.sh@80 -- # qd=128 00:29:06.974 04:01:25 -- host/digest.sh@82 -- # bperfpid=2501704 00:29:06.974 04:01:25 -- host/digest.sh@83 -- # waitforlisten 2501704 /var/tmp/bperf.sock 00:29:06.974 04:01:25 -- host/digest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:29:06.974 04:01:25 -- common/autotest_common.sh@819 -- # '[' -z 2501704 ']' 00:29:06.974 04:01:25 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 
00:29:06.974 04:01:25 -- common/autotest_common.sh@824 -- # local max_retries=100 00:29:06.974 04:01:25 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:06.974 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:06.974 04:01:25 -- common/autotest_common.sh@828 -- # xtrace_disable 00:29:06.974 04:01:25 -- common/autotest_common.sh@10 -- # set +x 00:29:06.974 [2024-07-14 04:01:25.901341] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:29:06.974 [2024-07-14 04:01:25.901419] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2501704 ] 00:29:07.231 EAL: No free 2048 kB hugepages reported on node 1 00:29:07.231 [2024-07-14 04:01:25.964081] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:07.231 [2024-07-14 04:01:26.050811] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:07.231 04:01:26 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:29:07.231 04:01:26 -- common/autotest_common.sh@852 -- # return 0 00:29:07.231 04:01:26 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:29:07.231 04:01:26 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:29:07.231 04:01:26 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:29:07.489 04:01:26 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:07.489 04:01:26 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:08.054 nvme0n1 00:29:08.054 04:01:26 -- host/digest.sh@91 -- # bperf_py perform_tests 00:29:08.054 04:01:26 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:08.054 Running I/O for 2 seconds... 
00:29:09.950 00:29:09.950 Latency(us) 00:29:09.950 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:09.950 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:09.950 nvme0n1 : 2.00 20688.52 80.81 0.00 0.00 6178.87 3203.98 13398.47 00:29:09.950 =================================================================================================================== 00:29:09.950 Total : 20688.52 80.81 0.00 0.00 6178.87 3203.98 13398.47 00:29:09.950 0 00:29:09.950 04:01:28 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:29:09.950 04:01:28 -- host/digest.sh@92 -- # get_accel_stats 00:29:09.950 04:01:28 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:29:09.950 04:01:28 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:29:09.950 04:01:28 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:29:09.950 | select(.opcode=="crc32c") 00:29:09.950 | "\(.module_name) \(.executed)"' 00:29:10.207 04:01:29 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:29:10.207 04:01:29 -- host/digest.sh@93 -- # exp_module=software 00:29:10.207 04:01:29 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:29:10.207 04:01:29 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:29:10.207 04:01:29 -- host/digest.sh@97 -- # killprocess 2501704 00:29:10.207 04:01:29 -- common/autotest_common.sh@926 -- # '[' -z 2501704 ']' 00:29:10.207 04:01:29 -- common/autotest_common.sh@930 -- # kill -0 2501704 00:29:10.207 04:01:29 -- common/autotest_common.sh@931 -- # uname 00:29:10.207 04:01:29 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:29:10.207 04:01:29 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2501704 00:29:10.207 04:01:29 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:29:10.207 04:01:29 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:29:10.207 04:01:29 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2501704' 00:29:10.207 killing process with pid 2501704 00:29:10.207 04:01:29 -- common/autotest_common.sh@945 -- # kill 2501704 00:29:10.207 Received shutdown signal, test time was about 2.000000 seconds 00:29:10.207 00:29:10.207 Latency(us) 00:29:10.207 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:10.207 =================================================================================================================== 00:29:10.207 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:10.207 04:01:29 -- common/autotest_common.sh@950 -- # wait 2501704 00:29:10.465 04:01:29 -- host/digest.sh@125 -- # run_bperf randwrite 131072 16 00:29:10.465 04:01:29 -- host/digest.sh@77 -- # local rw bs qd 00:29:10.465 04:01:29 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:29:10.465 04:01:29 -- host/digest.sh@80 -- # rw=randwrite 00:29:10.465 04:01:29 -- host/digest.sh@80 -- # bs=131072 00:29:10.465 04:01:29 -- host/digest.sh@80 -- # qd=16 00:29:10.465 04:01:29 -- host/digest.sh@82 -- # bperfpid=2502126 00:29:10.465 04:01:29 -- host/digest.sh@83 -- # waitforlisten 2502126 /var/tmp/bperf.sock 00:29:10.465 04:01:29 -- host/digest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:29:10.465 04:01:29 -- common/autotest_common.sh@819 -- # '[' -z 2502126 ']' 00:29:10.465 04:01:29 -- common/autotest_common.sh@823 -- # local 
rpc_addr=/var/tmp/bperf.sock 00:29:10.465 04:01:29 -- common/autotest_common.sh@824 -- # local max_retries=100 00:29:10.465 04:01:29 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:10.465 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:10.465 04:01:29 -- common/autotest_common.sh@828 -- # xtrace_disable 00:29:10.465 04:01:29 -- common/autotest_common.sh@10 -- # set +x 00:29:10.465 [2024-07-14 04:01:29.358829] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:29:10.465 [2024-07-14 04:01:29.358936] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2502126 ] 00:29:10.465 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:10.465 Zero copy mechanism will not be used. 00:29:10.465 EAL: No free 2048 kB hugepages reported on node 1 00:29:10.722 [2024-07-14 04:01:29.417652] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:10.722 [2024-07-14 04:01:29.505717] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:10.722 04:01:29 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:29:10.723 04:01:29 -- common/autotest_common.sh@852 -- # return 0 00:29:10.723 04:01:29 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:29:10.723 04:01:29 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:29:10.723 04:01:29 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:29:10.980 04:01:29 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:10.980 04:01:29 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:11.546 nvme0n1 00:29:11.546 04:01:30 -- host/digest.sh@91 -- # bperf_py perform_tests 00:29:11.547 04:01:30 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:11.547 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:11.547 Zero copy mechanism will not be used. 00:29:11.547 Running I/O for 2 seconds... 
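Each run_bperf invocation above follows the same RPC choreography against a paused bdevperf instance. Condensed into plain shell for the 131072-byte, qd=16 randwrite case just started; SPDK_DIR is a placeholder for the Jenkins workspace checkout seen in the log, and the harness's waitforlisten polling is reduced to a sleep here:

SPDK_DIR=/path/to/spdk     # placeholder for the workspace checkout used in the log
SOCK=/var/tmp/bperf.sock

# Start bdevperf paused (-z --wait-for-rpc) so the controller can be attached
# with digest options before any I/O is issued.
"$SPDK_DIR"/build/examples/bdevperf -m 2 -r "$SOCK" \
  -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc &
sleep 1    # the real harness polls the socket (waitforlisten) rather than sleeping

# Finish framework init, attach the TCP controller with data digest enabled (--ddgst),
# then drive the timed workload through the bdevperf helper script.
"$SPDK_DIR"/scripts/rpc.py -s "$SOCK" framework_start_init
"$SPDK_DIR"/scripts/rpc.py -s "$SOCK" bdev_nvme_attach_controller --ddgst \
  -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
"$SPDK_DIR"/examples/bdev/bdevperf/bdevperf.py -s "$SOCK" perform_tests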
00:29:14.077 00:29:14.077 Latency(us) 00:29:14.077 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:14.077 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:29:14.077 nvme0n1 : 2.01 1602.03 200.25 0.00 0.00 9955.28 3956.43 13786.83 00:29:14.077 =================================================================================================================== 00:29:14.077 Total : 1602.03 200.25 0.00 0.00 9955.28 3956.43 13786.83 00:29:14.077 0 00:29:14.077 04:01:32 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:29:14.077 04:01:32 -- host/digest.sh@92 -- # get_accel_stats 00:29:14.077 04:01:32 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:29:14.077 04:01:32 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:29:14.077 | select(.opcode=="crc32c") 00:29:14.077 | "\(.module_name) \(.executed)"' 00:29:14.077 04:01:32 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:29:14.077 04:01:32 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:29:14.077 04:01:32 -- host/digest.sh@93 -- # exp_module=software 00:29:14.077 04:01:32 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:29:14.077 04:01:32 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:29:14.077 04:01:32 -- host/digest.sh@97 -- # killprocess 2502126 00:29:14.077 04:01:32 -- common/autotest_common.sh@926 -- # '[' -z 2502126 ']' 00:29:14.077 04:01:32 -- common/autotest_common.sh@930 -- # kill -0 2502126 00:29:14.077 04:01:32 -- common/autotest_common.sh@931 -- # uname 00:29:14.077 04:01:32 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:29:14.078 04:01:32 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2502126 00:29:14.078 04:01:32 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:29:14.078 04:01:32 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:29:14.078 04:01:32 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2502126' 00:29:14.078 killing process with pid 2502126 00:29:14.078 04:01:32 -- common/autotest_common.sh@945 -- # kill 2502126 00:29:14.078 Received shutdown signal, test time was about 2.000000 seconds 00:29:14.078 00:29:14.078 Latency(us) 00:29:14.078 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:14.078 =================================================================================================================== 00:29:14.078 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:14.078 04:01:32 -- common/autotest_common.sh@950 -- # wait 2502126 00:29:14.078 04:01:32 -- host/digest.sh@126 -- # killprocess 2500719 00:29:14.078 04:01:32 -- common/autotest_common.sh@926 -- # '[' -z 2500719 ']' 00:29:14.078 04:01:32 -- common/autotest_common.sh@930 -- # kill -0 2500719 00:29:14.078 04:01:32 -- common/autotest_common.sh@931 -- # uname 00:29:14.078 04:01:32 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:29:14.078 04:01:32 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2500719 00:29:14.078 04:01:32 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:29:14.078 04:01:32 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:29:14.078 04:01:32 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2500719' 00:29:14.078 killing process with pid 2500719 00:29:14.078 04:01:32 -- common/autotest_common.sh@945 -- # kill 2500719 00:29:14.078 04:01:32 -- common/autotest_common.sh@950 -- # wait 2500719 
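The teardown above repeats the harness's killprocess helper for both the bdevperf instance and the nvmf target. Its logic, reduced to the commands visible in the trace (pid 2500719 used as the example; the uname/Linux branch is folded into a comment):

pid=2500719
if kill -0 "$pid" 2>/dev/null; then
  # Linux-only ps invocation; refuse to kill anything running as 'sudo'.
  # SPDK app processes show up here as reactor_N.
  process_name=$(ps --no-headers -o comm= "$pid")
  if [[ "$process_name" != sudo ]]; then
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"    # works because the harness started the process itself
  fi
fi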
00:29:14.337 00:29:14.337 real 0m15.061s 00:29:14.337 user 0m30.163s 00:29:14.337 sys 0m3.840s 00:29:14.337 04:01:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:14.337 04:01:33 -- common/autotest_common.sh@10 -- # set +x 00:29:14.337 ************************************ 00:29:14.337 END TEST nvmf_digest_clean 00:29:14.337 ************************************ 00:29:14.337 04:01:33 -- host/digest.sh@136 -- # run_test nvmf_digest_error run_digest_error 00:29:14.337 04:01:33 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:29:14.337 04:01:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:14.337 04:01:33 -- common/autotest_common.sh@10 -- # set +x 00:29:14.337 ************************************ 00:29:14.337 START TEST nvmf_digest_error 00:29:14.337 ************************************ 00:29:14.337 04:01:33 -- common/autotest_common.sh@1104 -- # run_digest_error 00:29:14.337 04:01:33 -- host/digest.sh@101 -- # nvmfappstart --wait-for-rpc 00:29:14.337 04:01:33 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:29:14.337 04:01:33 -- common/autotest_common.sh@712 -- # xtrace_disable 00:29:14.337 04:01:33 -- common/autotest_common.sh@10 -- # set +x 00:29:14.337 04:01:33 -- nvmf/common.sh@469 -- # nvmfpid=2502567 00:29:14.337 04:01:33 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:29:14.337 04:01:33 -- nvmf/common.sh@470 -- # waitforlisten 2502567 00:29:14.337 04:01:33 -- common/autotest_common.sh@819 -- # '[' -z 2502567 ']' 00:29:14.337 04:01:33 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:14.337 04:01:33 -- common/autotest_common.sh@824 -- # local max_retries=100 00:29:14.337 04:01:33 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:14.337 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:14.337 04:01:33 -- common/autotest_common.sh@828 -- # xtrace_disable 00:29:14.337 04:01:33 -- common/autotest_common.sh@10 -- # set +x 00:29:14.337 [2024-07-14 04:01:33.267095] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:29:14.337 [2024-07-14 04:01:33.267189] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:14.595 EAL: No free 2048 kB hugepages reported on node 1 00:29:14.595 [2024-07-14 04:01:33.338057] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:14.595 [2024-07-14 04:01:33.424154] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:29:14.595 [2024-07-14 04:01:33.424321] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:14.595 [2024-07-14 04:01:33.424340] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:14.595 [2024-07-14 04:01:33.424362] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:14.595 [2024-07-14 04:01:33.424392] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:14.595 04:01:33 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:29:14.595 04:01:33 -- common/autotest_common.sh@852 -- # return 0 00:29:14.595 04:01:33 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:29:14.595 04:01:33 -- common/autotest_common.sh@718 -- # xtrace_disable 00:29:14.595 04:01:33 -- common/autotest_common.sh@10 -- # set +x 00:29:14.595 04:01:33 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:14.595 04:01:33 -- host/digest.sh@103 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:29:14.595 04:01:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:14.595 04:01:33 -- common/autotest_common.sh@10 -- # set +x 00:29:14.595 [2024-07-14 04:01:33.496986] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:29:14.595 04:01:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:14.595 04:01:33 -- host/digest.sh@104 -- # common_target_config 00:29:14.595 04:01:33 -- host/digest.sh@43 -- # rpc_cmd 00:29:14.596 04:01:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:14.596 04:01:33 -- common/autotest_common.sh@10 -- # set +x 00:29:14.853 null0 00:29:14.853 [2024-07-14 04:01:33.618275] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:14.853 [2024-07-14 04:01:33.642471] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:14.853 04:01:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:14.853 04:01:33 -- host/digest.sh@107 -- # run_bperf_err randread 4096 128 00:29:14.853 04:01:33 -- host/digest.sh@54 -- # local rw bs qd 00:29:14.853 04:01:33 -- host/digest.sh@56 -- # rw=randread 00:29:14.853 04:01:33 -- host/digest.sh@56 -- # bs=4096 00:29:14.853 04:01:33 -- host/digest.sh@56 -- # qd=128 00:29:14.853 04:01:33 -- host/digest.sh@58 -- # bperfpid=2502663 00:29:14.853 04:01:33 -- host/digest.sh@60 -- # waitforlisten 2502663 /var/tmp/bperf.sock 00:29:14.853 04:01:33 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:29:14.853 04:01:33 -- common/autotest_common.sh@819 -- # '[' -z 2502663 ']' 00:29:14.853 04:01:33 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:14.853 04:01:33 -- common/autotest_common.sh@824 -- # local max_retries=100 00:29:14.853 04:01:33 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:14.853 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:14.853 04:01:33 -- common/autotest_common.sh@828 -- # xtrace_disable 00:29:14.853 04:01:33 -- common/autotest_common.sh@10 -- # set +x 00:29:14.853 [2024-07-14 04:01:33.686991] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:29:14.853 [2024-07-14 04:01:33.687054] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2502663 ] 00:29:14.853 EAL: No free 2048 kB hugepages reported on node 1 00:29:14.853 [2024-07-14 04:01:33.749894] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:15.110 [2024-07-14 04:01:33.838389] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:16.043 04:01:34 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:29:16.043 04:01:34 -- common/autotest_common.sh@852 -- # return 0 00:29:16.043 04:01:34 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:16.043 04:01:34 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:16.043 04:01:34 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:29:16.043 04:01:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:16.043 04:01:34 -- common/autotest_common.sh@10 -- # set +x 00:29:16.043 04:01:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:16.043 04:01:34 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:16.043 04:01:34 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:16.300 nvme0n1 00:29:16.300 04:01:35 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:29:16.300 04:01:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:16.300 04:01:35 -- common/autotest_common.sh@10 -- # set +x 00:29:16.300 04:01:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:16.300 04:01:35 -- host/digest.sh@69 -- # bperf_py perform_tests 00:29:16.300 04:01:35 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:16.558 Running I/O for 2 seconds... 
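The error variant drives two RPC sockets at once: rpc_cmd talks to the nvmf target (default /var/tmp/spdk.sock), whose crc32c opcode was assigned to the error accel module at startup, while bperf_rpc talks to the bdevperf host on /var/tmp/bperf.sock. The commands visible above, condensed in order (target subsystem and null-bdev creation are elided; RPC and SPDK_DIR are placeholder paths as in the earlier sketch):

RPC="$SPDK_DIR"/scripts/rpc.py

# Target side: crc32c was already routed to the error module via
# 'accel_assign_opc -o crc32c -m error' before framework init (nvmf_tgt ran with --wait-for-rpc).

# Host side: NVMe error stats, unlimited retries, and data digest on the attached controller.
"$RPC" -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
"$RPC" accel_error_inject_error -o crc32c -t disable      # injection off while attaching
"$RPC" -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
  -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# Corrupt the next 256 crc32c operations on the target, then run the workload; each
# affected READ below completes as COMMAND TRANSIENT TRANSPORT ERROR (00/22) with dnr:0,
# so the bdev layer retries instead of failing the I/O outright.
"$RPC" accel_error_inject_error -o crc32c -t corrupt -i 256
"$SPDK_DIR"/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests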
00:29:16.558 [2024-07-14 04:01:35.343704] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2254f10) 00:29:16.558 [2024-07-14 04:01:35.343756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16487 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.558 [2024-07-14 04:01:35.343779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:16.558 [2024-07-14 04:01:35.358251] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2254f10) 00:29:16.558 [2024-07-14 04:01:35.358293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:12790 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.558 [2024-07-14 04:01:35.358322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:16.559 [2024-07-14 04:01:35.370548] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2254f10) 00:29:16.559 [2024-07-14 04:01:35.370589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:10429 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.559 [2024-07-14 04:01:35.370631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:16.559 [2024-07-14 04:01:35.382803] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2254f10) 00:29:16.559 [2024-07-14 04:01:35.382847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:8003 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.559 [2024-07-14 04:01:35.382891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:16.559 [2024-07-14 04:01:35.395468] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2254f10) 00:29:16.559 [2024-07-14 04:01:35.395505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:14324 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.559 [2024-07-14 04:01:35.395525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:16.559 [2024-07-14 04:01:35.408631] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2254f10) 00:29:16.559 [2024-07-14 04:01:35.408676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:23287 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.559 [2024-07-14 04:01:35.408709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:16.559 [2024-07-14 04:01:35.421026] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2254f10) 00:29:16.559 [2024-07-14 04:01:35.421065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:787 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.559 [2024-07-14 04:01:35.421107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:16.559 [2024-07-14 04:01:35.433422] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2254f10) 00:29:16.559 [2024-07-14 04:01:35.433476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:15601 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.559 [2024-07-14 04:01:35.433503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:16.559 [2024-07-14 04:01:35.445804] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2254f10) 00:29:16.559 [2024-07-14 04:01:35.445849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:3906 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.559 [2024-07-14 04:01:35.445908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:16.559 [2024-07-14 04:01:35.458452] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2254f10) 00:29:16.559 [2024-07-14 04:01:35.458496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:16643 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.559 [2024-07-14 04:01:35.458526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:16.559 [2024-07-14 04:01:35.469686] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2254f10) 00:29:16.559 [2024-07-14 04:01:35.469726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:4674 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.559 [2024-07-14 04:01:35.469755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:16.559 [2024-07-14 04:01:35.481944] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2254f10) 00:29:16.559 [2024-07-14 04:01:35.481983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:6893 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.559 [2024-07-14 04:01:35.482026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:16.559 [2024-07-14 04:01:35.493883] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2254f10) 00:29:16.559 [2024-07-14 04:01:35.493922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:21968 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.559 [2024-07-14 04:01:35.493964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:16.818 [2024-07-14 04:01:35.506131] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2254f10) 00:29:16.818 [2024-07-14 04:01:35.506183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:3405 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.818 [2024-07-14 04:01:35.506208] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:16.818 [2024-07-14 04:01:35.518037] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2254f10) 00:29:16.818 [2024-07-14 04:01:35.518075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:9760 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.818 [2024-07-14 04:01:35.518117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:16.818 [2024-07-14 04:01:35.530698] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2254f10) 00:29:16.818 [2024-07-14 04:01:35.530737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:13192 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.818 [2024-07-14 04:01:35.530787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:16.818 [2024-07-14 04:01:35.542140] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2254f10) 00:29:16.818 [2024-07-14 04:01:35.542193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:16198 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.818 [2024-07-14 04:01:35.542220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:16.818 [2024-07-14 04:01:35.553699] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2254f10) 00:29:16.818 [2024-07-14 04:01:35.553753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:24255 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.818 [2024-07-14 04:01:35.553781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:16.818 [2024-07-14 04:01:35.566336] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2254f10) 00:29:16.818 [2024-07-14 04:01:35.566389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:7598 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.818 [2024-07-14 04:01:35.566417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:16.818 [2024-07-14 04:01:35.578211] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2254f10) 00:29:16.818 [2024-07-14 04:01:35.578265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:12736 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.818 [2024-07-14 04:01:35.578294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:16.818 [2024-07-14 04:01:35.589896] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2254f10) 00:29:16.818 [2024-07-14 04:01:35.589927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:2434 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:16.818 [2024-07-14 04:01:35.589959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:16.818 [2024-07-14 04:01:35.601265] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2254f10) 00:29:16.818 [2024-07-14 04:01:35.601302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:9201 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.818 [2024-07-14 04:01:35.601343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:16.818 [2024-07-14 04:01:35.613788] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2254f10) 00:29:16.818 [2024-07-14 04:01:35.613829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:21793 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.818 [2024-07-14 04:01:35.613857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:16.818 [2024-07-14 04:01:35.625601] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2254f10) 00:29:16.818 [2024-07-14 04:01:35.625640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:2885 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.818 [2024-07-14 04:01:35.625681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:16.818 [2024-07-14 04:01:35.637652] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2254f10) 00:29:16.818 [2024-07-14 04:01:35.637687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:19085 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.818 [2024-07-14 04:01:35.637720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:16.818 [2024-07-14 04:01:35.650246] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2254f10) 00:29:16.818 [2024-07-14 04:01:35.650298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:1322 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.818 [2024-07-14 04:01:35.650339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:16.818 [2024-07-14 04:01:35.662093] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2254f10) 00:29:16.818 [2024-07-14 04:01:35.662132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:16348 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.818 [2024-07-14 04:01:35.662174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:16.818 [2024-07-14 04:01:35.673636] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2254f10) 00:29:16.818 [2024-07-14 04:01:35.673689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 
lba:10871 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.818 [2024-07-14 04:01:35.673718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:16.818 [2024-07-14 04:01:35.685619] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2254f10) 00:29:16.818 [2024-07-14 04:01:35.685650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:1701 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.818 [2024-07-14 04:01:35.685682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:16.818 [2024-07-14 04:01:35.698260] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2254f10) 00:29:16.818 [2024-07-14 04:01:35.698299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:15968 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.818 [2024-07-14 04:01:35.698343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:16.818 [2024-07-14 04:01:35.709979] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2254f10) 00:29:16.818 [2024-07-14 04:01:35.710017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:4723 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.818 [2024-07-14 04:01:35.710058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:16.818 [2024-07-14 04:01:35.721733] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2254f10) 00:29:16.818 [2024-07-14 04:01:35.721765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:18533 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.818 [2024-07-14 04:01:35.721797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:16.818 [2024-07-14 04:01:35.734254] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2254f10) 00:29:16.818 [2024-07-14 04:01:35.734307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:2613 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.818 [2024-07-14 04:01:35.734349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:16.818 [2024-07-14 04:01:35.746380] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2254f10) 00:29:16.818 [2024-07-14 04:01:35.746419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:2365 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.818 [2024-07-14 04:01:35.746462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.077 [2024-07-14 04:01:35.758657] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2254f10) 00:29:17.077 [2024-07-14 04:01:35.758711] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:20650 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.077 [2024-07-14 04:01:35.758737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.077 [2024-07-14 04:01:35.771247] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2254f10) 00:29:17.077 [2024-07-14 04:01:35.771301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:23247 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.077 [2024-07-14 04:01:35.771329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.077 [2024-07-14 04:01:35.783009] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2254f10) 00:29:17.077 [2024-07-14 04:01:35.783048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:18390 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.077 [2024-07-14 04:01:35.783090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.077 [2024-07-14 04:01:35.795001] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2254f10) 00:29:17.077 [2024-07-14 04:01:35.795054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:1815 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.077 [2024-07-14 04:01:35.795084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.077 [2024-07-14 04:01:35.807209] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2254f10) 00:29:17.077 [2024-07-14 04:01:35.807263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:15218 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.077 [2024-07-14 04:01:35.807290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.077 [2024-07-14 04:01:35.819168] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2254f10) 00:29:17.077 [2024-07-14 04:01:35.819221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:24099 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.077 [2024-07-14 04:01:35.819247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.077 [2024-07-14 04:01:35.831268] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2254f10) 00:29:17.077 [2024-07-14 04:01:35.831320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:8791 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.077 [2024-07-14 04:01:35.831347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.077 [2024-07-14 04:01:35.842574] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2254f10) 
00:29:17.077 [2024-07-14 04:01:35.842626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:20214 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.077 [2024-07-14 04:01:35.842658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.077 [2024-07-14 04:01:35.855342] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2254f10) 00:29:17.077 [2024-07-14 04:01:35.855397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:8945 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.077 [2024-07-14 04:01:35.855424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.077 [2024-07-14 04:01:35.867239] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2254f10) 00:29:17.077 [2024-07-14 04:01:35.867272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:16258 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.077 [2024-07-14 04:01:35.867305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.077 [2024-07-14 04:01:35.878739] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2254f10) 00:29:17.077 [2024-07-14 04:01:35.878791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:2738 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.077 [2024-07-14 04:01:35.878833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.077 [2024-07-14 04:01:35.890907] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2254f10) 00:29:17.077 [2024-07-14 04:01:35.890947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:12177 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.077 [2024-07-14 04:01:35.890990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.077 [2024-07-14 04:01:35.902730] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2254f10) 00:29:17.077 [2024-07-14 04:01:35.902770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:20602 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.077 [2024-07-14 04:01:35.902813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.077 [2024-07-14 04:01:35.914877] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2254f10) 00:29:17.077 [2024-07-14 04:01:35.914907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8891 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.077 [2024-07-14 04:01:35.914939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.077 [2024-07-14 04:01:35.927029] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x2254f10) 00:29:17.077 [2024-07-14 04:01:35.927069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:16914 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.077 [2024-07-14 04:01:35.927113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.077 [2024-07-14 04:01:35.938859] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2254f10) 00:29:17.077 [2024-07-14 04:01:35.938921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:7504 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.077 [2024-07-14 04:01:35.938948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.077 [2024-07-14 04:01:35.950681] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2254f10) 00:29:17.077 [2024-07-14 04:01:35.950712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:13456 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.077 [2024-07-14 04:01:35.950745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.077 [2024-07-14 04:01:35.962300] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2254f10) 00:29:17.077 [2024-07-14 04:01:35.962353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21582 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.078 [2024-07-14 04:01:35.962380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.078 [2024-07-14 04:01:35.974737] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2254f10) 00:29:17.078 [2024-07-14 04:01:35.974789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:8147 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.078 [2024-07-14 04:01:35.974829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.078 [2024-07-14 04:01:35.986508] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2254f10) 00:29:17.078 [2024-07-14 04:01:35.986574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:17482 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.078 [2024-07-14 04:01:35.986602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.078 [2024-07-14 04:01:35.998248] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2254f10) 00:29:17.078 [2024-07-14 04:01:35.998280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:12519 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.078 [2024-07-14 04:01:35.998313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.078 [2024-07-14 04:01:36.010034] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2254f10) 00:29:17.078 [2024-07-14 04:01:36.010089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:5993 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.078 [2024-07-14 04:01:36.010117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.336 [2024-07-14 04:01:36.022451] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2254f10) 00:29:17.336 [2024-07-14 04:01:36.022490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:13183 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.336 [2024-07-14 04:01:36.022534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.336 [2024-07-14 04:01:36.034580] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2254f10) 00:29:17.336 [2024-07-14 04:01:36.034632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:7751 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.336 [2024-07-14 04:01:36.034659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.336 [2024-07-14 04:01:36.046301] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2254f10) 00:29:17.336 [2024-07-14 04:01:36.046331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:20065 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.337 [2024-07-14 04:01:36.046369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.337 [2024-07-14 04:01:36.059022] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2254f10) 00:29:17.337 [2024-07-14 04:01:36.059074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:23534 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.337 [2024-07-14 04:01:36.059104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.337 [2024-07-14 04:01:36.070842] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2254f10) 00:29:17.337 [2024-07-14 04:01:36.070903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:6869 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.337 [2024-07-14 04:01:36.070932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.337 [2024-07-14 04:01:36.082581] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2254f10) 00:29:17.337 [2024-07-14 04:01:36.082620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:15550 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.337 [2024-07-14 04:01:36.082661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:29:17.337 [2024-07-14 04:01:36.095295] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2254f10) 00:29:17.337 [2024-07-14 04:01:36.095348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:10004 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.337 [2024-07-14 04:01:36.095374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.337 [2024-07-14 04:01:36.107228] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2254f10) 00:29:17.337 [2024-07-14 04:01:36.107266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:11370 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.337 [2024-07-14 04:01:36.107293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.337 [2024-07-14 04:01:36.119030] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2254f10) 00:29:17.337 [2024-07-14 04:01:36.119062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:9149 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.337 [2024-07-14 04:01:36.119095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.337 [2024-07-14 04:01:36.130532] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2254f10) 00:29:17.337 [2024-07-14 04:01:36.130563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:17111 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.337 [2024-07-14 04:01:36.130595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.337 [2024-07-14 04:01:36.142809] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2254f10) 00:29:17.337 [2024-07-14 04:01:36.142846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:3137 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.337 [2024-07-14 04:01:36.142897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.337 [2024-07-14 04:01:36.154645] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2254f10) 00:29:17.337 [2024-07-14 04:01:36.154703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:9245 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.337 [2024-07-14 04:01:36.154730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.337 [2024-07-14 04:01:36.166263] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2254f10) 00:29:17.337 [2024-07-14 04:01:36.166301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:3668 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.337 [2024-07-14 04:01:36.166328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.337 [2024-07-14 04:01:36.179107] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2254f10) 00:29:17.337 [2024-07-14 04:01:36.179147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:5994 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.337 [2024-07-14 04:01:36.179175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.337 [2024-07-14 04:01:36.190754] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2254f10) 00:29:17.337 [2024-07-14 04:01:36.190807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:24669 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.337 [2024-07-14 04:01:36.190834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.337 [2024-07-14 04:01:36.202278] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2254f10) 00:29:17.337 [2024-07-14 04:01:36.202329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:17656 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.337 [2024-07-14 04:01:36.202369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.337 [2024-07-14 04:01:36.214980] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2254f10) 00:29:17.337 [2024-07-14 04:01:36.215019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:21222 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.337 [2024-07-14 04:01:36.215061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.337 [2024-07-14 04:01:36.226726] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2254f10) 00:29:17.337 [2024-07-14 04:01:36.226765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:8957 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.337 [2024-07-14 04:01:36.226809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.337 [2024-07-14 04:01:36.238792] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2254f10) 00:29:17.337 [2024-07-14 04:01:36.238830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:5387 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.337 [2024-07-14 04:01:36.238875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.337 [2024-07-14 04:01:36.251346] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2254f10) 00:29:17.337 [2024-07-14 04:01:36.251384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:24985 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.337 [2024-07-14 04:01:36.251425] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.337 [2024-07-14 04:01:36.263359] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2254f10) 00:29:17.337 [2024-07-14 04:01:36.263398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:5102 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.337 [2024-07-14 04:01:36.263441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.337 [2024-07-14 04:01:36.275271] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2254f10) 00:29:17.337 [2024-07-14 04:01:36.275310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:13769 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.337 [2024-07-14 04:01:36.275351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.596 [2024-07-14 04:01:36.287714] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2254f10) 00:29:17.596 [2024-07-14 04:01:36.287747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:3360 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.596 [2024-07-14 04:01:36.287779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.596 [2024-07-14 04:01:36.300044] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2254f10) 00:29:17.596 [2024-07-14 04:01:36.300095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:12113 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.596 [2024-07-14 04:01:36.300123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.596 [2024-07-14 04:01:36.312271] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2254f10) 00:29:17.596 [2024-07-14 04:01:36.312318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:21527 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.596 [2024-07-14 04:01:36.312362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.596 [2024-07-14 04:01:36.323705] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2254f10) 00:29:17.596 [2024-07-14 04:01:36.323744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:5555 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.596 [2024-07-14 04:01:36.323786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.596 [2024-07-14 04:01:36.336442] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2254f10) 00:29:17.596 [2024-07-14 04:01:36.336495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5789 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:17.596 [2024-07-14 04:01:36.336522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.596 [2024-07-14 04:01:36.348529] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2254f10) 00:29:17.596 [2024-07-14 04:01:36.348568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:14366 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.596 [2024-07-14 04:01:36.348611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.596 [2024-07-14 04:01:36.360386] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2254f10) 00:29:17.596 [2024-07-14 04:01:36.360416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:12349 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.596 [2024-07-14 04:01:36.360454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.596 [2024-07-14 04:01:36.372084] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2254f10) 00:29:17.596 [2024-07-14 04:01:36.372115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5206 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.596 [2024-07-14 04:01:36.372146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.596 [2024-07-14 04:01:36.384466] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2254f10) 00:29:17.596 [2024-07-14 04:01:36.384504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:18615 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.596 [2024-07-14 04:01:36.384548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.596 [2024-07-14 04:01:36.396556] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2254f10) 00:29:17.596 [2024-07-14 04:01:36.396610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:16560 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.596 [2024-07-14 04:01:36.396638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.596 [2024-07-14 04:01:36.408746] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2254f10) 00:29:17.596 [2024-07-14 04:01:36.408797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13676 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.597 [2024-07-14 04:01:36.408838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.597 [2024-07-14 04:01:36.420222] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2254f10) 00:29:17.597 [2024-07-14 04:01:36.420274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 
lba:19703 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.597 [2024-07-14 04:01:36.420302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.597 [2024-07-14 04:01:36.432774] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2254f10) 00:29:17.597 [2024-07-14 04:01:36.432827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:3537 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.597 [2024-07-14 04:01:36.432856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.597 [2024-07-14 04:01:36.444714] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2254f10) 00:29:17.597 [2024-07-14 04:01:36.444752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:13926 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.597 [2024-07-14 04:01:36.444794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.597 [2024-07-14 04:01:36.456541] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2254f10) 00:29:17.597 [2024-07-14 04:01:36.456592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:5854 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.597 [2024-07-14 04:01:36.456617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.597 [2024-07-14 04:01:36.469086] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2254f10) 00:29:17.597 [2024-07-14 04:01:36.469127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:25232 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.597 [2024-07-14 04:01:36.469169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.597 [2024-07-14 04:01:36.480815] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2254f10) 00:29:17.597 [2024-07-14 04:01:36.480874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:14719 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.597 [2024-07-14 04:01:36.480902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.597 [2024-07-14 04:01:36.492876] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2254f10) 00:29:17.597 [2024-07-14 04:01:36.492913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23889 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.597 [2024-07-14 04:01:36.492954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.597 [2024-07-14 04:01:36.504947] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2254f10) 00:29:17.597 [2024-07-14 04:01:36.505000] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11264 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.597 [2024-07-14 04:01:36.505028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.597 [2024-07-14 04:01:36.516910] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2254f10) 00:29:17.597 [2024-07-14 04:01:36.516964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:13765 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.597 [2024-07-14 04:01:36.516992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.597 [2024-07-14 04:01:36.528238] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2254f10) 00:29:17.597 [2024-07-14 04:01:36.528290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:7812 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.597 [2024-07-14 04:01:36.528318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.856 [2024-07-14 04:01:36.541292] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2254f10) 00:29:17.856 [2024-07-14 04:01:36.541331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:12847 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.856 [2024-07-14 04:01:36.541374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.857 [2024-07-14 04:01:36.553066] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2254f10) 00:29:17.857 [2024-07-14 04:01:36.553106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:5741 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.857 [2024-07-14 04:01:36.553147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.857 [2024-07-14 04:01:36.564891] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2254f10) 00:29:17.857 [2024-07-14 04:01:36.564944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18160 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.857 [2024-07-14 04:01:36.564979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.857 [2024-07-14 04:01:36.577025] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2254f10) 00:29:17.857 [2024-07-14 04:01:36.577064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:15571 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.857 [2024-07-14 04:01:36.577105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.857 [2024-07-14 04:01:36.589195] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2254f10) 
00:29:17.857 [2024-07-14 04:01:36.589247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4209 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.857 [2024-07-14 04:01:36.589276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.857 [2024-07-14 04:01:36.600881] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2254f10) 00:29:17.857 [2024-07-14 04:01:36.600920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:6129 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.857 [2024-07-14 04:01:36.600961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.857 [2024-07-14 04:01:36.613037] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2254f10) 00:29:17.857 [2024-07-14 04:01:36.613068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:17431 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.857 [2024-07-14 04:01:36.613085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.857 [2024-07-14 04:01:36.625044] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2254f10) 00:29:17.857 [2024-07-14 04:01:36.625096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:2961 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.857 [2024-07-14 04:01:36.625124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.857 [2024-07-14 04:01:36.637062] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2254f10) 00:29:17.857 [2024-07-14 04:01:36.637116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:52 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.857 [2024-07-14 04:01:36.637142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.857 [2024-07-14 04:01:36.648845] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2254f10) 00:29:17.857 [2024-07-14 04:01:36.648882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:5254 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.857 [2024-07-14 04:01:36.648916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.857 [2024-07-14 04:01:36.660922] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2254f10) 00:29:17.857 [2024-07-14 04:01:36.660975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:19186 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.857 [2024-07-14 04:01:36.661003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.857 [2024-07-14 04:01:36.672915] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x2254f10) 00:29:17.857 [2024-07-14 04:01:36.672972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:7953 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.857 [2024-07-14 04:01:36.673000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.857 [2024-07-14 04:01:36.684717] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2254f10) 00:29:17.857 [2024-07-14 04:01:36.684756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:13386 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.857 [2024-07-14 04:01:36.684797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.857 [2024-07-14 04:01:36.697188] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2254f10) 00:29:17.857 [2024-07-14 04:01:36.697227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:3160 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.857 [2024-07-14 04:01:36.697254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.857 [2024-07-14 04:01:36.709001] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2254f10) 00:29:17.857 [2024-07-14 04:01:36.709040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:8939 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.857 [2024-07-14 04:01:36.709082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.857 [2024-07-14 04:01:36.720610] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2254f10) 00:29:17.857 [2024-07-14 04:01:36.720648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:7894 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.857 [2024-07-14 04:01:36.720675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.857 [2024-07-14 04:01:36.732643] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2254f10) 00:29:17.857 [2024-07-14 04:01:36.732673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:6193 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.857 [2024-07-14 04:01:36.732705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.857 [2024-07-14 04:01:36.744884] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2254f10) 00:29:17.857 [2024-07-14 04:01:36.744931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:17860 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.857 [2024-07-14 04:01:36.744975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.857 [2024-07-14 04:01:36.756726] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2254f10) 00:29:17.857 [2024-07-14 04:01:36.756779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:20246 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.857 [2024-07-14 04:01:36.756806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.857 [2024-07-14 04:01:36.768620] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2254f10) 00:29:17.857 [2024-07-14 04:01:36.768673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:11155 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.857 [2024-07-14 04:01:36.768699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.857 [2024-07-14 04:01:36.780964] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2254f10) 00:29:17.857 [2024-07-14 04:01:36.781016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:10778 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.857 [2024-07-14 04:01:36.781059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.857 [2024-07-14 04:01:36.792653] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2254f10) 00:29:17.857 [2024-07-14 04:01:36.792692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:15188 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.857 [2024-07-14 04:01:36.792734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.116 [2024-07-14 04:01:36.804810] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2254f10) 00:29:18.116 [2024-07-14 04:01:36.804863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:59 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.116 [2024-07-14 04:01:36.804914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.116 [2024-07-14 04:01:36.817320] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2254f10) 00:29:18.116 [2024-07-14 04:01:36.817374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:7468 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.116 [2024-07-14 04:01:36.817402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.116 [2024-07-14 04:01:36.829323] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2254f10) 00:29:18.116 [2024-07-14 04:01:36.829378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11491 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.116 [2024-07-14 04:01:36.829405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:29:18.116 [2024-07-14 04:01:36.841072] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2254f10) 00:29:18.116 [2024-07-14 04:01:36.841112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:2500 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.116 [2024-07-14 04:01:36.841154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.116 [2024-07-14 04:01:36.853510] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2254f10) 00:29:18.116 [2024-07-14 04:01:36.853549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:21203 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.116 [2024-07-14 04:01:36.853590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.116 [2024-07-14 04:01:36.865381] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2254f10) 00:29:18.116 [2024-07-14 04:01:36.865421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:4277 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.116 [2024-07-14 04:01:36.865463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.116 [2024-07-14 04:01:36.876998] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2254f10) 00:29:18.116 [2024-07-14 04:01:36.877038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:2587 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.116 [2024-07-14 04:01:36.877088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.116 [2024-07-14 04:01:36.888764] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2254f10) 00:29:18.116 [2024-07-14 04:01:36.888819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:22015 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.116 [2024-07-14 04:01:36.888847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.116 [2024-07-14 04:01:36.901405] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2254f10) 00:29:18.116 [2024-07-14 04:01:36.901445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:19717 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.116 [2024-07-14 04:01:36.901488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.116 [2024-07-14 04:01:36.913548] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2254f10) 00:29:18.116 [2024-07-14 04:01:36.913601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:3698 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.116 [2024-07-14 04:01:36.913628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.116 [2024-07-14 04:01:36.925386] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2254f10) 00:29:18.116 [2024-07-14 04:01:36.925424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:22532 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.116 [2024-07-14 04:01:36.925465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.116 [2024-07-14 04:01:36.937966] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2254f10) 00:29:18.116 [2024-07-14 04:01:36.938005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:24354 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.116 [2024-07-14 04:01:36.938047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.116 [2024-07-14 04:01:36.950042] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2254f10) 00:29:18.116 [2024-07-14 04:01:36.950081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20648 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.116 [2024-07-14 04:01:36.950124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.116 [2024-07-14 04:01:36.962080] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2254f10) 00:29:18.116 [2024-07-14 04:01:36.962133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:11220 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.116 [2024-07-14 04:01:36.962161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.116 [2024-07-14 04:01:36.973839] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2254f10) 00:29:18.116 [2024-07-14 04:01:36.973899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17291 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.116 [2024-07-14 04:01:36.973931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.116 [2024-07-14 04:01:36.985962] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2254f10) 00:29:18.116 [2024-07-14 04:01:36.986014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17525 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.116 [2024-07-14 04:01:36.986044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.116 [2024-07-14 04:01:36.998250] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2254f10) 00:29:18.116 [2024-07-14 04:01:36.998290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24900 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.116 [2024-07-14 04:01:36.998333] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.116 [2024-07-14 04:01:37.010061] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2254f10) 00:29:18.116 [2024-07-14 04:01:37.010101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10625 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.116 [2024-07-14 04:01:37.010154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.116 [2024-07-14 04:01:37.022224] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2254f10) 00:29:18.116 [2024-07-14 04:01:37.022255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:594 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.116 [2024-07-14 04:01:37.022288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.116 [2024-07-14 04:01:37.034273] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2254f10) 00:29:18.116 [2024-07-14 04:01:37.034314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:24441 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.116 [2024-07-14 04:01:37.034341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.116 [2024-07-14 04:01:37.046065] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2254f10) 00:29:18.116 [2024-07-14 04:01:37.046122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:4624 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.116 [2024-07-14 04:01:37.046152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.375 [2024-07-14 04:01:37.058365] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2254f10) 00:29:18.375 [2024-07-14 04:01:37.058398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:5453 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.375 [2024-07-14 04:01:37.058432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.375 [2024-07-14 04:01:37.070515] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2254f10) 00:29:18.375 [2024-07-14 04:01:37.070555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:17574 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.375 [2024-07-14 04:01:37.070599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.375 [2024-07-14 04:01:37.082199] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2254f10) 00:29:18.375 [2024-07-14 04:01:37.082254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13780 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:18.375 [2024-07-14 04:01:37.082281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.375 [2024-07-14 04:01:37.094271] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2254f10) 00:29:18.375 [2024-07-14 04:01:37.094324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:20468 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.375 [2024-07-14 04:01:37.094344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.375 [2024-07-14 04:01:37.106639] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2254f10) 00:29:18.375 [2024-07-14 04:01:37.106678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:16440 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.375 [2024-07-14 04:01:37.106718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.375 [2024-07-14 04:01:37.118537] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2254f10) 00:29:18.375 [2024-07-14 04:01:37.118590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:19336 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.375 [2024-07-14 04:01:37.118632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.375 [2024-07-14 04:01:37.130774] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2254f10) 00:29:18.375 [2024-07-14 04:01:37.130825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:9061 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.375 [2024-07-14 04:01:37.130852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.375 [2024-07-14 04:01:37.142328] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2254f10) 00:29:18.375 [2024-07-14 04:01:37.142381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:9095 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.375 [2024-07-14 04:01:37.142407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.375 [2024-07-14 04:01:37.155028] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2254f10) 00:29:18.375 [2024-07-14 04:01:37.155068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:23673 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.376 [2024-07-14 04:01:37.155111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.376 [2024-07-14 04:01:37.166940] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2254f10) 00:29:18.376 [2024-07-14 04:01:37.166972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 
lba:17062 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.376 [2024-07-14 04:01:37.167005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.376 [2024-07-14 04:01:37.178742] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2254f10) 00:29:18.376 [2024-07-14 04:01:37.178776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17462 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.376 [2024-07-14 04:01:37.178811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.376 [2024-07-14 04:01:37.191287] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2254f10) 00:29:18.376 [2024-07-14 04:01:37.191345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:11026 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.376 [2024-07-14 04:01:37.191371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.376 [2024-07-14 04:01:37.203050] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2254f10) 00:29:18.376 [2024-07-14 04:01:37.203089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:7026 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.376 [2024-07-14 04:01:37.203117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.376 [2024-07-14 04:01:37.214589] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2254f10) 00:29:18.376 [2024-07-14 04:01:37.214629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:13922 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.376 [2024-07-14 04:01:37.214671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.376 [2024-07-14 04:01:37.227200] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2254f10) 00:29:18.376 [2024-07-14 04:01:37.227253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:14980 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.376 [2024-07-14 04:01:37.227297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.376 [2024-07-14 04:01:37.239303] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2254f10) 00:29:18.376 [2024-07-14 04:01:37.239343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:14474 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.376 [2024-07-14 04:01:37.239371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.376 [2024-07-14 04:01:37.251256] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2254f10) 00:29:18.376 [2024-07-14 04:01:37.251308] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:15287 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.376 [2024-07-14 04:01:37.251334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.376 [2024-07-14 04:01:37.263091] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2254f10) 00:29:18.376 [2024-07-14 04:01:37.263128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:13769 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.376 [2024-07-14 04:01:37.263169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.376 [2024-07-14 04:01:37.275498] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2254f10) 00:29:18.376 [2024-07-14 04:01:37.275537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3256 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.376 [2024-07-14 04:01:37.275580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.376 [2024-07-14 04:01:37.287670] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2254f10) 00:29:18.376 [2024-07-14 04:01:37.287709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5016 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.376 [2024-07-14 04:01:37.287752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.376 [2024-07-14 04:01:37.299593] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2254f10) 00:29:18.376 [2024-07-14 04:01:37.299632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:22283 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.376 [2024-07-14 04:01:37.299675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.376 [2024-07-14 04:01:37.312233] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2254f10) 00:29:18.376 [2024-07-14 04:01:37.312272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:17564 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.376 [2024-07-14 04:01:37.312316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.635 [2024-07-14 04:01:37.324351] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2254f10) 00:29:18.636 [2024-07-14 04:01:37.324395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:23408 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.636 [2024-07-14 04:01:37.324423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.636 00:29:18.636 Latency(us) 00:29:18.636 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:18.636 Job: nvme0n1 (Core Mask 
0x2, workload: randread, depth: 128, IO size: 4096) 00:29:18.636 nvme0n1 : 2.00 21039.36 82.19 0.00 0.00 6075.00 3301.07 21651.15 00:29:18.636 =================================================================================================================== 00:29:18.636 Total : 21039.36 82.19 0.00 0.00 6075.00 3301.07 21651.15 00:29:18.636 0 00:29:18.636 04:01:37 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:29:18.636 04:01:37 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:29:18.636 04:01:37 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:29:18.636 | .driver_specific 00:29:18.636 | .nvme_error 00:29:18.636 | .status_code 00:29:18.636 | .command_transient_transport_error' 00:29:18.636 04:01:37 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:29:18.894 04:01:37 -- host/digest.sh@71 -- # (( 165 > 0 )) 00:29:18.894 04:01:37 -- host/digest.sh@73 -- # killprocess 2502663 00:29:18.894 04:01:37 -- common/autotest_common.sh@926 -- # '[' -z 2502663 ']' 00:29:18.894 04:01:37 -- common/autotest_common.sh@930 -- # kill -0 2502663 00:29:18.894 04:01:37 -- common/autotest_common.sh@931 -- # uname 00:29:18.894 04:01:37 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:29:18.894 04:01:37 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2502663 00:29:18.894 04:01:37 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:29:18.894 04:01:37 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:29:18.894 04:01:37 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2502663' 00:29:18.894 killing process with pid 2502663 00:29:18.894 04:01:37 -- common/autotest_common.sh@945 -- # kill 2502663 00:29:18.894 Received shutdown signal, test time was about 2.000000 seconds 00:29:18.894 00:29:18.894 Latency(us) 00:29:18.894 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:18.894 =================================================================================================================== 00:29:18.894 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:18.894 04:01:37 -- common/autotest_common.sh@950 -- # wait 2502663 00:29:19.152 04:01:37 -- host/digest.sh@108 -- # run_bperf_err randread 131072 16 00:29:19.152 04:01:37 -- host/digest.sh@54 -- # local rw bs qd 00:29:19.152 04:01:37 -- host/digest.sh@56 -- # rw=randread 00:29:19.152 04:01:37 -- host/digest.sh@56 -- # bs=131072 00:29:19.152 04:01:37 -- host/digest.sh@56 -- # qd=16 00:29:19.152 04:01:37 -- host/digest.sh@58 -- # bperfpid=2503147 00:29:19.152 04:01:37 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:29:19.152 04:01:37 -- host/digest.sh@60 -- # waitforlisten 2503147 /var/tmp/bperf.sock 00:29:19.152 04:01:37 -- common/autotest_common.sh@819 -- # '[' -z 2503147 ']' 00:29:19.152 04:01:37 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:19.152 04:01:37 -- common/autotest_common.sh@824 -- # local max_retries=100 00:29:19.152 04:01:37 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:19.152 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
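The get_transient_errcount trace above is the pass/fail check for the run that just finished: bdev_get_iostat is queried over the bdevperf RPC socket and jq narrows the per-bdev NVMe error statistics down to the COMMAND TRANSIENT TRANSPORT ERROR counter, which the injected data digest errors increment (165 in this run, so the (( 165 > 0 )) check passes). A minimal stand-alone equivalent, assuming SPDK's scripts/rpc.py relative to the repository root and the same socket and bdev name as logged (the errcount variable name is illustrative):

    # Read iostat for nvme0n1 over the bdevperf RPC socket and extract the
    # transient transport error counter that the corrupted data digests bump.
    errcount=$(scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
    (( errcount > 0 ))   # the test only passes if at least one such error was recorded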
00:29:19.152 04:01:37 -- common/autotest_common.sh@828 -- # xtrace_disable 00:29:19.152 04:01:37 -- common/autotest_common.sh@10 -- # set +x 00:29:19.152 [2024-07-14 04:01:37.907304] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:29:19.152 [2024-07-14 04:01:37.907384] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2503147 ] 00:29:19.152 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:19.152 Zero copy mechanism will not be used. 00:29:19.152 EAL: No free 2048 kB hugepages reported on node 1 00:29:19.152 [2024-07-14 04:01:37.965597] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:19.152 [2024-07-14 04:01:38.049551] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:20.086 04:01:38 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:29:20.086 04:01:38 -- common/autotest_common.sh@852 -- # return 0 00:29:20.086 04:01:38 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:20.086 04:01:38 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:20.344 04:01:39 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:29:20.344 04:01:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:20.344 04:01:39 -- common/autotest_common.sh@10 -- # set +x 00:29:20.344 04:01:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:20.344 04:01:39 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:20.344 04:01:39 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:20.601 nvme0n1 00:29:20.601 04:01:39 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:29:20.601 04:01:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:20.601 04:01:39 -- common/autotest_common.sh@10 -- # set +x 00:29:20.601 04:01:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:20.601 04:01:39 -- host/digest.sh@69 -- # bperf_py perform_tests 00:29:20.601 04:01:39 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:20.859 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:20.860 Zero copy mechanism will not be used. 00:29:20.860 Running I/O for 2 seconds... 
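Just before the 2-second workload above starts, the trace repeats the preparation sequence for this larger run (randread, -o 131072, -q 16): per-bdev NVMe error counters and unlimited retries are enabled in the bdevperf process, the crc32c error injection is cleared while the controller attaches, the subsystem is attached over TCP with data digest (--ddgst) enabled, and only then is the injection re-armed to corrupt every 32nd crc32c operation, presumably so the connect itself is not hit by a bad digest. The following is a condensed sketch of the rpc.py calls shown in the trace, not the test script itself; variable names are illustrative, while addresses, the NQN, and socket paths are taken from the log:

    BPERF_RPC='scripts/rpc.py -s /var/tmp/bperf.sock'   # bdevperf's RPC socket, as logged
    APP_RPC='scripts/rpc.py'                             # rpc_cmd in the trace (no -s flag shown): the application's default RPC socket

    # keep NVMe error statistics and retry transient errors indefinitely
    $BPERF_RPC bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    # make sure no crc32c corruption is active while the controller attaches
    $APP_RPC accel_error_inject_error -o crc32c -t disable
    # attach the subsystem over TCP with data digest enabled
    $BPERF_RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # re-arm the injection: corrupt every 32nd crc32c operation from here on
    $APP_RPC accel_error_inject_error -o crc32c -t corrupt -i 32
    # run the timed workload in the already-running bdevperf instance
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests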
00:29:20.860 [2024-07-14 04:01:39.578255] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ab8de0) 00:29:20.860 [2024-07-14 04:01:39.578314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.860 [2024-07-14 04:01:39.578338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:20.860 [2024-07-14 04:01:39.591540] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ab8de0) 00:29:20.860 [2024-07-14 04:01:39.591574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.860 [2024-07-14 04:01:39.591606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:20.860 [2024-07-14 04:01:39.605117] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ab8de0) 00:29:20.860 [2024-07-14 04:01:39.605153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.860 [2024-07-14 04:01:39.605170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:20.860 [2024-07-14 04:01:39.618545] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ab8de0) 00:29:20.860 [2024-07-14 04:01:39.618578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.860 [2024-07-14 04:01:39.618597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:20.860 [2024-07-14 04:01:39.631673] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ab8de0) 00:29:20.860 [2024-07-14 04:01:39.631706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.860 [2024-07-14 04:01:39.631725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:20.860 [2024-07-14 04:01:39.645061] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ab8de0) 00:29:20.860 [2024-07-14 04:01:39.645090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.860 [2024-07-14 04:01:39.645107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:20.860 [2024-07-14 04:01:39.658489] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ab8de0) 00:29:20.860 [2024-07-14 04:01:39.658522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.860 [2024-07-14 04:01:39.658541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:20.860 [2024-07-14 04:01:39.671737] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ab8de0) 00:29:20.860 [2024-07-14 04:01:39.671769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.860 [2024-07-14 04:01:39.671788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:20.860 [2024-07-14 04:01:39.685032] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ab8de0) 00:29:20.860 [2024-07-14 04:01:39.685062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.860 [2024-07-14 04:01:39.685079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:20.860 [2024-07-14 04:01:39.698257] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ab8de0) 00:29:20.860 [2024-07-14 04:01:39.698289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.860 [2024-07-14 04:01:39.698308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:20.860 [2024-07-14 04:01:39.711413] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ab8de0) 00:29:20.860 [2024-07-14 04:01:39.711451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.860 [2024-07-14 04:01:39.711471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:20.860 [2024-07-14 04:01:39.724583] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ab8de0) 00:29:20.860 [2024-07-14 04:01:39.724615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.860 [2024-07-14 04:01:39.724634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:20.860 [2024-07-14 04:01:39.737987] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ab8de0) 00:29:20.860 [2024-07-14 04:01:39.738014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.860 [2024-07-14 04:01:39.738046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:20.860 [2024-07-14 04:01:39.750980] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ab8de0) 00:29:20.860 [2024-07-14 04:01:39.751007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.860 [2024-07-14 04:01:39.751023] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:20.860 [2024-07-14 04:01:39.764386] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ab8de0) 00:29:20.860 [2024-07-14 04:01:39.764418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.860 [2024-07-14 04:01:39.764436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:20.860 [2024-07-14 04:01:39.777537] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ab8de0) 00:29:20.860 [2024-07-14 04:01:39.777569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.860 [2024-07-14 04:01:39.777588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:20.860 [2024-07-14 04:01:39.790760] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ab8de0) 00:29:20.860 [2024-07-14 04:01:39.790792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.860 [2024-07-14 04:01:39.790810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:21.119 [2024-07-14 04:01:39.803895] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ab8de0) 00:29:21.119 [2024-07-14 04:01:39.803942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.119 [2024-07-14 04:01:39.803958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:21.119 [2024-07-14 04:01:39.816961] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ab8de0) 00:29:21.119 [2024-07-14 04:01:39.817004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.119 [2024-07-14 04:01:39.817020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:21.119 [2024-07-14 04:01:39.830401] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ab8de0) 00:29:21.119 [2024-07-14 04:01:39.830433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.119 [2024-07-14 04:01:39.830452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:21.119 [2024-07-14 04:01:39.843694] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ab8de0) 00:29:21.119 [2024-07-14 04:01:39.843726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:21.119 [2024-07-14 04:01:39.843745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:21.119 [2024-07-14 04:01:39.857344] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ab8de0) 00:29:21.119 [2024-07-14 04:01:39.857378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.119 [2024-07-14 04:01:39.857398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:21.119 [2024-07-14 04:01:39.873212] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ab8de0) 00:29:21.119 [2024-07-14 04:01:39.873248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.119 [2024-07-14 04:01:39.873268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:21.119 [2024-07-14 04:01:39.886164] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ab8de0) 00:29:21.119 [2024-07-14 04:01:39.886210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.119 [2024-07-14 04:01:39.886230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:21.119 [2024-07-14 04:01:39.899532] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ab8de0) 00:29:21.119 [2024-07-14 04:01:39.899565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.119 [2024-07-14 04:01:39.899584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:21.119 [2024-07-14 04:01:39.912746] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ab8de0) 00:29:21.119 [2024-07-14 04:01:39.912778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.119 [2024-07-14 04:01:39.912797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:21.119 [2024-07-14 04:01:39.926167] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ab8de0) 00:29:21.119 [2024-07-14 04:01:39.926212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.119 [2024-07-14 04:01:39.926231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:21.119 [2024-07-14 04:01:39.939310] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ab8de0) 00:29:21.119 [2024-07-14 04:01:39.939348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.119 [2024-07-14 04:01:39.939367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:21.119 [2024-07-14 04:01:39.952551] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ab8de0) 00:29:21.119 [2024-07-14 04:01:39.952583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.119 [2024-07-14 04:01:39.952602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:21.119 [2024-07-14 04:01:39.966071] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ab8de0) 00:29:21.119 [2024-07-14 04:01:39.966098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.119 [2024-07-14 04:01:39.966129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:21.119 [2024-07-14 04:01:39.979523] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ab8de0) 00:29:21.119 [2024-07-14 04:01:39.979555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.119 [2024-07-14 04:01:39.979573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:21.119 [2024-07-14 04:01:39.992850] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ab8de0) 00:29:21.119 [2024-07-14 04:01:39.992890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.119 [2024-07-14 04:01:39.992923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:21.119 [2024-07-14 04:01:40.006141] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ab8de0) 00:29:21.119 [2024-07-14 04:01:40.006198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.119 [2024-07-14 04:01:40.006219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:21.119 [2024-07-14 04:01:40.021212] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ab8de0) 00:29:21.119 [2024-07-14 04:01:40.021290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.119 [2024-07-14 04:01:40.021325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:21.119 [2024-07-14 04:01:40.035591] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ab8de0) 00:29:21.119 [2024-07-14 04:01:40.035638] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.119 [2024-07-14 04:01:40.035668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:21.119 [2024-07-14 04:01:40.049655] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ab8de0) 00:29:21.119 [2024-07-14 04:01:40.049703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.119 [2024-07-14 04:01:40.049736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:21.377 [2024-07-14 04:01:40.062726] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ab8de0) 00:29:21.377 [2024-07-14 04:01:40.062763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.377 [2024-07-14 04:01:40.062783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:21.377 [2024-07-14 04:01:40.076128] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ab8de0) 00:29:21.377 [2024-07-14 04:01:40.076158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.377 [2024-07-14 04:01:40.076190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:21.378 [2024-07-14 04:01:40.089524] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ab8de0) 00:29:21.378 [2024-07-14 04:01:40.089557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.378 [2024-07-14 04:01:40.089576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:21.378 [2024-07-14 04:01:40.102845] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ab8de0) 00:29:21.378 [2024-07-14 04:01:40.102895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.378 [2024-07-14 04:01:40.102915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:21.378 [2024-07-14 04:01:40.116043] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ab8de0) 00:29:21.378 [2024-07-14 04:01:40.116070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.378 [2024-07-14 04:01:40.116086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:21.378 [2024-07-14 04:01:40.129489] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ab8de0) 
00:29:21.378 [2024-07-14 04:01:40.129521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.378 [2024-07-14 04:01:40.129539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:21.378 [2024-07-14 04:01:40.142460] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ab8de0) 00:29:21.378 [2024-07-14 04:01:40.142492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.378 [2024-07-14 04:01:40.142510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:21.378 [2024-07-14 04:01:40.155598] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ab8de0) 00:29:21.378 [2024-07-14 04:01:40.155631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.378 [2024-07-14 04:01:40.155650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:21.378 [2024-07-14 04:01:40.168686] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ab8de0) 00:29:21.378 [2024-07-14 04:01:40.168718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.378 [2024-07-14 04:01:40.168743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:21.378 [2024-07-14 04:01:40.182043] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ab8de0) 00:29:21.378 [2024-07-14 04:01:40.182071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.378 [2024-07-14 04:01:40.182103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:21.378 [2024-07-14 04:01:40.195073] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ab8de0) 00:29:21.378 [2024-07-14 04:01:40.195099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.378 [2024-07-14 04:01:40.195131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:21.378 [2024-07-14 04:01:40.209832] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ab8de0) 00:29:21.378 [2024-07-14 04:01:40.209873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.378 [2024-07-14 04:01:40.209895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:21.378 [2024-07-14 04:01:40.225202] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ab8de0) 00:29:21.378 [2024-07-14 04:01:40.225237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.378 [2024-07-14 04:01:40.225256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:21.378 [2024-07-14 04:01:40.240834] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ab8de0) 00:29:21.378 [2024-07-14 04:01:40.240874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.378 [2024-07-14 04:01:40.240909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:21.378 [2024-07-14 04:01:40.256619] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ab8de0) 00:29:21.378 [2024-07-14 04:01:40.256654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.378 [2024-07-14 04:01:40.256674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:21.378 [2024-07-14 04:01:40.271744] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ab8de0) 00:29:21.378 [2024-07-14 04:01:40.271779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.378 [2024-07-14 04:01:40.271799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:21.378 [2024-07-14 04:01:40.287232] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ab8de0) 00:29:21.378 [2024-07-14 04:01:40.287267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.378 [2024-07-14 04:01:40.287286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:21.378 [2024-07-14 04:01:40.302798] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ab8de0) 00:29:21.378 [2024-07-14 04:01:40.302838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.378 [2024-07-14 04:01:40.302858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:21.636 [2024-07-14 04:01:40.317690] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ab8de0) 00:29:21.636 [2024-07-14 04:01:40.317725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.636 [2024-07-14 04:01:40.317745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 
p:0 m:0 dnr:0 00:29:21.636 [2024-07-14 04:01:40.332381] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ab8de0) 00:29:21.636 [2024-07-14 04:01:40.332416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.636 [2024-07-14 04:01:40.332436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:21.636 [2024-07-14 04:01:40.346441] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ab8de0) 00:29:21.636 [2024-07-14 04:01:40.346475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.636 [2024-07-14 04:01:40.346494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:21.636 [2024-07-14 04:01:40.361826] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ab8de0) 00:29:21.637 [2024-07-14 04:01:40.361862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.637 [2024-07-14 04:01:40.361893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:21.637 [2024-07-14 04:01:40.375426] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ab8de0) 00:29:21.637 [2024-07-14 04:01:40.375461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.637 [2024-07-14 04:01:40.375480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:21.637 [2024-07-14 04:01:40.391635] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ab8de0) 00:29:21.637 [2024-07-14 04:01:40.391670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.637 [2024-07-14 04:01:40.391689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:21.637 [2024-07-14 04:01:40.405904] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ab8de0) 00:29:21.637 [2024-07-14 04:01:40.405945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.637 [2024-07-14 04:01:40.405963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:21.637 [2024-07-14 04:01:40.420715] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ab8de0) 00:29:21.637 [2024-07-14 04:01:40.420750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.637 [2024-07-14 04:01:40.420770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:21.637 [2024-07-14 04:01:40.434559] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ab8de0) 00:29:21.637 [2024-07-14 04:01:40.434593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.637 [2024-07-14 04:01:40.434612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:21.637 [2024-07-14 04:01:40.448553] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ab8de0) 00:29:21.637 [2024-07-14 04:01:40.448586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.637 [2024-07-14 04:01:40.448605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:21.637 [2024-07-14 04:01:40.461381] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ab8de0) 00:29:21.637 [2024-07-14 04:01:40.461414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.637 [2024-07-14 04:01:40.461433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:21.637 [2024-07-14 04:01:40.475026] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ab8de0) 00:29:21.637 [2024-07-14 04:01:40.475055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.637 [2024-07-14 04:01:40.475087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:21.637 [2024-07-14 04:01:40.489682] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ab8de0) 00:29:21.637 [2024-07-14 04:01:40.489716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.637 [2024-07-14 04:01:40.489735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:21.637 [2024-07-14 04:01:40.502396] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ab8de0) 00:29:21.637 [2024-07-14 04:01:40.502429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.637 [2024-07-14 04:01:40.502448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:21.637 [2024-07-14 04:01:40.515339] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ab8de0) 00:29:21.637 [2024-07-14 04:01:40.515371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.637 [2024-07-14 04:01:40.515390] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:21.637 [2024-07-14 04:01:40.529380] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ab8de0) 00:29:21.637 [2024-07-14 04:01:40.529414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.637 [2024-07-14 04:01:40.529433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:21.637 [2024-07-14 04:01:40.544230] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ab8de0) 00:29:21.637 [2024-07-14 04:01:40.544282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.637 [2024-07-14 04:01:40.544303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:21.637 [2024-07-14 04:01:40.558389] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ab8de0) 00:29:21.637 [2024-07-14 04:01:40.558421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.637 [2024-07-14 04:01:40.558440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:21.637 [2024-07-14 04:01:40.572232] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ab8de0) 00:29:21.637 [2024-07-14 04:01:40.572260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.637 [2024-07-14 04:01:40.572292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:21.895 [2024-07-14 04:01:40.586119] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ab8de0) 00:29:21.895 [2024-07-14 04:01:40.586148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.895 [2024-07-14 04:01:40.586164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:21.895 [2024-07-14 04:01:40.599711] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ab8de0) 00:29:21.895 [2024-07-14 04:01:40.599743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.895 [2024-07-14 04:01:40.599762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:21.895 [2024-07-14 04:01:40.613253] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ab8de0) 00:29:21.895 [2024-07-14 04:01:40.613285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:29:21.895 [2024-07-14 04:01:40.613304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:21.895 [2024-07-14 04:01:40.627553] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ab8de0) 00:29:21.895 [2024-07-14 04:01:40.627586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.895 [2024-07-14 04:01:40.627605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:21.895 [2024-07-14 04:01:40.641285] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ab8de0) 00:29:21.895 [2024-07-14 04:01:40.641317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.895 [2024-07-14 04:01:40.641337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:21.896 [2024-07-14 04:01:40.653783] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ab8de0) 00:29:21.896 [2024-07-14 04:01:40.653815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.896 [2024-07-14 04:01:40.653833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:21.896 [2024-07-14 04:01:40.667445] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ab8de0) 00:29:21.896 [2024-07-14 04:01:40.667477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.896 [2024-07-14 04:01:40.667497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:21.896 [2024-07-14 04:01:40.680374] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ab8de0) 00:29:21.896 [2024-07-14 04:01:40.680406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.896 [2024-07-14 04:01:40.680424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:21.896 [2024-07-14 04:01:40.694202] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ab8de0) 00:29:21.896 [2024-07-14 04:01:40.694246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.896 [2024-07-14 04:01:40.694264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:21.896 [2024-07-14 04:01:40.709269] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ab8de0) 00:29:21.896 [2024-07-14 04:01:40.709304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:15 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.896 [2024-07-14 04:01:40.709323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:21.896 [2024-07-14 04:01:40.722244] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ab8de0) 00:29:21.896 [2024-07-14 04:01:40.722277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.896 [2024-07-14 04:01:40.722296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:21.896 [2024-07-14 04:01:40.735403] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ab8de0) 00:29:21.896 [2024-07-14 04:01:40.735435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.896 [2024-07-14 04:01:40.735454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:21.896 [2024-07-14 04:01:40.748511] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ab8de0) 00:29:21.896 [2024-07-14 04:01:40.748542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.896 [2024-07-14 04:01:40.748561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:21.896 [2024-07-14 04:01:40.761966] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ab8de0) 00:29:21.896 [2024-07-14 04:01:40.761996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.896 [2024-07-14 04:01:40.762014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:21.896 [2024-07-14 04:01:40.777486] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ab8de0) 00:29:21.896 [2024-07-14 04:01:40.777520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.896 [2024-07-14 04:01:40.777545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:21.896 [2024-07-14 04:01:40.791408] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ab8de0) 00:29:21.896 [2024-07-14 04:01:40.791441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.896 [2024-07-14 04:01:40.791461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:21.896 [2024-07-14 04:01:40.804417] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ab8de0) 00:29:21.896 [2024-07-14 04:01:40.804449] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.896 [2024-07-14 04:01:40.804468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:21.896 [2024-07-14 04:01:40.817941] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ab8de0) 00:29:21.896 [2024-07-14 04:01:40.817968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.896 [2024-07-14 04:01:40.818000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:21.896 [2024-07-14 04:01:40.832655] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ab8de0) 00:29:21.896 [2024-07-14 04:01:40.832688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.896 [2024-07-14 04:01:40.832707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.154 [2024-07-14 04:01:40.847321] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ab8de0) 00:29:22.154 [2024-07-14 04:01:40.847354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.154 [2024-07-14 04:01:40.847374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:22.154 [2024-07-14 04:01:40.862086] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ab8de0) 00:29:22.154 [2024-07-14 04:01:40.862115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.154 [2024-07-14 04:01:40.862131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:22.154 [2024-07-14 04:01:40.876087] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ab8de0) 00:29:22.154 [2024-07-14 04:01:40.876116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.154 [2024-07-14 04:01:40.876149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:22.154 [2024-07-14 04:01:40.889410] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ab8de0) 00:29:22.154 [2024-07-14 04:01:40.889442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.154 [2024-07-14 04:01:40.889461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.154 [2024-07-14 04:01:40.902442] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1ab8de0) 00:29:22.154 [2024-07-14 04:01:40.902480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.154 [2024-07-14 04:01:40.902500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:22.154 [2024-07-14 04:01:40.915598] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ab8de0) 00:29:22.154 [2024-07-14 04:01:40.915630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.154 [2024-07-14 04:01:40.915649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:22.154 [2024-07-14 04:01:40.928654] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ab8de0) 00:29:22.154 [2024-07-14 04:01:40.928686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.154 [2024-07-14 04:01:40.928705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:22.154 [2024-07-14 04:01:40.941790] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ab8de0) 00:29:22.154 [2024-07-14 04:01:40.941821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.154 [2024-07-14 04:01:40.941840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.154 [2024-07-14 04:01:40.954970] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ab8de0) 00:29:22.154 [2024-07-14 04:01:40.954998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.154 [2024-07-14 04:01:40.955030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:22.154 [2024-07-14 04:01:40.968237] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ab8de0) 00:29:22.154 [2024-07-14 04:01:40.968268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.154 [2024-07-14 04:01:40.968286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:22.154 [2024-07-14 04:01:40.981522] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ab8de0) 00:29:22.154 [2024-07-14 04:01:40.981553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.154 [2024-07-14 04:01:40.981571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:22.154 [2024-07-14 04:01:40.994721] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ab8de0) 00:29:22.154 [2024-07-14 04:01:40.994752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.154 [2024-07-14 04:01:40.994771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.154 [2024-07-14 04:01:41.007921] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ab8de0) 00:29:22.154 [2024-07-14 04:01:41.007948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.155 [2024-07-14 04:01:41.007984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:22.155 [2024-07-14 04:01:41.021263] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ab8de0) 00:29:22.155 [2024-07-14 04:01:41.021295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.155 [2024-07-14 04:01:41.021313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:22.155 [2024-07-14 04:01:41.034628] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ab8de0) 00:29:22.155 [2024-07-14 04:01:41.034659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.155 [2024-07-14 04:01:41.034677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:22.155 [2024-07-14 04:01:41.048011] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ab8de0) 00:29:22.155 [2024-07-14 04:01:41.048037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.155 [2024-07-14 04:01:41.048069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.155 [2024-07-14 04:01:41.063046] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ab8de0) 00:29:22.155 [2024-07-14 04:01:41.063077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.155 [2024-07-14 04:01:41.063110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:22.155 [2024-07-14 04:01:41.076189] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ab8de0) 00:29:22.155 [2024-07-14 04:01:41.076235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.155 [2024-07-14 04:01:41.076254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 
p:0 m:0 dnr:0 00:29:22.155 [2024-07-14 04:01:41.089467] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ab8de0) 00:29:22.155 [2024-07-14 04:01:41.089499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.155 [2024-07-14 04:01:41.089518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:22.413 [2024-07-14 04:01:41.102495] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ab8de0) 00:29:22.413 [2024-07-14 04:01:41.102527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.413 [2024-07-14 04:01:41.102546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.413 [2024-07-14 04:01:41.115924] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ab8de0) 00:29:22.413 [2024-07-14 04:01:41.115952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.413 [2024-07-14 04:01:41.115983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:22.413 [2024-07-14 04:01:41.128809] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ab8de0) 00:29:22.413 [2024-07-14 04:01:41.128846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.413 [2024-07-14 04:01:41.128874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:22.413 [2024-07-14 04:01:41.142101] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ab8de0) 00:29:22.413 [2024-07-14 04:01:41.142129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.413 [2024-07-14 04:01:41.142161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:22.413 [2024-07-14 04:01:41.155124] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ab8de0) 00:29:22.413 [2024-07-14 04:01:41.155152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.413 [2024-07-14 04:01:41.155185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.413 [2024-07-14 04:01:41.168313] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ab8de0) 00:29:22.413 [2024-07-14 04:01:41.168345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.413 [2024-07-14 04:01:41.168363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:22.413 [2024-07-14 04:01:41.181508] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ab8de0) 00:29:22.413 [2024-07-14 04:01:41.181540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.413 [2024-07-14 04:01:41.181558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:22.413 [2024-07-14 04:01:41.194718] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ab8de0) 00:29:22.413 [2024-07-14 04:01:41.194749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.413 [2024-07-14 04:01:41.194768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:22.413 [2024-07-14 04:01:41.208043] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ab8de0) 00:29:22.413 [2024-07-14 04:01:41.208071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.413 [2024-07-14 04:01:41.208087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.413 [2024-07-14 04:01:41.221336] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ab8de0) 00:29:22.413 [2024-07-14 04:01:41.221368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.413 [2024-07-14 04:01:41.221386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:22.413 [2024-07-14 04:01:41.234826] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ab8de0) 00:29:22.413 [2024-07-14 04:01:41.234857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.413 [2024-07-14 04:01:41.234883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:22.413 [2024-07-14 04:01:41.247962] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ab8de0) 00:29:22.413 [2024-07-14 04:01:41.247989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.413 [2024-07-14 04:01:41.248005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:22.413 [2024-07-14 04:01:41.262673] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ab8de0) 00:29:22.413 [2024-07-14 04:01:41.262707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.414 [2024-07-14 04:01:41.262726] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.414 [2024-07-14 04:01:41.276602] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ab8de0) 00:29:22.414 [2024-07-14 04:01:41.276636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.414 [2024-07-14 04:01:41.276656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:22.414 [2024-07-14 04:01:41.291467] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ab8de0) 00:29:22.414 [2024-07-14 04:01:41.291501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.414 [2024-07-14 04:01:41.291520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:22.414 [2024-07-14 04:01:41.304745] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ab8de0) 00:29:22.414 [2024-07-14 04:01:41.304778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.414 [2024-07-14 04:01:41.304797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:22.414 [2024-07-14 04:01:41.317951] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ab8de0) 00:29:22.414 [2024-07-14 04:01:41.317978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.414 [2024-07-14 04:01:41.318010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.414 [2024-07-14 04:01:41.330953] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ab8de0) 00:29:22.414 [2024-07-14 04:01:41.330997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.414 [2024-07-14 04:01:41.331013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:22.414 [2024-07-14 04:01:41.344119] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ab8de0) 00:29:22.414 [2024-07-14 04:01:41.344148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.414 [2024-07-14 04:01:41.344181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:22.672 [2024-07-14 04:01:41.357239] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ab8de0) 00:29:22.672 [2024-07-14 04:01:41.357271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:29:22.672 [2024-07-14 04:01:41.357300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:22.672 [2024-07-14 04:01:41.370222] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ab8de0) 00:29:22.672 [2024-07-14 04:01:41.370269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.672 [2024-07-14 04:01:41.370288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.673 [2024-07-14 04:01:41.384724] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ab8de0) 00:29:22.673 [2024-07-14 04:01:41.384758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.673 [2024-07-14 04:01:41.384778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:22.673 [2024-07-14 04:01:41.400112] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ab8de0) 00:29:22.673 [2024-07-14 04:01:41.400157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.673 [2024-07-14 04:01:41.400177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:22.673 [2024-07-14 04:01:41.414647] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ab8de0) 00:29:22.673 [2024-07-14 04:01:41.414681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.673 [2024-07-14 04:01:41.414701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:22.673 [2024-07-14 04:01:41.429276] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ab8de0) 00:29:22.673 [2024-07-14 04:01:41.429310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.673 [2024-07-14 04:01:41.429330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.673 [2024-07-14 04:01:41.443683] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ab8de0) 00:29:22.673 [2024-07-14 04:01:41.443716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.673 [2024-07-14 04:01:41.443735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:22.673 [2024-07-14 04:01:41.457987] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ab8de0) 00:29:22.673 [2024-07-14 04:01:41.458032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:15 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.673 [2024-07-14 04:01:41.458049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:22.673 [2024-07-14 04:01:41.470981] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ab8de0) 00:29:22.673 [2024-07-14 04:01:41.471009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.673 [2024-07-14 04:01:41.471041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:22.673 [2024-07-14 04:01:41.484356] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ab8de0) 00:29:22.673 [2024-07-14 04:01:41.484389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.673 [2024-07-14 04:01:41.484408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.673 [2024-07-14 04:01:41.497307] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ab8de0) 00:29:22.673 [2024-07-14 04:01:41.497339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.673 [2024-07-14 04:01:41.497358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:22.673 [2024-07-14 04:01:41.510350] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ab8de0) 00:29:22.673 [2024-07-14 04:01:41.510382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.673 [2024-07-14 04:01:41.510401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:22.673 [2024-07-14 04:01:41.522999] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ab8de0) 00:29:22.673 [2024-07-14 04:01:41.523026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.673 [2024-07-14 04:01:41.523042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:22.673 [2024-07-14 04:01:41.536194] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ab8de0) 00:29:22.673 [2024-07-14 04:01:41.536243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.673 [2024-07-14 04:01:41.536264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.673 [2024-07-14 04:01:41.549196] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ab8de0) 00:29:22.673 [2024-07-14 04:01:41.549240] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.673 [2024-07-14 04:01:41.549260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:22.673 [2024-07-14 04:01:41.562305] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ab8de0) 00:29:22.673 [2024-07-14 04:01:41.562338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.673 [2024-07-14 04:01:41.562357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:22.673 00:29:22.673 Latency(us) 00:29:22.673 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:22.673 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:29:22.673 nvme0n1 : 2.00 2270.00 283.75 0.00 0.00 7043.89 5825.42 16019.91 00:29:22.673 =================================================================================================================== 00:29:22.673 Total : 2270.00 283.75 0.00 0.00 7043.89 5825.42 16019.91 00:29:22.673 0 00:29:22.673 04:01:41 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:29:22.673 04:01:41 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:29:22.673 04:01:41 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:29:22.673 | .driver_specific 00:29:22.673 | .nvme_error 00:29:22.673 | .status_code 00:29:22.673 | .command_transient_transport_error' 00:29:22.673 04:01:41 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:29:22.930 04:01:41 -- host/digest.sh@71 -- # (( 146 > 0 )) 00:29:22.930 04:01:41 -- host/digest.sh@73 -- # killprocess 2503147 00:29:22.930 04:01:41 -- common/autotest_common.sh@926 -- # '[' -z 2503147 ']' 00:29:22.930 04:01:41 -- common/autotest_common.sh@930 -- # kill -0 2503147 00:29:22.930 04:01:41 -- common/autotest_common.sh@931 -- # uname 00:29:22.931 04:01:41 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:29:22.931 04:01:41 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2503147 00:29:22.931 04:01:41 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:29:22.931 04:01:41 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:29:22.931 04:01:41 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2503147' 00:29:22.931 killing process with pid 2503147 00:29:22.931 04:01:41 -- common/autotest_common.sh@945 -- # kill 2503147 00:29:22.931 Received shutdown signal, test time was about 2.000000 seconds 00:29:22.931 00:29:22.931 Latency(us) 00:29:22.931 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:22.931 =================================================================================================================== 00:29:22.931 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:22.931 04:01:41 -- common/autotest_common.sh@950 -- # wait 2503147 00:29:23.188 04:01:42 -- host/digest.sh@113 -- # run_bperf_err randwrite 4096 128 00:29:23.188 04:01:42 -- host/digest.sh@54 -- # local rw bs qd 00:29:23.188 04:01:42 -- host/digest.sh@56 -- # rw=randwrite 00:29:23.188 04:01:42 -- host/digest.sh@56 -- # bs=4096 00:29:23.188 04:01:42 -- host/digest.sh@56 -- # qd=128 
00:29:23.188 04:01:42 -- host/digest.sh@58 -- # bperfpid=2503698 00:29:23.188 04:01:42 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:29:23.188 04:01:42 -- host/digest.sh@60 -- # waitforlisten 2503698 /var/tmp/bperf.sock 00:29:23.188 04:01:42 -- common/autotest_common.sh@819 -- # '[' -z 2503698 ']' 00:29:23.188 04:01:42 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:23.188 04:01:42 -- common/autotest_common.sh@824 -- # local max_retries=100 00:29:23.188 04:01:42 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:23.189 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:23.189 04:01:42 -- common/autotest_common.sh@828 -- # xtrace_disable 00:29:23.189 04:01:42 -- common/autotest_common.sh@10 -- # set +x 00:29:23.189 [2024-07-14 04:01:42.098015] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:29:23.189 [2024-07-14 04:01:42.098093] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2503698 ] 00:29:23.189 EAL: No free 2048 kB hugepages reported on node 1 00:29:23.447 [2024-07-14 04:01:42.157276] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:23.447 [2024-07-14 04:01:42.245389] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:24.380 04:01:43 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:29:24.380 04:01:43 -- common/autotest_common.sh@852 -- # return 0 00:29:24.380 04:01:43 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:24.380 04:01:43 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:24.637 04:01:43 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:29:24.637 04:01:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:24.637 04:01:43 -- common/autotest_common.sh@10 -- # set +x 00:29:24.637 04:01:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:24.637 04:01:43 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:24.637 04:01:43 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:25.202 nvme0n1 00:29:25.202 04:01:43 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:29:25.202 04:01:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:25.202 04:01:43 -- common/autotest_common.sh@10 -- # set +x 00:29:25.202 04:01:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:25.202 04:01:43 -- host/digest.sh@69 -- # bperf_py perform_tests 00:29:25.202 04:01:43 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:25.202 Running I/O for 2 seconds... 
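The randread leg above closes with its summary (2270 IOPS of 128 KiB random reads over the 2.00 s run, i.e. 283.75 MiB/s, average completion latency ~7044 us) and then verifies that the injected digest failures actually surfaced: host/digest.sh@71 reads the per-bdev NVMe error counters over the bperf RPC socket, finds 146 TRANSIENT TRANSPORT ERROR completions, and kills the bdevperf process. A minimal stand-alone sketch of that check, using only the rpc.py call and jq filter visible in the trace (socket path and script location are the ones from this job; wrapping them in a function is just for illustration), could look like:

    # Sketch of get_transient_errcount from host/digest.sh; assumes the
    # controller was attached with "bdev_nvme_set_options --nvme-error-stat"
    # so bdev_get_iostat carries the nvme_error counters.
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    SOCK=/var/tmp/bperf.sock
    get_transient_errcount() {
        "$RPC" -s "$SOCK" bdev_get_iostat -b "$1" \
            | jq -r '.bdevs[0]
                     | .driver_specific
                     | .nvme_error
                     | .status_code
                     | .command_transient_transport_error'
    }
    errs=$(get_transient_errcount nvme0n1)
    (( errs > 0 )) || echo "no transient transport errors recorded for nvme0n1" >&2

The randwrite leg (run_bperf_err randwrite 4096 128) that starts next repeats the same wiring: bdevperf is launched on its own RPC socket, NVMe error statistics and unlimited bdev retries are enabled, any previous crc32c error injection is cleared while the controller is attached with data digest enabled (--ddgst), and only then is the accel crc32c operation told to corrupt 256 results (issued via rpc_cmd, i.e. against the target application rather than the bperf socket), so the writes that follow complete with TRANSIENT TRANSPORT ERROR. A condensed sketch of that sequence, built only from the commands shown in the trace (the waitforlisten step and the initial "-t disable" injection reset are elided), could be:

    # Sketch of the randwrite error-injection setup from host/digest.sh@113.
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    SOCK=/var/tmp/bperf.sock
    # 4 KiB random writes, queue depth 128, 2 s run, wait for RPC configuration (-z).
    "$SPDK/build/examples/bdevperf" -m 2 -r "$SOCK" -w randwrite -o 4096 -t 2 -q 128 -z &
    # Keep per-command NVMe error counters and retry failed I/O indefinitely.
    "$SPDK/scripts/rpc.py" -s "$SOCK" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    # Attach the TCP controller with data digest enabled.
    "$SPDK/scripts/rpc.py" -s "$SOCK" bdev_nvme_attach_controller --ddgst \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # Arm crc32c corruption for 256 operations (assumed to go to the target's
    # default RPC socket, as rpc_cmd does), then drive the workload.
    "$SPDK/scripts/rpc.py" accel_error_inject_error -o crc32c -t corrupt -i 256
    "$SPDK/examples/bdev/bdevperf/bdevperf.py" -s "$SOCK" perform_tests

Each data digest error logged by tcp.c below then corresponds to one WRITE completion with status TRANSIENT TRANSPORT ERROR (00/22), which is the same counter the digest.sh check reads at the end of each leg.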
00:29:25.202 [2024-07-14 04:01:43.995584] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56260) with pdu=0x2000190fda78 00:29:25.202 [2024-07-14 04:01:43.995917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:1447 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.202 [2024-07-14 04:01:43.995962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:25.202 [2024-07-14 04:01:44.008315] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56260) with pdu=0x2000190fda78 00:29:25.202 [2024-07-14 04:01:44.008640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:1307 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.202 [2024-07-14 04:01:44.008671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:25.202 [2024-07-14 04:01:44.021312] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56260) with pdu=0x2000190fda78 00:29:25.202 [2024-07-14 04:01:44.021583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:15731 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.202 [2024-07-14 04:01:44.021613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:25.202 [2024-07-14 04:01:44.034114] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56260) with pdu=0x2000190fda78 00:29:25.202 [2024-07-14 04:01:44.034385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:17547 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.202 [2024-07-14 04:01:44.034429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:25.202 [2024-07-14 04:01:44.046904] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56260) with pdu=0x2000190fda78 00:29:25.202 [2024-07-14 04:01:44.047208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:14937 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.202 [2024-07-14 04:01:44.047236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:25.202 [2024-07-14 04:01:44.059704] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56260) with pdu=0x2000190fda78 00:29:25.202 [2024-07-14 04:01:44.059970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:19365 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.202 [2024-07-14 04:01:44.059999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:25.202 [2024-07-14 04:01:44.072404] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56260) with pdu=0x2000190fda78 00:29:25.202 [2024-07-14 04:01:44.072692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:19156 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.202 [2024-07-14 04:01:44.072722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 
sqhd:007b p:0 m:0 dnr:0 00:29:25.202 [2024-07-14 04:01:44.085155] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56260) with pdu=0x2000190fda78 00:29:25.202 [2024-07-14 04:01:44.085496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:14716 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.202 [2024-07-14 04:01:44.085525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:25.202 [2024-07-14 04:01:44.098501] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56260) with pdu=0x2000190fda78 00:29:25.202 [2024-07-14 04:01:44.098812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:23742 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.202 [2024-07-14 04:01:44.098843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:25.202 [2024-07-14 04:01:44.112042] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56260) with pdu=0x2000190fda78 00:29:25.202 [2024-07-14 04:01:44.112335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:314 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.202 [2024-07-14 04:01:44.112378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:25.202 [2024-07-14 04:01:44.125651] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56260) with pdu=0x2000190fda78 00:29:25.202 [2024-07-14 04:01:44.125970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:1661 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.202 [2024-07-14 04:01:44.125997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:25.202 [2024-07-14 04:01:44.139365] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56260) with pdu=0x2000190fda78 00:29:25.202 [2024-07-14 04:01:44.139693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:24785 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.202 [2024-07-14 04:01:44.139725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:25.460 [2024-07-14 04:01:44.153376] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56260) with pdu=0x2000190fda78 00:29:25.460 [2024-07-14 04:01:44.153688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:4922 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.460 [2024-07-14 04:01:44.153719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:25.460 [2024-07-14 04:01:44.166824] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56260) with pdu=0x2000190fda78 00:29:25.460 [2024-07-14 04:01:44.167112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:25255 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.460 [2024-07-14 04:01:44.167155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:80 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:25.460 [2024-07-14 04:01:44.180202] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56260) with pdu=0x2000190fda78 00:29:25.460 [2024-07-14 04:01:44.180484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:21000 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.460 [2024-07-14 04:01:44.180515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:25.460 [2024-07-14 04:01:44.193712] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56260) with pdu=0x2000190fda78 00:29:25.460 [2024-07-14 04:01:44.194022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:22093 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.460 [2024-07-14 04:01:44.194055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:25.460 [2024-07-14 04:01:44.207329] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56260) with pdu=0x2000190fda78 00:29:25.460 [2024-07-14 04:01:44.207638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:16413 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.460 [2024-07-14 04:01:44.207670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:25.460 [2024-07-14 04:01:44.220845] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56260) with pdu=0x2000190fda78 00:29:25.460 [2024-07-14 04:01:44.221158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:5162 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.460 [2024-07-14 04:01:44.221190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:25.460 [2024-07-14 04:01:44.234387] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56260) with pdu=0x2000190fda78 00:29:25.460 [2024-07-14 04:01:44.234691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:20990 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.460 [2024-07-14 04:01:44.234722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:25.460 [2024-07-14 04:01:44.247881] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56260) with pdu=0x2000190fda78 00:29:25.460 [2024-07-14 04:01:44.248217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:2090 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.460 [2024-07-14 04:01:44.248248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:25.460 [2024-07-14 04:01:44.261320] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56260) with pdu=0x2000190fda78 00:29:25.460 [2024-07-14 04:01:44.261635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:13227 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.460 [2024-07-14 04:01:44.261665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:25.460 [2024-07-14 04:01:44.274872] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56260) with pdu=0x2000190fda78 00:29:25.460 [2024-07-14 04:01:44.275144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:16241 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.461 [2024-07-14 04:01:44.275189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:25.461 [2024-07-14 04:01:44.288309] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56260) with pdu=0x2000190fda78 00:29:25.461 [2024-07-14 04:01:44.288588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:10200 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.461 [2024-07-14 04:01:44.288619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:25.461 [2024-07-14 04:01:44.301734] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56260) with pdu=0x2000190fda78 00:29:25.461 [2024-07-14 04:01:44.302034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:4361 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.461 [2024-07-14 04:01:44.302061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:25.461 [2024-07-14 04:01:44.315569] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56260) with pdu=0x2000190fda78 00:29:25.461 [2024-07-14 04:01:44.315893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:6676 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.461 [2024-07-14 04:01:44.315936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:25.461 [2024-07-14 04:01:44.329058] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56260) with pdu=0x2000190fda78 00:29:25.461 [2024-07-14 04:01:44.329380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:21911 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.461 [2024-07-14 04:01:44.329411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:25.461 [2024-07-14 04:01:44.342563] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56260) with pdu=0x2000190fda78 00:29:25.461 [2024-07-14 04:01:44.342879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:4095 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.461 [2024-07-14 04:01:44.342924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:25.461 [2024-07-14 04:01:44.355989] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56260) with pdu=0x2000190fda78 00:29:25.461 [2024-07-14 04:01:44.356315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:515 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.461 [2024-07-14 04:01:44.356345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:25.461 [2024-07-14 04:01:44.369442] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56260) with pdu=0x2000190fda78 00:29:25.461 [2024-07-14 04:01:44.369754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:16759 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.461 [2024-07-14 04:01:44.369784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:25.461 [2024-07-14 04:01:44.382859] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56260) with pdu=0x2000190fda78 00:29:25.461 [2024-07-14 04:01:44.383206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:15661 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.461 [2024-07-14 04:01:44.383237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:25.461 [2024-07-14 04:01:44.396394] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56260) with pdu=0x2000190fda78 00:29:25.461 [2024-07-14 04:01:44.396682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:8141 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.461 [2024-07-14 04:01:44.396713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:25.720 [2024-07-14 04:01:44.410395] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56260) with pdu=0x2000190fda78 00:29:25.720 [2024-07-14 04:01:44.410674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:11418 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.720 [2024-07-14 04:01:44.410705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:25.720 [2024-07-14 04:01:44.423839] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56260) with pdu=0x2000190fda78 00:29:25.720 [2024-07-14 04:01:44.424231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:1001 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.720 [2024-07-14 04:01:44.424263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:25.720 [2024-07-14 04:01:44.437335] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56260) with pdu=0x2000190fda78 00:29:25.720 [2024-07-14 04:01:44.437623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:4141 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.720 [2024-07-14 04:01:44.437654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:25.720 [2024-07-14 04:01:44.450816] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56260) with pdu=0x2000190fda78 00:29:25.720 [2024-07-14 04:01:44.451129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:19339 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.720 [2024-07-14 04:01:44.451157] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:25.720 [2024-07-14 04:01:44.464311] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56260) with pdu=0x2000190fda78 00:29:25.720 [2024-07-14 04:01:44.464618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:18631 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.720 [2024-07-14 04:01:44.464649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:25.720 [2024-07-14 04:01:44.477822] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56260) with pdu=0x2000190fda78 00:29:25.720 [2024-07-14 04:01:44.478112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:7532 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.720 [2024-07-14 04:01:44.478140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:25.720 [2024-07-14 04:01:44.491207] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56260) with pdu=0x2000190fda78 00:29:25.720 [2024-07-14 04:01:44.491511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:23047 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.720 [2024-07-14 04:01:44.491542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:25.720 [2024-07-14 04:01:44.504752] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56260) with pdu=0x2000190fda78 00:29:25.720 [2024-07-14 04:01:44.505061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:11380 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.720 [2024-07-14 04:01:44.505089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:25.720 [2024-07-14 04:01:44.518117] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56260) with pdu=0x2000190fda78 00:29:25.720 [2024-07-14 04:01:44.518435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:750 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.720 [2024-07-14 04:01:44.518466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:25.720 [2024-07-14 04:01:44.531572] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56260) with pdu=0x2000190fda78 00:29:25.720 [2024-07-14 04:01:44.531905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:3566 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.720 [2024-07-14 04:01:44.531933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:25.720 [2024-07-14 04:01:44.545002] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56260) with pdu=0x2000190fda78 00:29:25.720 [2024-07-14 04:01:44.545318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:8599 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.720 [2024-07-14 04:01:44.545356] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:25.720 [2024-07-14 04:01:44.558489] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56260) with pdu=0x2000190fda78 00:29:25.720 [2024-07-14 04:01:44.558796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:10929 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.720 [2024-07-14 04:01:44.558828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:25.720 [2024-07-14 04:01:44.572056] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56260) with pdu=0x2000190fda78 00:29:25.720 [2024-07-14 04:01:44.572380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:12772 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.720 [2024-07-14 04:01:44.572410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:25.720 [2024-07-14 04:01:44.585512] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56260) with pdu=0x2000190fda78 00:29:25.720 [2024-07-14 04:01:44.585828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:11090 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.720 [2024-07-14 04:01:44.585859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:25.720 [2024-07-14 04:01:44.599005] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56260) with pdu=0x2000190fda78 00:29:25.720 [2024-07-14 04:01:44.599331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:2861 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.720 [2024-07-14 04:01:44.599362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:25.720 [2024-07-14 04:01:44.612416] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56260) with pdu=0x2000190fda78 00:29:25.720 [2024-07-14 04:01:44.612696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:2906 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.720 [2024-07-14 04:01:44.612726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:25.720 [2024-07-14 04:01:44.625972] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56260) with pdu=0x2000190fda78 00:29:25.720 [2024-07-14 04:01:44.626287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:7408 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.720 [2024-07-14 04:01:44.626318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:25.720 [2024-07-14 04:01:44.639478] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56260) with pdu=0x2000190fda78 00:29:25.720 [2024-07-14 04:01:44.639783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:7019 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.720 [2024-07-14 
04:01:44.639814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:25.720 [2024-07-14 04:01:44.652891] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56260) with pdu=0x2000190fda78 00:29:25.720 [2024-07-14 04:01:44.653204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:6788 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.720 [2024-07-14 04:01:44.653231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:25.979 [2024-07-14 04:01:44.667038] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56260) with pdu=0x2000190fda78 00:29:25.979 [2024-07-14 04:01:44.667363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:9850 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.979 [2024-07-14 04:01:44.667394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:25.979 [2024-07-14 04:01:44.680644] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56260) with pdu=0x2000190fda78 00:29:25.979 [2024-07-14 04:01:44.680971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:9670 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.979 [2024-07-14 04:01:44.681000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:25.979 [2024-07-14 04:01:44.694079] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56260) with pdu=0x2000190fda78 00:29:25.979 [2024-07-14 04:01:44.694379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:4601 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.979 [2024-07-14 04:01:44.694410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:25.979 [2024-07-14 04:01:44.707647] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56260) with pdu=0x2000190fda78 00:29:25.979 [2024-07-14 04:01:44.707989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:5321 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.979 [2024-07-14 04:01:44.708018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:25.979 [2024-07-14 04:01:44.721178] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56260) with pdu=0x2000190fda78 00:29:25.979 [2024-07-14 04:01:44.721465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:1290 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.979 [2024-07-14 04:01:44.721495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:25.979 [2024-07-14 04:01:44.734724] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56260) with pdu=0x2000190fda78 00:29:25.979 [2024-07-14 04:01:44.735008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:13964 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.979 
[2024-07-14 04:01:44.735035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:25.979 [2024-07-14 04:01:44.748214] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56260) with pdu=0x2000190fda78 00:29:25.979 [2024-07-14 04:01:44.748519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:17070 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.979 [2024-07-14 04:01:44.748550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:25.979 [2024-07-14 04:01:44.761688] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56260) with pdu=0x2000190fda78 00:29:25.979 [2024-07-14 04:01:44.761997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:1693 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.979 [2024-07-14 04:01:44.762026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:25.979 [2024-07-14 04:01:44.774412] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56260) with pdu=0x2000190fda78 00:29:25.979 [2024-07-14 04:01:44.774707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:3828 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.979 [2024-07-14 04:01:44.774735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:25.979 [2024-07-14 04:01:44.786709] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56260) with pdu=0x2000190fda78 00:29:25.979 [2024-07-14 04:01:44.787083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:18390 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.979 [2024-07-14 04:01:44.787111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:25.979 [2024-07-14 04:01:44.799040] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56260) with pdu=0x2000190fda78 00:29:25.979 [2024-07-14 04:01:44.799365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:14123 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.979 [2024-07-14 04:01:44.799392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:25.979 [2024-07-14 04:01:44.811442] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56260) with pdu=0x2000190fda78 00:29:25.979 [2024-07-14 04:01:44.811729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:12810 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.979 [2024-07-14 04:01:44.811757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:25.979 [2024-07-14 04:01:44.823574] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56260) with pdu=0x2000190fda78 00:29:25.979 [2024-07-14 04:01:44.823876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:273 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:29:25.979 [2024-07-14 04:01:44.823903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:25.979 [2024-07-14 04:01:44.836064] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56260) with pdu=0x2000190fda78 00:29:25.979 [2024-07-14 04:01:44.836382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:12973 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.980 [2024-07-14 04:01:44.836410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:25.980 [2024-07-14 04:01:44.848403] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56260) with pdu=0x2000190fda78 00:29:25.980 [2024-07-14 04:01:44.848654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:19714 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.980 [2024-07-14 04:01:44.848681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:25.980 [2024-07-14 04:01:44.860573] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56260) with pdu=0x2000190fda78 00:29:25.980 [2024-07-14 04:01:44.860901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:7310 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.980 [2024-07-14 04:01:44.860928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:25.980 [2024-07-14 04:01:44.873008] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56260) with pdu=0x2000190fda78 00:29:25.980 [2024-07-14 04:01:44.873260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:2859 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.980 [2024-07-14 04:01:44.873287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:25.980 [2024-07-14 04:01:44.885203] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56260) with pdu=0x2000190fda78 00:29:25.980 [2024-07-14 04:01:44.885516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:25219 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.980 [2024-07-14 04:01:44.885549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:25.980 [2024-07-14 04:01:44.897480] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56260) with pdu=0x2000190fda78 00:29:25.980 [2024-07-14 04:01:44.897795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:14261 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.980 [2024-07-14 04:01:44.897823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:25.980 [2024-07-14 04:01:44.909872] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56260) with pdu=0x2000190fda78 00:29:25.980 [2024-07-14 04:01:44.910157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:1304 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:29:25.980 [2024-07-14 04:01:44.910199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:26.238 [2024-07-14 04:01:44.923453] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56260) with pdu=0x2000190fda78 00:29:26.238 [2024-07-14 04:01:44.923736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:16890 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:26.238 [2024-07-14 04:01:44.923767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:26.238 [2024-07-14 04:01:44.937088] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56260) with pdu=0x2000190fda78 00:29:26.238 [2024-07-14 04:01:44.937402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:1856 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:26.238 [2024-07-14 04:01:44.937433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:26.238 [2024-07-14 04:01:44.950655] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56260) with pdu=0x2000190fda78 00:29:26.238 [2024-07-14 04:01:44.950958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:16053 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:26.238 [2024-07-14 04:01:44.950987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:26.238 [2024-07-14 04:01:44.964225] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56260) with pdu=0x2000190fda78 00:29:26.239 [2024-07-14 04:01:44.964507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:23045 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:26.239 [2024-07-14 04:01:44.964539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:26.239 [2024-07-14 04:01:44.976840] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56260) with pdu=0x2000190fda78 00:29:26.239 [2024-07-14 04:01:44.977116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:22826 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:26.239 [2024-07-14 04:01:44.977144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:26.239 [2024-07-14 04:01:44.989662] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56260) with pdu=0x2000190fda78 00:29:26.239 [2024-07-14 04:01:44.989965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:12087 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:26.239 [2024-07-14 04:01:44.989994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:26.239 [2024-07-14 04:01:45.003318] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56260) with pdu=0x2000190fda78 00:29:26.239 [2024-07-14 04:01:45.003645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 
lba:16226 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:26.239 [2024-07-14 04:01:45.003678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:26.239 [2024-07-14 04:01:45.016720] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56260) with pdu=0x2000190fda78 00:29:26.239 [2024-07-14 04:01:45.017079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:17186 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:26.239 [2024-07-14 04:01:45.017108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:26.239 [2024-07-14 04:01:45.030340] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56260) with pdu=0x2000190fda78 00:29:26.239 [2024-07-14 04:01:45.030619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:16722 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:26.239 [2024-07-14 04:01:45.030649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:26.239 [2024-07-14 04:01:45.044010] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56260) with pdu=0x2000190fda78 00:29:26.239 [2024-07-14 04:01:45.044327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:22833 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:26.239 [2024-07-14 04:01:45.044358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:26.239 [2024-07-14 04:01:45.057641] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56260) with pdu=0x2000190fda78 00:29:26.239 [2024-07-14 04:01:45.057943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:17168 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:26.239 [2024-07-14 04:01:45.057986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:26.239 [2024-07-14 04:01:45.071172] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56260) with pdu=0x2000190fda78 00:29:26.239 [2024-07-14 04:01:45.071494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:22444 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:26.239 [2024-07-14 04:01:45.071524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:26.239 [2024-07-14 04:01:45.084693] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56260) with pdu=0x2000190fda78 00:29:26.239 [2024-07-14 04:01:45.085005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:6654 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:26.239 [2024-07-14 04:01:45.085033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:26.239 [2024-07-14 04:01:45.097908] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56260) with pdu=0x2000190fda78 00:29:26.239 [2024-07-14 04:01:45.098235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:48 nsid:1 lba:21088 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:26.239 [2024-07-14 04:01:45.098263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:26.239 [2024-07-14 04:01:45.111445] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56260) with pdu=0x2000190fda78 00:29:26.239 [2024-07-14 04:01:45.111753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:10495 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:26.239 [2024-07-14 04:01:45.111784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:26.239 [2024-07-14 04:01:45.125073] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56260) with pdu=0x2000190fda78 00:29:26.239 [2024-07-14 04:01:45.125408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:2130 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:26.239 [2024-07-14 04:01:45.125439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:26.239 [2024-07-14 04:01:45.138684] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56260) with pdu=0x2000190fda78 00:29:26.239 [2024-07-14 04:01:45.138979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:18340 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:26.239 [2024-07-14 04:01:45.139007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:26.239 [2024-07-14 04:01:45.152215] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56260) with pdu=0x2000190fda78 00:29:26.239 [2024-07-14 04:01:45.152524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:3180 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:26.239 [2024-07-14 04:01:45.152555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:26.239 [2024-07-14 04:01:45.165691] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56260) with pdu=0x2000190fda78 00:29:26.239 [2024-07-14 04:01:45.166005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:14432 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:26.239 [2024-07-14 04:01:45.166034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:26.498 [2024-07-14 04:01:45.179756] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56260) with pdu=0x2000190fda78 00:29:26.498 [2024-07-14 04:01:45.180079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:6449 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:26.498 [2024-07-14 04:01:45.180108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:26.498 [2024-07-14 04:01:45.193394] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56260) with pdu=0x2000190fda78 00:29:26.498 [2024-07-14 04:01:45.193708] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:17841 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:26.498 [2024-07-14 04:01:45.193738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:26.498 [2024-07-14 04:01:45.206856] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56260) with pdu=0x2000190fda78 00:29:26.498 [2024-07-14 04:01:45.207219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:19305 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:26.498 [2024-07-14 04:01:45.207250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:26.498 [2024-07-14 04:01:45.220415] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56260) with pdu=0x2000190fda78 00:29:26.498 [2024-07-14 04:01:45.220722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:599 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:26.498 [2024-07-14 04:01:45.220753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:26.498 [2024-07-14 04:01:45.233873] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56260) with pdu=0x2000190fda78 00:29:26.498 [2024-07-14 04:01:45.234222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:18389 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:26.498 [2024-07-14 04:01:45.234253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:26.498 [2024-07-14 04:01:45.247418] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56260) with pdu=0x2000190fda78 00:29:26.498 [2024-07-14 04:01:45.247694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:3197 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:26.498 [2024-07-14 04:01:45.247725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:26.498 [2024-07-14 04:01:45.260844] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56260) with pdu=0x2000190fda78 00:29:26.498 [2024-07-14 04:01:45.261149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:2013 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:26.498 [2024-07-14 04:01:45.261193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:26.498 [2024-07-14 04:01:45.274344] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56260) with pdu=0x2000190fda78 00:29:26.498 [2024-07-14 04:01:45.274656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:25587 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:26.498 [2024-07-14 04:01:45.274687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:26.498 [2024-07-14 04:01:45.287638] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56260) with pdu=0x2000190fda78 00:29:26.498 [2024-07-14 04:01:45.287960] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:3390 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:26.498 [2024-07-14 04:01:45.288003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:26.498 [2024-07-14 04:01:45.301115] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56260) with pdu=0x2000190fda78 00:29:26.498 [2024-07-14 04:01:45.301439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:22353 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:26.498 [2024-07-14 04:01:45.301469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:26.498 [2024-07-14 04:01:45.314889] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56260) with pdu=0x2000190fda78 00:29:26.498 [2024-07-14 04:01:45.315228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:9406 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:26.498 [2024-07-14 04:01:45.315259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:26.498 [2024-07-14 04:01:45.328396] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56260) with pdu=0x2000190fda78 00:29:26.498 [2024-07-14 04:01:45.328705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:16438 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:26.498 [2024-07-14 04:01:45.328735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:26.498 [2024-07-14 04:01:45.341861] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56260) with pdu=0x2000190fda78 00:29:26.498 [2024-07-14 04:01:45.342222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:22487 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:26.498 [2024-07-14 04:01:45.342253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:26.498 [2024-07-14 04:01:45.355376] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56260) with pdu=0x2000190fda78 00:29:26.498 [2024-07-14 04:01:45.355683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:15976 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:26.498 [2024-07-14 04:01:45.355719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:26.498 [2024-07-14 04:01:45.368808] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56260) with pdu=0x2000190fda78 00:29:26.498 [2024-07-14 04:01:45.369147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:368 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:26.498 [2024-07-14 04:01:45.369175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:26.498 [2024-07-14 04:01:45.382331] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56260) with pdu=0x2000190fda78 00:29:26.498 [2024-07-14 04:01:45.382639] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:9746 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:26.498 [2024-07-14 04:01:45.382669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:26.498 [2024-07-14 04:01:45.395843] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56260) with pdu=0x2000190fda78 00:29:26.498 [2024-07-14 04:01:45.396155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:11649 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:26.498 [2024-07-14 04:01:45.396199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:26.498 [2024-07-14 04:01:45.409317] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56260) with pdu=0x2000190fda78 00:29:26.498 [2024-07-14 04:01:45.409630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:16619 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:26.498 [2024-07-14 04:01:45.409661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:26.498 [2024-07-14 04:01:45.422574] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56260) with pdu=0x2000190fda78 00:29:26.498 [2024-07-14 04:01:45.422891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:14282 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:26.498 [2024-07-14 04:01:45.422934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:26.498 [2024-07-14 04:01:45.436258] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56260) with pdu=0x2000190fda78 00:29:26.498 [2024-07-14 04:01:45.436573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:10856 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:26.498 [2024-07-14 04:01:45.436603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:26.757 [2024-07-14 04:01:45.450104] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56260) with pdu=0x2000190fda78 00:29:26.757 [2024-07-14 04:01:45.450396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:6533 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:26.757 [2024-07-14 04:01:45.450426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:26.757 [2024-07-14 04:01:45.463690] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56260) with pdu=0x2000190fda78 00:29:26.757 [2024-07-14 04:01:45.464007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:21234 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:26.757 [2024-07-14 04:01:45.464036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:26.757 [2024-07-14 04:01:45.477169] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56260) with pdu=0x2000190fda78 00:29:26.757 [2024-07-14 
04:01:45.477472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:1552 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:26.757 [2024-07-14 04:01:45.477502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:26.757 [2024-07-14 04:01:45.490638] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56260) with pdu=0x2000190fda78 00:29:26.757 [2024-07-14 04:01:45.490965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:23051 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:26.757 [2024-07-14 04:01:45.490993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:26.757 [2024-07-14 04:01:45.504124] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56260) with pdu=0x2000190fda78 00:29:26.757 [2024-07-14 04:01:45.504477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:23204 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:26.757 [2024-07-14 04:01:45.504508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:26.757 [2024-07-14 04:01:45.517674] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56260) with pdu=0x2000190fda78 00:29:26.757 [2024-07-14 04:01:45.517974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:13192 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:26.757 [2024-07-14 04:01:45.518002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:26.757 [2024-07-14 04:01:45.531133] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56260) with pdu=0x2000190fda78 00:29:26.757 [2024-07-14 04:01:45.531480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:2299 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:26.757 [2024-07-14 04:01:45.531512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:26.757 [2024-07-14 04:01:45.544472] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56260) with pdu=0x2000190fda78 00:29:26.757 [2024-07-14 04:01:45.544777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:16315 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:26.757 [2024-07-14 04:01:45.544810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:26.757 [2024-07-14 04:01:45.557981] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56260) with pdu=0x2000190fda78 00:29:26.757 [2024-07-14 04:01:45.558297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:14602 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:26.758 [2024-07-14 04:01:45.558329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:26.758 [2024-07-14 04:01:45.571503] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56260) with pdu=0x2000190fda78 
00:29:26.758 [2024-07-14 04:01:45.571808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:21158 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:26.758 [2024-07-14 04:01:45.571839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:26.758 [2024-07-14 04:01:45.585021] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56260) with pdu=0x2000190fda78 00:29:26.758 [2024-07-14 04:01:45.585346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:10668 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:26.758 [2024-07-14 04:01:45.585377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:26.758 [2024-07-14 04:01:45.598552] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56260) with pdu=0x2000190fda78 00:29:26.758 [2024-07-14 04:01:45.598858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:4649 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:26.758 [2024-07-14 04:01:45.598913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:26.758 [2024-07-14 04:01:45.611909] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56260) with pdu=0x2000190fda78 00:29:26.758 [2024-07-14 04:01:45.612184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:17492 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:26.758 [2024-07-14 04:01:45.612215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:26.758 [2024-07-14 04:01:45.625421] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56260) with pdu=0x2000190fda78 00:29:26.758 [2024-07-14 04:01:45.625730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:20278 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:26.758 [2024-07-14 04:01:45.625760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:26.758 [2024-07-14 04:01:45.638853] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56260) with pdu=0x2000190fda78 00:29:26.758 [2024-07-14 04:01:45.639214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:6075 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:26.758 [2024-07-14 04:01:45.639245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:26.758 [2024-07-14 04:01:45.652289] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56260) with pdu=0x2000190fda78 00:29:26.758 [2024-07-14 04:01:45.652596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:6649 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:26.758 [2024-07-14 04:01:45.652626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:26.758 [2024-07-14 04:01:45.665789] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56260) with 
pdu=0x2000190fda78 00:29:26.758 [2024-07-14 04:01:45.666129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:6545 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:26.758 [2024-07-14 04:01:45.666157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:26.758 [2024-07-14 04:01:45.679244] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56260) with pdu=0x2000190fda78 00:29:26.758 [2024-07-14 04:01:45.679546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:15632 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:26.758 [2024-07-14 04:01:45.679576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:26.758 [2024-07-14 04:01:45.692817] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56260) with pdu=0x2000190fda78 00:29:26.758 [2024-07-14 04:01:45.693132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:11052 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:26.758 [2024-07-14 04:01:45.693177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:27.016 [2024-07-14 04:01:45.706926] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56260) with pdu=0x2000190fda78 00:29:27.016 [2024-07-14 04:01:45.707239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:953 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.016 [2024-07-14 04:01:45.707275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:27.016 [2024-07-14 04:01:45.720354] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56260) with pdu=0x2000190fda78 00:29:27.016 [2024-07-14 04:01:45.720665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:20797 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.016 [2024-07-14 04:01:45.720696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:27.016 [2024-07-14 04:01:45.733861] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56260) with pdu=0x2000190fda78 00:29:27.016 [2024-07-14 04:01:45.734223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:20224 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.016 [2024-07-14 04:01:45.734254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:27.016 [2024-07-14 04:01:45.747357] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56260) with pdu=0x2000190fda78 00:29:27.016 [2024-07-14 04:01:45.747666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:15991 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.016 [2024-07-14 04:01:45.747698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:27.016 [2024-07-14 04:01:45.760882] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xa56260) with pdu=0x2000190fda78 00:29:27.016 [2024-07-14 04:01:45.761220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:13806 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.016 [2024-07-14 04:01:45.761251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:27.016 [2024-07-14 04:01:45.774348] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56260) with pdu=0x2000190fda78 00:29:27.016 [2024-07-14 04:01:45.774658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:1350 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.016 [2024-07-14 04:01:45.774689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:27.016 [2024-07-14 04:01:45.787810] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56260) with pdu=0x2000190fda78 00:29:27.016 [2024-07-14 04:01:45.788121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:20669 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.016 [2024-07-14 04:01:45.788149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:27.016 [2024-07-14 04:01:45.801166] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56260) with pdu=0x2000190fda78 00:29:27.016 [2024-07-14 04:01:45.801444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:9343 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.016 [2024-07-14 04:01:45.801474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:27.016 [2024-07-14 04:01:45.814633] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56260) with pdu=0x2000190fda78 00:29:27.016 [2024-07-14 04:01:45.814964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:21254 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.016 [2024-07-14 04:01:45.814992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:27.016 [2024-07-14 04:01:45.828278] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56260) with pdu=0x2000190fda78 00:29:27.016 [2024-07-14 04:01:45.828596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:16139 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.016 [2024-07-14 04:01:45.828627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:27.016 [2024-07-14 04:01:45.841717] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56260) with pdu=0x2000190fda78 00:29:27.016 [2024-07-14 04:01:45.842045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:13554 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.016 [2024-07-14 04:01:45.842073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:27.016 [2024-07-14 04:01:45.855134] tcp.c:2034:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0xa56260) with pdu=0x2000190fda78 00:29:27.016 [2024-07-14 04:01:45.855456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:19099 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.016 [2024-07-14 04:01:45.855487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:27.016 [2024-07-14 04:01:45.868596] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56260) with pdu=0x2000190fda78 00:29:27.017 [2024-07-14 04:01:45.868926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:9062 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.017 [2024-07-14 04:01:45.868954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:27.017 [2024-07-14 04:01:45.882081] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56260) with pdu=0x2000190fda78 00:29:27.017 [2024-07-14 04:01:45.882414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:23896 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.017 [2024-07-14 04:01:45.882444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:27.017 [2024-07-14 04:01:45.895645] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56260) with pdu=0x2000190fda78 00:29:27.017 [2024-07-14 04:01:45.895973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:24376 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.017 [2024-07-14 04:01:45.896001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:27.017 [2024-07-14 04:01:45.909071] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56260) with pdu=0x2000190fda78 00:29:27.017 [2024-07-14 04:01:45.909369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:4752 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.017 [2024-07-14 04:01:45.909399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:27.017 [2024-07-14 04:01:45.922632] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56260) with pdu=0x2000190fda78 00:29:27.017 [2024-07-14 04:01:45.922930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:25438 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.017 [2024-07-14 04:01:45.922958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:27.017 [2024-07-14 04:01:45.936091] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56260) with pdu=0x2000190fda78 00:29:27.017 [2024-07-14 04:01:45.936431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:7786 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.017 [2024-07-14 04:01:45.936461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:27.017 [2024-07-14 04:01:45.949614] 
tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56260) with pdu=0x2000190fda78 00:29:27.017 [2024-07-14 04:01:45.949938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:3609 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.017 [2024-07-14 04:01:45.949966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:27.274 [2024-07-14 04:01:45.963683] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56260) with pdu=0x2000190fda78 00:29:27.274 [2024-07-14 04:01:45.964003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:18580 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.274 [2024-07-14 04:01:45.964031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:27.274 [2024-07-14 04:01:45.977127] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56260) with pdu=0x2000190fda78 00:29:27.274 [2024-07-14 04:01:45.977451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:18734 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.274 [2024-07-14 04:01:45.977481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:27.274 [2024-07-14 04:01:45.990604] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56260) with pdu=0x2000190fda78 00:29:27.274 [2024-07-14 04:01:45.990873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:18928 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.274 [2024-07-14 04:01:45.990918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:27.274 00:29:27.274 Latency(us) 00:29:27.274 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:27.274 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:27.274 nvme0n1 : 2.01 19073.57 74.51 0.00 0.00 6695.96 2864.17 14175.19 00:29:27.274 =================================================================================================================== 00:29:27.274 Total : 19073.57 74.51 0.00 0.00 6695.96 2864.17 14175.19 00:29:27.274 0 00:29:27.274 04:01:46 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:29:27.274 04:01:46 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:29:27.274 04:01:46 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:29:27.274 04:01:46 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:29:27.274 | .driver_specific 00:29:27.274 | .nvme_error 00:29:27.274 | .status_code 00:29:27.274 | .command_transient_transport_error' 00:29:27.532 04:01:46 -- host/digest.sh@71 -- # (( 150 > 0 )) 00:29:27.532 04:01:46 -- host/digest.sh@73 -- # killprocess 2503698 00:29:27.532 04:01:46 -- common/autotest_common.sh@926 -- # '[' -z 2503698 ']' 00:29:27.532 04:01:46 -- common/autotest_common.sh@930 -- # kill -0 2503698 00:29:27.532 04:01:46 -- common/autotest_common.sh@931 -- # uname 00:29:27.532 04:01:46 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:29:27.532 04:01:46 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 
2503698 00:29:27.532 04:01:46 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:29:27.532 04:01:46 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:29:27.532 04:01:46 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2503698' 00:29:27.532 killing process with pid 2503698 00:29:27.532 04:01:46 -- common/autotest_common.sh@945 -- # kill 2503698 00:29:27.532 Received shutdown signal, test time was about 2.000000 seconds 00:29:27.532 00:29:27.532 Latency(us) 00:29:27.532 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:27.532 =================================================================================================================== 00:29:27.532 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:27.532 04:01:46 -- common/autotest_common.sh@950 -- # wait 2503698 00:29:27.790 04:01:46 -- host/digest.sh@114 -- # run_bperf_err randwrite 131072 16 00:29:27.790 04:01:46 -- host/digest.sh@54 -- # local rw bs qd 00:29:27.790 04:01:46 -- host/digest.sh@56 -- # rw=randwrite 00:29:27.790 04:01:46 -- host/digest.sh@56 -- # bs=131072 00:29:27.790 04:01:46 -- host/digest.sh@56 -- # qd=16 00:29:27.790 04:01:46 -- host/digest.sh@58 -- # bperfpid=2504253 00:29:27.790 04:01:46 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:29:27.790 04:01:46 -- host/digest.sh@60 -- # waitforlisten 2504253 /var/tmp/bperf.sock 00:29:27.790 04:01:46 -- common/autotest_common.sh@819 -- # '[' -z 2504253 ']' 00:29:27.790 04:01:46 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:27.790 04:01:46 -- common/autotest_common.sh@824 -- # local max_retries=100 00:29:27.790 04:01:46 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:27.790 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:27.790 04:01:46 -- common/autotest_common.sh@828 -- # xtrace_disable 00:29:27.790 04:01:46 -- common/autotest_common.sh@10 -- # set +x 00:29:27.790 [2024-07-14 04:01:46.568208] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:29:27.790 [2024-07-14 04:01:46.568285] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2504253 ] 00:29:27.790 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:27.790 Zero copy mechanism will not be used. 
00:29:27.790 EAL: No free 2048 kB hugepages reported on node 1 00:29:27.790 [2024-07-14 04:01:46.630660] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:27.790 [2024-07-14 04:01:46.716636] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:28.723 04:01:47 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:29:28.723 04:01:47 -- common/autotest_common.sh@852 -- # return 0 00:29:28.723 04:01:47 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:28.723 04:01:47 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:28.979 04:01:47 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:29:28.979 04:01:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:28.979 04:01:47 -- common/autotest_common.sh@10 -- # set +x 00:29:28.979 04:01:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:28.979 04:01:47 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:28.979 04:01:47 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:29.236 nvme0n1 00:29:29.236 04:01:48 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:29:29.236 04:01:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:29.236 04:01:48 -- common/autotest_common.sh@10 -- # set +x 00:29:29.236 04:01:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:29.236 04:01:48 -- host/digest.sh@69 -- # bperf_py perform_tests 00:29:29.236 04:01:48 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:29.494 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:29.494 Zero copy mechanism will not be used. 00:29:29.494 Running I/O for 2 seconds... 
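Note: the script trace above captures the mechanism this digest test exercises; the commands below are taken verbatim from that trace. The controller is attached over TCP with --ddgst so data digests are enabled, accel_error_inject_error -o crc32c -t corrupt -i 32 corrupts every 32nd CRC32C calculation on the target side, and the resulting COMMAND TRANSIENT TRANSPORT ERROR completions are read back from bdev_get_iostat through the command_transient_transport_error counter (the same field the jq filter in the trace extracts). A minimal Python sketch of that sequence, assuming the rpc.py path and /var/tmp/bperf.sock socket seen in the log; the helper function and socket routing for the injection call are illustrative, not part of the test scripts:

    #!/usr/bin/env python3
    # Illustrative sketch only: replays the RPCs visible in the trace above
    # and reads back the transient transport error counter.
    import json
    import subprocess

    RPC = "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py"
    BPERF_SOCK = "/var/tmp/bperf.sock"

    def rpc(sock, *args):
        # When sock is None the call goes to the default SPDK target socket
        # (the rpc_cmd path in the trace); otherwise to the bdevperf socket.
        cmd = [RPC] + (["-s", sock] if sock else []) + list(args)
        return subprocess.run(cmd, check=True, capture_output=True, text=True).stdout

    # Collect NVMe error statistics and disable bdev-level retries.
    rpc(BPERF_SOCK, "bdev_nvme_set_options", "--nvme-error-stat",
        "--bdev-retry-count", "-1")
    # Attach the target with TCP data digest (--ddgst) enabled.
    rpc(BPERF_SOCK, "bdev_nvme_attach_controller", "--ddgst", "-t", "tcp",
        "-a", "10.0.0.2", "-s", "4420", "-f", "ipv4",
        "-n", "nqn.2016-06.io.spdk:cnode1", "-b", "nvme0")
    # Corrupt every 32nd CRC32C calculation so data digests fail on the wire
    # (sent to the NVMe-oF target's RPC socket in the actual test).
    rpc(None, "accel_error_inject_error", "-o", "crc32c", "-t", "corrupt", "-i", "32")

    # ... run the bdevperf workload (perform_tests) for 2 seconds ...

    # Same counter the jq filter in the trace pulls out of bdev_get_iostat.
    stat = json.loads(rpc(BPERF_SOCK, "bdev_get_iostat", "-b", "nvme0n1"))
    errors = stat["bdevs"][0]["driver_specific"]["nvme_error"]["status_code"][
        "command_transient_transport_error"]
    print(errors)

The test passes when this counter is greater than zero, i.e. the injected digest corruption actually surfaced as transient transport errors, which is what the (( 150 > 0 )) check in the trace asserts for the previous run.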
00:29:29.494 [2024-07-14 04:01:48.289480] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56390) with pdu=0x2000190fef90 00:29:29.494 [2024-07-14 04:01:48.289853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.494 [2024-07-14 04:01:48.289929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:29.494 [2024-07-14 04:01:48.309926] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56390) with pdu=0x2000190fef90 00:29:29.494 [2024-07-14 04:01:48.310673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.494 [2024-07-14 04:01:48.310706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:29.494 [2024-07-14 04:01:48.331364] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56390) with pdu=0x2000190fef90 00:29:29.494 [2024-07-14 04:01:48.331873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.494 [2024-07-14 04:01:48.331902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:29.494 [2024-07-14 04:01:48.351217] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56390) with pdu=0x2000190fef90 00:29:29.494 [2024-07-14 04:01:48.351752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.494 [2024-07-14 04:01:48.351781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.494 [2024-07-14 04:01:48.368471] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56390) with pdu=0x2000190fef90 00:29:29.494 [2024-07-14 04:01:48.368922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.494 [2024-07-14 04:01:48.368951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:29.494 [2024-07-14 04:01:48.388244] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56390) with pdu=0x2000190fef90 00:29:29.494 [2024-07-14 04:01:48.388851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.494 [2024-07-14 04:01:48.388888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:29.494 [2024-07-14 04:01:48.409824] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56390) with pdu=0x2000190fef90 00:29:29.494 [2024-07-14 04:01:48.410461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.494 [2024-07-14 04:01:48.410498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:29.494 [2024-07-14 04:01:48.430188] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56390) with pdu=0x2000190fef90 00:29:29.494 [2024-07-14 04:01:48.430728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.494 [2024-07-14 04:01:48.430758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.753 [2024-07-14 04:01:48.450828] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56390) with pdu=0x2000190fef90 00:29:29.753 [2024-07-14 04:01:48.451318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.753 [2024-07-14 04:01:48.451348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:29.753 [2024-07-14 04:01:48.469580] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56390) with pdu=0x2000190fef90 00:29:29.753 [2024-07-14 04:01:48.470276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.753 [2024-07-14 04:01:48.470305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:29.753 [2024-07-14 04:01:48.490133] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56390) with pdu=0x2000190fef90 00:29:29.753 [2024-07-14 04:01:48.490544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.753 [2024-07-14 04:01:48.490573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:29.753 [2024-07-14 04:01:48.511985] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56390) with pdu=0x2000190fef90 00:29:29.753 [2024-07-14 04:01:48.512577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.753 [2024-07-14 04:01:48.512605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.753 [2024-07-14 04:01:48.533272] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56390) with pdu=0x2000190fef90 00:29:29.753 [2024-07-14 04:01:48.533778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.753 [2024-07-14 04:01:48.533805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:29.753 [2024-07-14 04:01:48.552412] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56390) with pdu=0x2000190fef90 00:29:29.753 [2024-07-14 04:01:48.552730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.753 [2024-07-14 04:01:48.552760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:29.753 [2024-07-14 04:01:48.570561] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56390) with pdu=0x2000190fef90 00:29:29.753 [2024-07-14 04:01:48.571032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.753 [2024-07-14 04:01:48.571061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:29.753 [2024-07-14 04:01:48.590921] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56390) with pdu=0x2000190fef90 00:29:29.753 [2024-07-14 04:01:48.591579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.753 [2024-07-14 04:01:48.591607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.753 [2024-07-14 04:01:48.611923] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56390) with pdu=0x2000190fef90 00:29:29.753 [2024-07-14 04:01:48.612320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.753 [2024-07-14 04:01:48.612349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:29.753 [2024-07-14 04:01:48.630925] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56390) with pdu=0x2000190fef90 00:29:29.753 [2024-07-14 04:01:48.631564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.753 [2024-07-14 04:01:48.631591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:29.753 [2024-07-14 04:01:48.651555] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56390) with pdu=0x2000190fef90 00:29:29.753 [2024-07-14 04:01:48.652090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.753 [2024-07-14 04:01:48.652119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:29.753 [2024-07-14 04:01:48.671729] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56390) with pdu=0x2000190fef90 00:29:29.753 [2024-07-14 04:01:48.672327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.753 [2024-07-14 04:01:48.672355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.753 [2024-07-14 04:01:48.689892] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56390) with pdu=0x2000190fef90 00:29:29.753 [2024-07-14 04:01:48.690401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.753 [2024-07-14 04:01:48.690429] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:30.011 [2024-07-14 04:01:48.707462] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56390) with pdu=0x2000190fef90 00:29:30.011 [2024-07-14 04:01:48.707881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.011 [2024-07-14 04:01:48.707910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:30.011 [2024-07-14 04:01:48.726151] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56390) with pdu=0x2000190fef90 00:29:30.011 [2024-07-14 04:01:48.726756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.011 [2024-07-14 04:01:48.726785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:30.011 [2024-07-14 04:01:48.746696] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56390) with pdu=0x2000190fef90 00:29:30.011 [2024-07-14 04:01:48.747173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.011 [2024-07-14 04:01:48.747202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:30.011 [2024-07-14 04:01:48.768021] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56390) with pdu=0x2000190fef90 00:29:30.011 [2024-07-14 04:01:48.768553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.011 [2024-07-14 04:01:48.768581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:30.011 [2024-07-14 04:01:48.786991] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56390) with pdu=0x2000190fef90 00:29:30.011 [2024-07-14 04:01:48.787471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.011 [2024-07-14 04:01:48.787500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:30.011 [2024-07-14 04:01:48.808210] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56390) with pdu=0x2000190fef90 00:29:30.011 [2024-07-14 04:01:48.808811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.011 [2024-07-14 04:01:48.808861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:30.011 [2024-07-14 04:01:48.829571] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56390) with pdu=0x2000190fef90 00:29:30.011 [2024-07-14 04:01:48.830185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.011 
[2024-07-14 04:01:48.830214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:30.011 [2024-07-14 04:01:48.850095] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56390) with pdu=0x2000190fef90 00:29:30.011 [2024-07-14 04:01:48.850747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.011 [2024-07-14 04:01:48.850774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:30.011 [2024-07-14 04:01:48.871365] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56390) with pdu=0x2000190fef90 00:29:30.011 [2024-07-14 04:01:48.871874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.011 [2024-07-14 04:01:48.871902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:30.011 [2024-07-14 04:01:48.890029] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56390) with pdu=0x2000190fef90 00:29:30.011 [2024-07-14 04:01:48.890569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.011 [2024-07-14 04:01:48.890596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:30.011 [2024-07-14 04:01:48.910298] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56390) with pdu=0x2000190fef90 00:29:30.011 [2024-07-14 04:01:48.910886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.011 [2024-07-14 04:01:48.910914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:30.011 [2024-07-14 04:01:48.931327] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56390) with pdu=0x2000190fef90 00:29:30.011 [2024-07-14 04:01:48.931939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.011 [2024-07-14 04:01:48.931967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:30.269 [2024-07-14 04:01:48.951354] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56390) with pdu=0x2000190fef90 00:29:30.269 [2024-07-14 04:01:48.951805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.269 [2024-07-14 04:01:48.951832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:30.269 [2024-07-14 04:01:48.969397] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56390) with pdu=0x2000190fef90 00:29:30.269 [2024-07-14 04:01:48.969873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:29:30.269 [2024-07-14 04:01:48.969901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:30.269 [2024-07-14 04:01:48.990812] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56390) with pdu=0x2000190fef90 00:29:30.269 [2024-07-14 04:01:48.991363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.269 [2024-07-14 04:01:48.991390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:30.269 [2024-07-14 04:01:49.011271] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56390) with pdu=0x2000190fef90 00:29:30.269 [2024-07-14 04:01:49.011790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.269 [2024-07-14 04:01:49.011818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:30.269 [2024-07-14 04:01:49.031277] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56390) with pdu=0x2000190fef90 00:29:30.269 [2024-07-14 04:01:49.031738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.269 [2024-07-14 04:01:49.031766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:30.269 [2024-07-14 04:01:49.049566] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56390) with pdu=0x2000190fef90 00:29:30.269 [2024-07-14 04:01:49.049999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.269 [2024-07-14 04:01:49.050027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:30.269 [2024-07-14 04:01:49.069189] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56390) with pdu=0x2000190fef90 00:29:30.269 [2024-07-14 04:01:49.069876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.269 [2024-07-14 04:01:49.069905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:30.269 [2024-07-14 04:01:49.089975] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56390) with pdu=0x2000190fef90 00:29:30.269 [2024-07-14 04:01:49.090278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.269 [2024-07-14 04:01:49.090306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:30.269 [2024-07-14 04:01:49.110159] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56390) with pdu=0x2000190fef90 00:29:30.269 [2024-07-14 04:01:49.110699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.269 [2024-07-14 04:01:49.110728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:30.269 [2024-07-14 04:01:49.131413] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56390) with pdu=0x2000190fef90 00:29:30.269 [2024-07-14 04:01:49.131915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.269 [2024-07-14 04:01:49.131943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:30.269 [2024-07-14 04:01:49.152321] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56390) with pdu=0x2000190fef90 00:29:30.269 [2024-07-14 04:01:49.152787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.269 [2024-07-14 04:01:49.152829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:30.269 [2024-07-14 04:01:49.173397] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56390) with pdu=0x2000190fef90 00:29:30.269 [2024-07-14 04:01:49.173875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.269 [2024-07-14 04:01:49.173904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:30.269 [2024-07-14 04:01:49.193582] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56390) with pdu=0x2000190fef90 00:29:30.269 [2024-07-14 04:01:49.194114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.269 [2024-07-14 04:01:49.194157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:30.527 [2024-07-14 04:01:49.213662] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56390) with pdu=0x2000190fef90 00:29:30.527 [2024-07-14 04:01:49.214003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.527 [2024-07-14 04:01:49.214031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:30.527 [2024-07-14 04:01:49.231403] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56390) with pdu=0x2000190fef90 00:29:30.527 [2024-07-14 04:01:49.232008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.527 [2024-07-14 04:01:49.232036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:30.527 [2024-07-14 04:01:49.252728] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56390) with pdu=0x2000190fef90 00:29:30.527 [2024-07-14 04:01:49.253314] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.527 [2024-07-14 04:01:49.253343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:30.527 [2024-07-14 04:01:49.273026] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56390) with pdu=0x2000190fef90 00:29:30.527 [2024-07-14 04:01:49.273399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.527 [2024-07-14 04:01:49.273427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:30.527 [2024-07-14 04:01:49.293622] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56390) with pdu=0x2000190fef90 00:29:30.527 [2024-07-14 04:01:49.294168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.527 [2024-07-14 04:01:49.294196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:30.527 [2024-07-14 04:01:49.314675] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56390) with pdu=0x2000190fef90 00:29:30.527 [2024-07-14 04:01:49.315149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.527 [2024-07-14 04:01:49.315177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:30.527 [2024-07-14 04:01:49.334733] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56390) with pdu=0x2000190fef90 00:29:30.527 [2024-07-14 04:01:49.335280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.527 [2024-07-14 04:01:49.335312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:30.527 [2024-07-14 04:01:49.356338] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56390) with pdu=0x2000190fef90 00:29:30.527 [2024-07-14 04:01:49.356965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.527 [2024-07-14 04:01:49.356993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:30.527 [2024-07-14 04:01:49.377251] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56390) with pdu=0x2000190fef90 00:29:30.527 [2024-07-14 04:01:49.377837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.527 [2024-07-14 04:01:49.377870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:30.527 [2024-07-14 04:01:49.397900] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56390) with pdu=0x2000190fef90 00:29:30.527 [2024-07-14 04:01:49.398350] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.527 [2024-07-14 04:01:49.398378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:30.527 [2024-07-14 04:01:49.418432] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56390) with pdu=0x2000190fef90 00:29:30.527 [2024-07-14 04:01:49.419067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.527 [2024-07-14 04:01:49.419096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:30.527 [2024-07-14 04:01:49.439068] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56390) with pdu=0x2000190fef90 00:29:30.527 [2024-07-14 04:01:49.439318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.527 [2024-07-14 04:01:49.439347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:30.527 [2024-07-14 04:01:49.458428] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56390) with pdu=0x2000190fef90 00:29:30.527 [2024-07-14 04:01:49.458874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.527 [2024-07-14 04:01:49.458903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:30.785 [2024-07-14 04:01:49.477748] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56390) with pdu=0x2000190fef90 00:29:30.785 [2024-07-14 04:01:49.478278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.785 [2024-07-14 04:01:49.478306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:30.785 [2024-07-14 04:01:49.497430] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56390) with pdu=0x2000190fef90 00:29:30.785 [2024-07-14 04:01:49.497905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.785 [2024-07-14 04:01:49.497934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:30.785 [2024-07-14 04:01:49.516901] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56390) with pdu=0x2000190fef90 00:29:30.785 [2024-07-14 04:01:49.517348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.785 [2024-07-14 04:01:49.517377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:30.785 [2024-07-14 04:01:49.536273] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56390) with pdu=0x2000190fef90 00:29:30.785 
[2024-07-14 04:01:49.536909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.785 [2024-07-14 04:01:49.536937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:30.785 [2024-07-14 04:01:49.556792] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56390) with pdu=0x2000190fef90 00:29:30.785 [2024-07-14 04:01:49.557327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.785 [2024-07-14 04:01:49.557356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:30.785 [2024-07-14 04:01:49.577614] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56390) with pdu=0x2000190fef90 00:29:30.785 [2024-07-14 04:01:49.578004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.785 [2024-07-14 04:01:49.578034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:30.785 [2024-07-14 04:01:49.597348] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56390) with pdu=0x2000190fef90 00:29:30.785 [2024-07-14 04:01:49.597915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.785 [2024-07-14 04:01:49.597945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:30.785 [2024-07-14 04:01:49.617307] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56390) with pdu=0x2000190fef90 00:29:30.785 [2024-07-14 04:01:49.617899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.785 [2024-07-14 04:01:49.617927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:30.785 [2024-07-14 04:01:49.637983] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56390) with pdu=0x2000190fef90 00:29:30.785 [2024-07-14 04:01:49.638631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.785 [2024-07-14 04:01:49.638658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:30.785 [2024-07-14 04:01:49.657381] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56390) with pdu=0x2000190fef90 00:29:30.785 [2024-07-14 04:01:49.657840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.785 [2024-07-14 04:01:49.657890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:30.785 [2024-07-14 04:01:49.675253] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56390) 
with pdu=0x2000190fef90 00:29:30.786 [2024-07-14 04:01:49.675726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.786 [2024-07-14 04:01:49.675754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:30.786 [2024-07-14 04:01:49.696112] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56390) with pdu=0x2000190fef90 00:29:30.786 [2024-07-14 04:01:49.696783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.786 [2024-07-14 04:01:49.696812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:30.786 [2024-07-14 04:01:49.716046] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56390) with pdu=0x2000190fef90 00:29:30.786 [2024-07-14 04:01:49.716519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.786 [2024-07-14 04:01:49.716548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:31.045 [2024-07-14 04:01:49.736375] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56390) with pdu=0x2000190fef90 00:29:31.045 [2024-07-14 04:01:49.737042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.045 [2024-07-14 04:01:49.737071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:31.045 [2024-07-14 04:01:49.756077] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56390) with pdu=0x2000190fef90 00:29:31.046 [2024-07-14 04:01:49.756686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.046 [2024-07-14 04:01:49.756714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:31.046 [2024-07-14 04:01:49.774056] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56390) with pdu=0x2000190fef90 00:29:31.046 [2024-07-14 04:01:49.774814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.046 [2024-07-14 04:01:49.774841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:31.046 [2024-07-14 04:01:49.793906] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56390) with pdu=0x2000190fef90 00:29:31.046 [2024-07-14 04:01:49.794389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.046 [2024-07-14 04:01:49.794417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:31.046 [2024-07-14 04:01:49.815485] tcp.c:2034:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0xa56390) with pdu=0x2000190fef90 00:29:31.046 [2024-07-14 04:01:49.816050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.046 [2024-07-14 04:01:49.816078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:31.046 [2024-07-14 04:01:49.836898] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56390) with pdu=0x2000190fef90 00:29:31.046 [2024-07-14 04:01:49.837286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.046 [2024-07-14 04:01:49.837314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:31.046 [2024-07-14 04:01:49.852736] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56390) with pdu=0x2000190fef90 00:29:31.046 [2024-07-14 04:01:49.853262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.046 [2024-07-14 04:01:49.853296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:31.046 [2024-07-14 04:01:49.870918] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56390) with pdu=0x2000190fef90 00:29:31.046 [2024-07-14 04:01:49.871361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.046 [2024-07-14 04:01:49.871389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:31.046 [2024-07-14 04:01:49.890344] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56390) with pdu=0x2000190fef90 00:29:31.046 [2024-07-14 04:01:49.890916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.046 [2024-07-14 04:01:49.890944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:31.046 [2024-07-14 04:01:49.911375] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56390) with pdu=0x2000190fef90 00:29:31.046 [2024-07-14 04:01:49.911830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.046 [2024-07-14 04:01:49.911859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:31.046 [2024-07-14 04:01:49.932044] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56390) with pdu=0x2000190fef90 00:29:31.046 [2024-07-14 04:01:49.932737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.046 [2024-07-14 04:01:49.932765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:31.046 [2024-07-14 04:01:49.952449] 
tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56390) with pdu=0x2000190fef90 00:29:31.046 [2024-07-14 04:01:49.952963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.046 [2024-07-14 04:01:49.952990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:31.046 [2024-07-14 04:01:49.972182] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56390) with pdu=0x2000190fef90 00:29:31.046 [2024-07-14 04:01:49.972732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.046 [2024-07-14 04:01:49.972759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:31.327 [2024-07-14 04:01:49.990298] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56390) with pdu=0x2000190fef90 00:29:31.327 [2024-07-14 04:01:49.990542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.327 [2024-07-14 04:01:49.990569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:31.327 [2024-07-14 04:01:50.007342] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56390) with pdu=0x2000190fef90 00:29:31.327 [2024-07-14 04:01:50.007946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.327 [2024-07-14 04:01:50.007981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:31.327 [2024-07-14 04:01:50.026282] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56390) with pdu=0x2000190fef90 00:29:31.327 [2024-07-14 04:01:50.026757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.327 [2024-07-14 04:01:50.026798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:31.327 [2024-07-14 04:01:50.042711] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56390) with pdu=0x2000190fef90 00:29:31.327 [2024-07-14 04:01:50.043315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.327 [2024-07-14 04:01:50.043352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:31.327 [2024-07-14 04:01:50.061032] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56390) with pdu=0x2000190fef90 00:29:31.327 [2024-07-14 04:01:50.061604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.327 [2024-07-14 04:01:50.061638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
00:29:31.327 [2024-07-14 04:01:50.080563] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56390) with pdu=0x2000190fef90 00:29:31.327 [2024-07-14 04:01:50.081078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.327 [2024-07-14 04:01:50.081107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:31.327 [2024-07-14 04:01:50.101599] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56390) with pdu=0x2000190fef90 00:29:31.327 [2024-07-14 04:01:50.102027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.327 [2024-07-14 04:01:50.102055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:31.327 [2024-07-14 04:01:50.121652] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56390) with pdu=0x2000190fef90 00:29:31.327 [2024-07-14 04:01:50.122053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.327 [2024-07-14 04:01:50.122083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:31.327 [2024-07-14 04:01:50.142303] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56390) with pdu=0x2000190fef90 00:29:31.327 [2024-07-14 04:01:50.142626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.327 [2024-07-14 04:01:50.142654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:31.327 [2024-07-14 04:01:50.163010] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56390) with pdu=0x2000190fef90 00:29:31.327 [2024-07-14 04:01:50.163534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.327 [2024-07-14 04:01:50.163563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:31.327 [2024-07-14 04:01:50.184079] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56390) with pdu=0x2000190fef90 00:29:31.327 [2024-07-14 04:01:50.184466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.328 [2024-07-14 04:01:50.184502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:31.328 [2024-07-14 04:01:50.204997] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56390) with pdu=0x2000190fef90 00:29:31.328 [2024-07-14 04:01:50.205580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.328 [2024-07-14 04:01:50.205608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:31.328 [2024-07-14 04:01:50.223521] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56390) with pdu=0x2000190fef90 00:29:31.328 [2024-07-14 04:01:50.224192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.328 [2024-07-14 04:01:50.224220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:31.328 [2024-07-14 04:01:50.243945] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56390) with pdu=0x2000190fef90 00:29:31.328 [2024-07-14 04:01:50.244411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.328 [2024-07-14 04:01:50.244439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:31.590 [2024-07-14 04:01:50.263381] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa56390) with pdu=0x2000190fef90 00:29:31.590 [2024-07-14 04:01:50.263903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.590 [2024-07-14 04:01:50.263934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:31.590 00:29:31.590 Latency(us) 00:29:31.590 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:31.590 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:29:31.590 nvme0n1 : 2.01 1552.41 194.05 0.00 0.00 10278.85 7378.87 22427.88 00:29:31.590 =================================================================================================================== 00:29:31.590 Total : 1552.41 194.05 0.00 0.00 10278.85 7378.87 22427.88 00:29:31.590 0 00:29:31.590 04:01:50 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:29:31.590 04:01:50 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:29:31.590 04:01:50 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:29:31.590 04:01:50 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:29:31.590 | .driver_specific 00:29:31.590 | .nvme_error 00:29:31.590 | .status_code 00:29:31.590 | .command_transient_transport_error' 00:29:31.848 04:01:50 -- host/digest.sh@71 -- # (( 100 > 0 )) 00:29:31.848 04:01:50 -- host/digest.sh@73 -- # killprocess 2504253 00:29:31.848 04:01:50 -- common/autotest_common.sh@926 -- # '[' -z 2504253 ']' 00:29:31.848 04:01:50 -- common/autotest_common.sh@930 -- # kill -0 2504253 00:29:31.848 04:01:50 -- common/autotest_common.sh@931 -- # uname 00:29:31.848 04:01:50 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:29:31.848 04:01:50 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2504253 00:29:31.848 04:01:50 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:29:31.848 04:01:50 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:29:31.848 04:01:50 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2504253' 00:29:31.848 killing process with pid 2504253 00:29:31.848 04:01:50 -- common/autotest_common.sh@945 -- # kill 2504253 00:29:31.848 Received 
shutdown signal, test time was about 2.000000 seconds 00:29:31.848 00:29:31.848 Latency(us) 00:29:31.848 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:31.848 =================================================================================================================== 00:29:31.848 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:31.848 04:01:50 -- common/autotest_common.sh@950 -- # wait 2504253 00:29:31.848 04:01:50 -- host/digest.sh@115 -- # killprocess 2502567 00:29:31.848 04:01:50 -- common/autotest_common.sh@926 -- # '[' -z 2502567 ']' 00:29:31.848 04:01:50 -- common/autotest_common.sh@930 -- # kill -0 2502567 00:29:31.848 04:01:50 -- common/autotest_common.sh@931 -- # uname 00:29:31.848 04:01:50 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:29:31.848 04:01:50 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2502567 00:29:32.105 04:01:50 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:29:32.105 04:01:50 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:29:32.105 04:01:50 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2502567' 00:29:32.105 killing process with pid 2502567 00:29:32.105 04:01:50 -- common/autotest_common.sh@945 -- # kill 2502567 00:29:32.105 04:01:50 -- common/autotest_common.sh@950 -- # wait 2502567 00:29:32.105 00:29:32.105 real 0m17.815s 00:29:32.105 user 0m36.295s 00:29:32.105 sys 0m4.145s 00:29:32.105 04:01:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:32.105 04:01:51 -- common/autotest_common.sh@10 -- # set +x 00:29:32.105 ************************************ 00:29:32.105 END TEST nvmf_digest_error 00:29:32.105 ************************************ 00:29:32.363 04:01:51 -- host/digest.sh@138 -- # trap - SIGINT SIGTERM EXIT 00:29:32.363 04:01:51 -- host/digest.sh@139 -- # nvmftestfini 00:29:32.363 04:01:51 -- nvmf/common.sh@476 -- # nvmfcleanup 00:29:32.363 04:01:51 -- nvmf/common.sh@116 -- # sync 00:29:32.363 04:01:51 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:29:32.363 04:01:51 -- nvmf/common.sh@119 -- # set +e 00:29:32.363 04:01:51 -- nvmf/common.sh@120 -- # for i in {1..20} 00:29:32.363 04:01:51 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:29:32.363 rmmod nvme_tcp 00:29:32.363 rmmod nvme_fabrics 00:29:32.363 rmmod nvme_keyring 00:29:32.363 04:01:51 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:29:32.363 04:01:51 -- nvmf/common.sh@123 -- # set -e 00:29:32.363 04:01:51 -- nvmf/common.sh@124 -- # return 0 00:29:32.363 04:01:51 -- nvmf/common.sh@477 -- # '[' -n 2502567 ']' 00:29:32.363 04:01:51 -- nvmf/common.sh@478 -- # killprocess 2502567 00:29:32.363 04:01:51 -- common/autotest_common.sh@926 -- # '[' -z 2502567 ']' 00:29:32.363 04:01:51 -- common/autotest_common.sh@930 -- # kill -0 2502567 00:29:32.363 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (2502567) - No such process 00:29:32.363 04:01:51 -- common/autotest_common.sh@953 -- # echo 'Process with pid 2502567 is not found' 00:29:32.363 Process with pid 2502567 is not found 00:29:32.363 04:01:51 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:29:32.363 04:01:51 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:29:32.363 04:01:51 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:29:32.363 04:01:51 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:32.363 04:01:51 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:29:32.363 04:01:51 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:29:32.363 04:01:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:32.363 04:01:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:34.267 04:01:53 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:29:34.267 00:29:34.267 real 0m37.131s 00:29:34.267 user 1m7.258s 00:29:34.267 sys 0m9.422s 00:29:34.267 04:01:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:34.267 04:01:53 -- common/autotest_common.sh@10 -- # set +x 00:29:34.267 ************************************ 00:29:34.267 END TEST nvmf_digest 00:29:34.267 ************************************ 00:29:34.267 04:01:53 -- nvmf/nvmf.sh@110 -- # [[ 0 -eq 1 ]] 00:29:34.267 04:01:53 -- nvmf/nvmf.sh@115 -- # [[ 0 -eq 1 ]] 00:29:34.267 04:01:53 -- nvmf/nvmf.sh@120 -- # [[ phy == phy ]] 00:29:34.267 04:01:53 -- nvmf/nvmf.sh@122 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:29:34.267 04:01:53 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:29:34.267 04:01:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:34.267 04:01:53 -- common/autotest_common.sh@10 -- # set +x 00:29:34.267 ************************************ 00:29:34.267 START TEST nvmf_bdevperf 00:29:34.267 ************************************ 00:29:34.267 04:01:53 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:29:34.525 * Looking for test storage... 00:29:34.525 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:34.525 04:01:53 -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:34.525 04:01:53 -- nvmf/common.sh@7 -- # uname -s 00:29:34.525 04:01:53 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:34.525 04:01:53 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:34.525 04:01:53 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:34.525 04:01:53 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:34.525 04:01:53 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:34.525 04:01:53 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:34.525 04:01:53 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:34.525 04:01:53 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:34.525 04:01:53 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:34.525 04:01:53 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:34.525 04:01:53 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:34.525 04:01:53 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:34.525 04:01:53 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:34.525 04:01:53 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:34.525 04:01:53 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:34.525 04:01:53 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:34.525 04:01:53 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:34.525 04:01:53 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:34.525 04:01:53 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:34.525 04:01:53 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:34.525 04:01:53 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:34.525 04:01:53 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:34.525 04:01:53 -- paths/export.sh@5 -- # export PATH 00:29:34.525 04:01:53 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:34.525 04:01:53 -- nvmf/common.sh@46 -- # : 0 00:29:34.525 04:01:53 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:29:34.525 04:01:53 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:29:34.525 04:01:53 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:29:34.525 04:01:53 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:34.525 04:01:53 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:34.525 04:01:53 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:29:34.525 04:01:53 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:29:34.525 04:01:53 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:29:34.525 04:01:53 -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:34.525 04:01:53 -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:34.525 04:01:53 -- host/bdevperf.sh@24 -- # nvmftestinit 00:29:34.525 04:01:53 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:29:34.525 04:01:53 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:34.525 04:01:53 -- nvmf/common.sh@436 -- # prepare_net_devs 00:29:34.525 04:01:53 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:29:34.525 04:01:53 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:29:34.525 04:01:53 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:29:34.525 04:01:53 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:34.525 04:01:53 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:34.525 04:01:53 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:29:34.525 04:01:53 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:29:34.525 04:01:53 -- nvmf/common.sh@284 -- # xtrace_disable 00:29:34.525 04:01:53 -- common/autotest_common.sh@10 -- # set +x 00:29:36.426 04:01:55 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:29:36.426 04:01:55 -- nvmf/common.sh@290 -- # pci_devs=() 00:29:36.426 04:01:55 -- nvmf/common.sh@290 -- # local -a pci_devs 00:29:36.426 04:01:55 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:29:36.426 04:01:55 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:29:36.426 04:01:55 -- nvmf/common.sh@292 -- # pci_drivers=() 00:29:36.426 04:01:55 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:29:36.426 04:01:55 -- nvmf/common.sh@294 -- # net_devs=() 00:29:36.426 04:01:55 -- nvmf/common.sh@294 -- # local -ga net_devs 00:29:36.426 04:01:55 -- nvmf/common.sh@295 -- # e810=() 00:29:36.426 04:01:55 -- nvmf/common.sh@295 -- # local -ga e810 00:29:36.426 04:01:55 -- nvmf/common.sh@296 -- # x722=() 00:29:36.426 04:01:55 -- nvmf/common.sh@296 -- # local -ga x722 00:29:36.426 04:01:55 -- nvmf/common.sh@297 -- # mlx=() 00:29:36.426 04:01:55 -- nvmf/common.sh@297 -- # local -ga mlx 00:29:36.426 04:01:55 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:36.426 04:01:55 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:36.426 04:01:55 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:36.426 04:01:55 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:36.426 04:01:55 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:36.426 04:01:55 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:36.426 04:01:55 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:36.426 04:01:55 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:36.426 04:01:55 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:36.426 04:01:55 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:36.426 04:01:55 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:36.426 04:01:55 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:29:36.426 04:01:55 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:29:36.426 04:01:55 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:29:36.426 04:01:55 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:29:36.426 04:01:55 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:29:36.426 04:01:55 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:29:36.426 04:01:55 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:29:36.426 04:01:55 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:36.426 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:36.426 04:01:55 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:29:36.426 04:01:55 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:29:36.426 04:01:55 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:36.426 04:01:55 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:36.426 04:01:55 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:29:36.426 04:01:55 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:29:36.426 04:01:55 -- 
nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:36.426 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:36.426 04:01:55 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:29:36.426 04:01:55 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:29:36.426 04:01:55 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:36.426 04:01:55 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:36.426 04:01:55 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:29:36.426 04:01:55 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:29:36.426 04:01:55 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:29:36.426 04:01:55 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:29:36.426 04:01:55 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:29:36.427 04:01:55 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:36.427 04:01:55 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:29:36.427 04:01:55 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:36.427 04:01:55 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:36.427 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:36.427 04:01:55 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:29:36.427 04:01:55 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:29:36.427 04:01:55 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:36.427 04:01:55 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:29:36.427 04:01:55 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:36.427 04:01:55 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:36.427 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:36.427 04:01:55 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:29:36.427 04:01:55 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:29:36.427 04:01:55 -- nvmf/common.sh@402 -- # is_hw=yes 00:29:36.427 04:01:55 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:29:36.427 04:01:55 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:29:36.427 04:01:55 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:29:36.427 04:01:55 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:36.427 04:01:55 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:36.427 04:01:55 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:36.427 04:01:55 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:29:36.427 04:01:55 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:36.427 04:01:55 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:36.427 04:01:55 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:29:36.427 04:01:55 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:36.427 04:01:55 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:36.427 04:01:55 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:29:36.427 04:01:55 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:29:36.427 04:01:55 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:29:36.427 04:01:55 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:36.427 04:01:55 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:36.427 04:01:55 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:36.427 04:01:55 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:29:36.427 04:01:55 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 
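The nvmf_tcp_init trace above splits the two detected ice ports across network namespaces: cvl_0_0 is moved into cvl_0_0_ns_spdk as the target-side interface (10.0.0.2) while cvl_0_1 stays in the root namespace as the initiator side (10.0.0.1). A condensed sketch of that sequence, assuming the cvl_0_0/cvl_0_1 names this particular run detected (physical NIC names differ per host):

  TGT_IF=cvl_0_0            # target-side port (per-run name, assumed here)
  INI_IF=cvl_0_1            # initiator-side port
  NS=cvl_0_0_ns_spdk

  ip -4 addr flush "$TGT_IF"
  ip -4 addr flush "$INI_IF"
  ip netns add "$NS"
  ip link set "$TGT_IF" netns "$NS"

  ip addr add 10.0.0.1/24 dev "$INI_IF"                       # initiator address
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"   # target address

  ip link set "$INI_IF" up
  ip netns exec "$NS" ip link set "$TGT_IF" up
  ip netns exec "$NS" ip link set lo up

The suite then opens TCP port 4420 on the initiator interface with an iptables ACCEPT rule and ping-checks both directions, as the next lines show.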
00:29:36.427 04:01:55 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:36.427 04:01:55 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:36.685 04:01:55 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:29:36.685 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:36.685 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.198 ms 00:29:36.685 00:29:36.685 --- 10.0.0.2 ping statistics --- 00:29:36.685 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:36.685 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:29:36.685 04:01:55 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:36.685 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:36.685 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.198 ms 00:29:36.685 00:29:36.685 --- 10.0.0.1 ping statistics --- 00:29:36.685 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:36.685 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:29:36.685 04:01:55 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:36.685 04:01:55 -- nvmf/common.sh@410 -- # return 0 00:29:36.685 04:01:55 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:29:36.685 04:01:55 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:36.685 04:01:55 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:29:36.685 04:01:55 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:29:36.685 04:01:55 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:36.686 04:01:55 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:29:36.686 04:01:55 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:29:36.686 04:01:55 -- host/bdevperf.sh@25 -- # tgt_init 00:29:36.686 04:01:55 -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:29:36.686 04:01:55 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:29:36.686 04:01:55 -- common/autotest_common.sh@712 -- # xtrace_disable 00:29:36.686 04:01:55 -- common/autotest_common.sh@10 -- # set +x 00:29:36.686 04:01:55 -- nvmf/common.sh@469 -- # nvmfpid=2506755 00:29:36.686 04:01:55 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:29:36.686 04:01:55 -- nvmf/common.sh@470 -- # waitforlisten 2506755 00:29:36.686 04:01:55 -- common/autotest_common.sh@819 -- # '[' -z 2506755 ']' 00:29:36.686 04:01:55 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:36.686 04:01:55 -- common/autotest_common.sh@824 -- # local max_retries=100 00:29:36.686 04:01:55 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:36.686 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:36.686 04:01:55 -- common/autotest_common.sh@828 -- # xtrace_disable 00:29:36.686 04:01:55 -- common/autotest_common.sh@10 -- # set +x 00:29:36.686 [2024-07-14 04:01:55.444446] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
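With the data path verified by the pings, nvmfappstart launches the target application inside the namespace with core mask 0xE. A minimal sketch of that launch plus the wait-for-RPC step; the polling loop below is only a stand-in for the suite's waitforlisten helper, and rpc_get_methods is used purely as a liveness probe:

  # cores 1-3 (-m 0xE), all tracepoint groups enabled (-e 0xFFFF), shm id 0 (-i 0)
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
  nvmfpid=$!

  # stand-in for waitforlisten: poll the default RPC socket until the target answers
  until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
  done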
00:29:36.686 [2024-07-14 04:01:55.444516] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:36.686 EAL: No free 2048 kB hugepages reported on node 1 00:29:36.686 [2024-07-14 04:01:55.508027] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:36.686 [2024-07-14 04:01:55.596277] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:29:36.686 [2024-07-14 04:01:55.596441] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:36.686 [2024-07-14 04:01:55.596461] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:36.686 [2024-07-14 04:01:55.596476] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:36.686 [2024-07-14 04:01:55.596593] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:29:36.686 [2024-07-14 04:01:55.596661] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:29:36.686 [2024-07-14 04:01:55.596664] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:37.619 04:01:56 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:29:37.619 04:01:56 -- common/autotest_common.sh@852 -- # return 0 00:29:37.619 04:01:56 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:29:37.619 04:01:56 -- common/autotest_common.sh@718 -- # xtrace_disable 00:29:37.619 04:01:56 -- common/autotest_common.sh@10 -- # set +x 00:29:37.619 04:01:56 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:37.619 04:01:56 -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:37.619 04:01:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:37.619 04:01:56 -- common/autotest_common.sh@10 -- # set +x 00:29:37.619 [2024-07-14 04:01:56.438628] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:37.619 04:01:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:37.619 04:01:56 -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:37.619 04:01:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:37.619 04:01:56 -- common/autotest_common.sh@10 -- # set +x 00:29:37.619 Malloc0 00:29:37.619 04:01:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:37.619 04:01:56 -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:37.619 04:01:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:37.619 04:01:56 -- common/autotest_common.sh@10 -- # set +x 00:29:37.619 04:01:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:37.619 04:01:56 -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:37.619 04:01:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:37.619 04:01:56 -- common/autotest_common.sh@10 -- # set +x 00:29:37.619 04:01:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:37.619 04:01:56 -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:37.619 04:01:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:37.619 04:01:56 -- common/autotest_common.sh@10 -- # set +x 00:29:37.619 [2024-07-14 04:01:56.496370] tcp.c: 951:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:37.619 04:01:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:37.619 04:01:56 -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:29:37.619 04:01:56 -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:29:37.619 04:01:56 -- nvmf/common.sh@520 -- # config=() 00:29:37.619 04:01:56 -- nvmf/common.sh@520 -- # local subsystem config 00:29:37.619 04:01:56 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:29:37.619 04:01:56 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:29:37.619 { 00:29:37.619 "params": { 00:29:37.619 "name": "Nvme$subsystem", 00:29:37.619 "trtype": "$TEST_TRANSPORT", 00:29:37.619 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:37.619 "adrfam": "ipv4", 00:29:37.619 "trsvcid": "$NVMF_PORT", 00:29:37.619 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:37.619 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:37.619 "hdgst": ${hdgst:-false}, 00:29:37.619 "ddgst": ${ddgst:-false} 00:29:37.619 }, 00:29:37.619 "method": "bdev_nvme_attach_controller" 00:29:37.619 } 00:29:37.619 EOF 00:29:37.619 )") 00:29:37.619 04:01:56 -- nvmf/common.sh@542 -- # cat 00:29:37.619 04:01:56 -- nvmf/common.sh@544 -- # jq . 00:29:37.619 04:01:56 -- nvmf/common.sh@545 -- # IFS=, 00:29:37.619 04:01:56 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:29:37.619 "params": { 00:29:37.619 "name": "Nvme1", 00:29:37.619 "trtype": "tcp", 00:29:37.619 "traddr": "10.0.0.2", 00:29:37.619 "adrfam": "ipv4", 00:29:37.619 "trsvcid": "4420", 00:29:37.619 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:37.620 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:37.620 "hdgst": false, 00:29:37.620 "ddgst": false 00:29:37.620 }, 00:29:37.620 "method": "bdev_nvme_attach_controller" 00:29:37.620 }' 00:29:37.620 [2024-07-14 04:01:56.542260] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:29:37.620 [2024-07-14 04:01:56.542340] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2506910 ] 00:29:37.878 EAL: No free 2048 kB hugepages reported on node 1 00:29:37.878 [2024-07-14 04:01:56.604728] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:37.878 [2024-07-14 04:01:56.689511] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:38.136 Running I/O for 1 seconds... 
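The trace above provisions the target over its RPC socket and then points bdevperf at it as an NVMe/TCP host. Spelled out with scripts/rpc.py (rpc_cmd in the trace is a thin wrapper around it), the same steps look roughly like this; the attach-controller JSON handed to bdevperf on /dev/fd/62 is the one printed above, with header and data digests both disabled:

  RPC=./scripts/rpc.py
  $RPC nvmf_create_transport -t tcp -o -u 8192
  $RPC bdev_malloc_create 64 512 -b Malloc0                    # 64 MiB malloc bdev, 512 B blocks
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # host side: 4 KiB verify workload at queue depth 128 for 1 second,
  # reading the generated bdev_nvme_attach_controller config from /dev/fd/62
  ./build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1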
00:29:39.509 00:29:39.509 Latency(us) 00:29:39.509 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:39.509 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:29:39.509 Verification LBA range: start 0x0 length 0x4000 00:29:39.509 Nvme1n1 : 1.01 12939.00 50.54 0.00 0.00 9844.32 1310.72 17087.91 00:29:39.509 =================================================================================================================== 00:29:39.509 Total : 12939.00 50.54 0.00 0.00 9844.32 1310.72 17087.91 00:29:39.509 04:01:58 -- host/bdevperf.sh@30 -- # bdevperfpid=2507060 00:29:39.509 04:01:58 -- host/bdevperf.sh@32 -- # sleep 3 00:29:39.509 04:01:58 -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:29:39.509 04:01:58 -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:29:39.509 04:01:58 -- nvmf/common.sh@520 -- # config=() 00:29:39.509 04:01:58 -- nvmf/common.sh@520 -- # local subsystem config 00:29:39.509 04:01:58 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:29:39.509 04:01:58 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:29:39.509 { 00:29:39.509 "params": { 00:29:39.509 "name": "Nvme$subsystem", 00:29:39.509 "trtype": "$TEST_TRANSPORT", 00:29:39.509 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:39.509 "adrfam": "ipv4", 00:29:39.509 "trsvcid": "$NVMF_PORT", 00:29:39.509 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:39.509 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:39.509 "hdgst": ${hdgst:-false}, 00:29:39.509 "ddgst": ${ddgst:-false} 00:29:39.509 }, 00:29:39.509 "method": "bdev_nvme_attach_controller" 00:29:39.509 } 00:29:39.509 EOF 00:29:39.509 )") 00:29:39.509 04:01:58 -- nvmf/common.sh@542 -- # cat 00:29:39.509 04:01:58 -- nvmf/common.sh@544 -- # jq . 00:29:39.509 04:01:58 -- nvmf/common.sh@545 -- # IFS=, 00:29:39.509 04:01:58 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:29:39.509 "params": { 00:29:39.509 "name": "Nvme1", 00:29:39.509 "trtype": "tcp", 00:29:39.509 "traddr": "10.0.0.2", 00:29:39.509 "adrfam": "ipv4", 00:29:39.509 "trsvcid": "4420", 00:29:39.509 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:39.509 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:39.509 "hdgst": false, 00:29:39.509 "ddgst": false 00:29:39.509 }, 00:29:39.509 "method": "bdev_nvme_attach_controller" 00:29:39.509 }' 00:29:39.509 [2024-07-14 04:01:58.299110] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:29:39.509 [2024-07-14 04:01:58.299212] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2507060 ] 00:29:39.509 EAL: No free 2048 kB hugepages reported on node 1 00:29:39.509 [2024-07-14 04:01:58.360250] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:39.510 [2024-07-14 04:01:58.444808] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:39.766 Running I/O for 15 seconds... 
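The second bdevperf instance is launched in the background for a 15-second run (its pid is recorded as bdevperfpid=2507060), and after a short sleep the script SIGKILLs the nvmf target (the kill -9 2506755 just below), which is what produces the abort and reconnect storm that follows. A rough sketch of that failover step, with hypothetical variable names standing in for the pids the harness tracks:

  build/examples/bdevperf --json <(gen_nvmf_target_json) -q 128 -o 4096 -w verify -t 15 -f &
  bdevperf_pid=$!           # hypothetical name; logged above as bdevperfpid=2507060
  sleep 3
  kill -9 "$nvmf_tgt_pid"   # hypothetical name; pid 2506755 in this run
  sleep 3                   # bdevperf keeps retrying I/O against the now-dead target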
00:29:43.050 04:02:01 -- host/bdevperf.sh@33 -- # kill -9 2506755 00:29:43.050 04:02:01 -- host/bdevperf.sh@35 -- # sleep 3 00:29:43.050 [2024-07-14 04:02:01.269938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:7520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.050 [2024-07-14 04:02:01.269990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.050 [2024-07-14 04:02:01.270027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:7536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.050 [2024-07-14 04:02:01.270044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.050 [2024-07-14 04:02:01.270062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:6976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.050 [2024-07-14 04:02:01.270079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.050 [2024-07-14 04:02:01.270096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:6984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.050 [2024-07-14 04:02:01.270111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.050 [2024-07-14 04:02:01.270128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.050 [2024-07-14 04:02:01.270143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.050 [2024-07-14 04:02:01.270176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:7032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.050 [2024-07-14 04:02:01.270202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.050 [2024-07-14 04:02:01.270233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.050 [2024-07-14 04:02:01.270250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.050 [2024-07-14 04:02:01.270291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:7080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.050 [2024-07-14 04:02:01.270309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.050 [2024-07-14 04:02:01.270329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:7128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.050 [2024-07-14 04:02:01.270346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.050 [2024-07-14 04:02:01.270372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:7136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.050 [2024-07-14 04:02:01.270390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.050 [2024-07-14 04:02:01.270409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:7544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.050 [2024-07-14 04:02:01.270425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.050 [2024-07-14 04:02:01.270443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:7560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.050 [2024-07-14 04:02:01.270458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.050 [2024-07-14 04:02:01.270476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:7568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.051 [2024-07-14 04:02:01.270492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.051 [2024-07-14 04:02:01.270510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:7584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.051 [2024-07-14 04:02:01.270525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.051 [2024-07-14 04:02:01.270543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:7592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.051 [2024-07-14 04:02:01.270559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.051 [2024-07-14 04:02:01.270576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:7616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.051 [2024-07-14 04:02:01.270593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.051 [2024-07-14 04:02:01.270610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:7648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.051 [2024-07-14 04:02:01.270625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.051 [2024-07-14 04:02:01.270643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:7656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.051 [2024-07-14 04:02:01.270659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.051 [2024-07-14 04:02:01.270676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:7680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.051 [2024-07-14 04:02:01.270692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.051 [2024-07-14 04:02:01.270710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:7688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.051 [2024-07-14 04:02:01.270726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:29:43.051 [2024-07-14 04:02:01.270743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:7704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.051 [2024-07-14 04:02:01.270759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.051 [2024-07-14 04:02:01.270776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:7720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.051 [2024-07-14 04:02:01.270796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.051 [2024-07-14 04:02:01.270814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:7728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.051 [2024-07-14 04:02:01.270829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.051 [2024-07-14 04:02:01.270846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:7736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.051 [2024-07-14 04:02:01.270862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.051 [2024-07-14 04:02:01.270888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:7144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.051 [2024-07-14 04:02:01.270921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.051 [2024-07-14 04:02:01.270938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:7168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.051 [2024-07-14 04:02:01.270952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.051 [2024-07-14 04:02:01.270967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:7192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.051 [2024-07-14 04:02:01.270982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.051 [2024-07-14 04:02:01.270997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:7208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.051 [2024-07-14 04:02:01.271012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.051 [2024-07-14 04:02:01.271027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.051 [2024-07-14 04:02:01.271042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.051 [2024-07-14 04:02:01.271058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:7248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.051 [2024-07-14 04:02:01.271072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.051 [2024-07-14 
04:02:01.271088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:7256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.051 [2024-07-14 04:02:01.271102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.051 [2024-07-14 04:02:01.271118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:7264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.051 [2024-07-14 04:02:01.271132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.051 [2024-07-14 04:02:01.271171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:7752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.051 [2024-07-14 04:02:01.271188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.051 [2024-07-14 04:02:01.271206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:7760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.051 [2024-07-14 04:02:01.271222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.051 [2024-07-14 04:02:01.271240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:7768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.051 [2024-07-14 04:02:01.271260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.051 [2024-07-14 04:02:01.271278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:7776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.051 [2024-07-14 04:02:01.271294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.051 [2024-07-14 04:02:01.271311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:7784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.051 [2024-07-14 04:02:01.271327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.051 [2024-07-14 04:02:01.271345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:7792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.051 [2024-07-14 04:02:01.271360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.051 [2024-07-14 04:02:01.271377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:7800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.051 [2024-07-14 04:02:01.271393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.051 [2024-07-14 04:02:01.271410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:7808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.051 [2024-07-14 04:02:01.271426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.051 [2024-07-14 04:02:01.271443] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:7816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.051 [2024-07-14 04:02:01.271459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.051 [2024-07-14 04:02:01.271476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:7824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.051 [2024-07-14 04:02:01.271493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.051 [2024-07-14 04:02:01.271510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:7832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.051 [2024-07-14 04:02:01.271526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.051 [2024-07-14 04:02:01.271543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:7840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.051 [2024-07-14 04:02:01.271559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.051 [2024-07-14 04:02:01.271576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:7848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.051 [2024-07-14 04:02:01.271592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.051 [2024-07-14 04:02:01.271609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:7856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.051 [2024-07-14 04:02:01.271624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.051 [2024-07-14 04:02:01.271642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:7864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.051 [2024-07-14 04:02:01.271658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.051 [2024-07-14 04:02:01.271679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:7872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.051 [2024-07-14 04:02:01.271694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.051 [2024-07-14 04:02:01.271712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:7880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.051 [2024-07-14 04:02:01.271728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.051 [2024-07-14 04:02:01.271746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:7888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.051 [2024-07-14 04:02:01.271762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.051 [2024-07-14 04:02:01.271780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:90 nsid:1 lba:7896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.051 [2024-07-14 04:02:01.271795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.051 [2024-07-14 04:02:01.271812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:7904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.051 [2024-07-14 04:02:01.271827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.051 [2024-07-14 04:02:01.271846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:7288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.051 [2024-07-14 04:02:01.271861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.051 [2024-07-14 04:02:01.271887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:7328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.051 [2024-07-14 04:02:01.271903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.052 [2024-07-14 04:02:01.271935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:7392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.052 [2024-07-14 04:02:01.271949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.052 [2024-07-14 04:02:01.271965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:7400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.052 [2024-07-14 04:02:01.271979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.052 [2024-07-14 04:02:01.271994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:7408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.052 [2024-07-14 04:02:01.272008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.052 [2024-07-14 04:02:01.272024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:7416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.052 [2024-07-14 04:02:01.272038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.052 [2024-07-14 04:02:01.272054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:7440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.052 [2024-07-14 04:02:01.272068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.052 [2024-07-14 04:02:01.272083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:7448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.052 [2024-07-14 04:02:01.272102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.052 [2024-07-14 04:02:01.272117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:7912 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:29:43.052 [2024-07-14 04:02:01.272132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.052 [2024-07-14 04:02:01.272175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:7920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.052 [2024-07-14 04:02:01.272191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.052 [2024-07-14 04:02:01.272208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:7928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.052 [2024-07-14 04:02:01.272224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.052 [2024-07-14 04:02:01.272242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:7936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.052 [2024-07-14 04:02:01.272258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.052 [2024-07-14 04:02:01.272275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:7944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.052 [2024-07-14 04:02:01.272290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.052 [2024-07-14 04:02:01.272308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:7952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.052 [2024-07-14 04:02:01.272324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.052 [2024-07-14 04:02:01.272342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:7960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.052 [2024-07-14 04:02:01.272357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.052 [2024-07-14 04:02:01.272374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:7968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.052 [2024-07-14 04:02:01.272391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.052 [2024-07-14 04:02:01.272408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:7976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.052 [2024-07-14 04:02:01.272424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.052 [2024-07-14 04:02:01.272441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:7984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.052 [2024-07-14 04:02:01.272457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.052 [2024-07-14 04:02:01.272475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:7992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.052 [2024-07-14 
04:02:01.272491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.052 [2024-07-14 04:02:01.272508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:8000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.052 [2024-07-14 04:02:01.272524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.052 [2024-07-14 04:02:01.272541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:8008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.052 [2024-07-14 04:02:01.272560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.052 [2024-07-14 04:02:01.272578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:8016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.052 [2024-07-14 04:02:01.272594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.052 [2024-07-14 04:02:01.272611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:8024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.052 [2024-07-14 04:02:01.272628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.052 [2024-07-14 04:02:01.272644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:8032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.052 [2024-07-14 04:02:01.272661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.052 [2024-07-14 04:02:01.272678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:8040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.052 [2024-07-14 04:02:01.272694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.052 [2024-07-14 04:02:01.272711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:8048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.052 [2024-07-14 04:02:01.272726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.052 [2024-07-14 04:02:01.272744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:8056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.052 [2024-07-14 04:02:01.272760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.052 [2024-07-14 04:02:01.272777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:8064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.052 [2024-07-14 04:02:01.272792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.052 [2024-07-14 04:02:01.272810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:8072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.052 [2024-07-14 04:02:01.272826] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.052 [2024-07-14 04:02:01.272844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.052 [2024-07-14 04:02:01.272860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.052 [2024-07-14 04:02:01.272885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:8088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.052 [2024-07-14 04:02:01.272902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.052 [2024-07-14 04:02:01.272945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:8096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.052 [2024-07-14 04:02:01.272960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.052 [2024-07-14 04:02:01.272976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:8104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.052 [2024-07-14 04:02:01.272990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.052 [2024-07-14 04:02:01.273009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:8112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.052 [2024-07-14 04:02:01.273023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.052 [2024-07-14 04:02:01.273039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:7496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.052 [2024-07-14 04:02:01.273054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.052 [2024-07-14 04:02:01.273069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.052 [2024-07-14 04:02:01.273083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.052 [2024-07-14 04:02:01.273099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:7512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.052 [2024-07-14 04:02:01.273113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.052 [2024-07-14 04:02:01.273128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:7528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.052 [2024-07-14 04:02:01.273142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.052 [2024-07-14 04:02:01.273175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:7552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.052 [2024-07-14 04:02:01.273191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.052 [2024-07-14 04:02:01.273208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.052 [2024-07-14 04:02:01.273226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.052 [2024-07-14 04:02:01.273243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:7600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.052 [2024-07-14 04:02:01.273259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.052 [2024-07-14 04:02:01.273276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:7608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.052 [2024-07-14 04:02:01.273291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.052 [2024-07-14 04:02:01.273308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:8120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.052 [2024-07-14 04:02:01.273324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.052 [2024-07-14 04:02:01.273341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:8128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.052 [2024-07-14 04:02:01.273357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.052 [2024-07-14 04:02:01.273374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.053 [2024-07-14 04:02:01.273389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.053 [2024-07-14 04:02:01.273407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:8144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.053 [2024-07-14 04:02:01.273426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.053 [2024-07-14 04:02:01.273444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:8152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.053 [2024-07-14 04:02:01.273460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.053 [2024-07-14 04:02:01.273478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:8160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.053 [2024-07-14 04:02:01.273493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.053 [2024-07-14 04:02:01.273510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:8168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.053 [2024-07-14 04:02:01.273526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.053 [2024-07-14 
04:02:01.273544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:8176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.053 [2024-07-14 04:02:01.273559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.053 [2024-07-14 04:02:01.273577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:8184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.053 [2024-07-14 04:02:01.273592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.053 [2024-07-14 04:02:01.273609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:8192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.053 [2024-07-14 04:02:01.273625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.053 [2024-07-14 04:02:01.273642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:8200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.053 [2024-07-14 04:02:01.273658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.053 [2024-07-14 04:02:01.273675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.053 [2024-07-14 04:02:01.273691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.053 [2024-07-14 04:02:01.273708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:8216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.053 [2024-07-14 04:02:01.273724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.053 [2024-07-14 04:02:01.273741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:8224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.053 [2024-07-14 04:02:01.273757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.053 [2024-07-14 04:02:01.273774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:8232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.053 [2024-07-14 04:02:01.273790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.053 [2024-07-14 04:02:01.273807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:8240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.053 [2024-07-14 04:02:01.273823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.053 [2024-07-14 04:02:01.273848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:8248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.053 [2024-07-14 04:02:01.273870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.053 [2024-07-14 04:02:01.273890] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:8256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.053 [2024-07-14 04:02:01.273930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.053 [2024-07-14 04:02:01.273946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:8264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.053 [2024-07-14 04:02:01.273961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.053 [2024-07-14 04:02:01.273977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.053 [2024-07-14 04:02:01.273992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.053 [2024-07-14 04:02:01.274007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:8280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.053 [2024-07-14 04:02:01.274021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.053 [2024-07-14 04:02:01.274036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:8288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.053 [2024-07-14 04:02:01.274050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.053 [2024-07-14 04:02:01.274065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:8296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.053 [2024-07-14 04:02:01.274080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.053 [2024-07-14 04:02:01.274095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:8304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.053 [2024-07-14 04:02:01.274109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.053 [2024-07-14 04:02:01.274125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:8312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.053 [2024-07-14 04:02:01.274139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.053 [2024-07-14 04:02:01.274169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:8320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:43.053 [2024-07-14 04:02:01.274190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.053 [2024-07-14 04:02:01.274208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:7624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.053 [2024-07-14 04:02:01.274223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.053 [2024-07-14 04:02:01.274246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:109 nsid:1 lba:7632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.053 [2024-07-14 04:02:01.274261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.053 [2024-07-14 04:02:01.274279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:7640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.053 [2024-07-14 04:02:01.274294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.053 [2024-07-14 04:02:01.274316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:7664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.053 [2024-07-14 04:02:01.274332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.053 [2024-07-14 04:02:01.274349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:7672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.053 [2024-07-14 04:02:01.274364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.053 [2024-07-14 04:02:01.274382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:7696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.053 [2024-07-14 04:02:01.274397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.053 [2024-07-14 04:02:01.274414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:7712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.053 [2024-07-14 04:02:01.274430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.053 [2024-07-14 04:02:01.274447] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a7590 is same with the state(5) to be set 00:29:43.053 [2024-07-14 04:02:01.274467] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:43.053 [2024-07-14 04:02:01.274479] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:43.053 [2024-07-14 04:02:01.274492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7744 len:8 PRP1 0x0 PRP2 0x0 00:29:43.053 [2024-07-14 04:02:01.274514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.053 [2024-07-14 04:02:01.274587] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x17a7590 was disconnected and freed. reset controller. 
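Each pair of lines in the dump above is one queued command being completed manually as the I/O qpair is torn down: the status tuple (00/08) is status code type 0x0 (generic command status) with status code 0x08, Command Aborted due to SQ Deletion, which is the expected completion for in-flight verify I/O once the target has been SIGKILLed. A quick way to tally how many READs versus WRITEs were aborted, assuming this console output has been saved to a file (the filename here is hypothetical):

  grep -o 'print_command: \*NOTICE\*: [A-Z]*' bdevperf_failover.log | sort | uniq -c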
00:29:43.053 [2024-07-14 04:02:01.274664] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:43.053 [2024-07-14 04:02:01.274689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.053 [2024-07-14 04:02:01.274707] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:43.053 [2024-07-14 04:02:01.274722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.053 [2024-07-14 04:02:01.274737] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:43.053 [2024-07-14 04:02:01.274752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.053 [2024-07-14 04:02:01.274782] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:43.053 [2024-07-14 04:02:01.274796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.053 [2024-07-14 04:02:01.274809] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:43.053 [2024-07-14 04:02:01.277429] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.053 [2024-07-14 04:02:01.277473] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:43.053 [2024-07-14 04:02:01.278209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.053 [2024-07-14 04:02:01.278466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.053 [2024-07-14 04:02:01.278501] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:43.053 [2024-07-14 04:02:01.278521] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:43.053 [2024-07-14 04:02:01.278708] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:43.053 [2024-07-14 04:02:01.278957] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.053 [2024-07-14 04:02:01.278981] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.053 [2024-07-14 04:02:01.278998] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.053 [2024-07-14 04:02:01.281309] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:43.053 [2024-07-14 04:02:01.290069] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.054 [2024-07-14 04:02:01.290451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.054 [2024-07-14 04:02:01.290852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.054 [2024-07-14 04:02:01.290940] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:43.054 [2024-07-14 04:02:01.290957] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:43.054 [2024-07-14 04:02:01.291106] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:43.054 [2024-07-14 04:02:01.291326] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.054 [2024-07-14 04:02:01.291351] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.054 [2024-07-14 04:02:01.291368] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.054 [2024-07-14 04:02:01.293700] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:43.054 [2024-07-14 04:02:01.302526] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.054 [2024-07-14 04:02:01.302920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.054 [2024-07-14 04:02:01.303174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.054 [2024-07-14 04:02:01.303204] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:43.054 [2024-07-14 04:02:01.303223] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:43.054 [2024-07-14 04:02:01.303389] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:43.054 [2024-07-14 04:02:01.303577] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.054 [2024-07-14 04:02:01.303602] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.054 [2024-07-14 04:02:01.303619] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.054 [2024-07-14 04:02:01.306097] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:43.054 [2024-07-14 04:02:01.315085] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.054 [2024-07-14 04:02:01.315493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.054 [2024-07-14 04:02:01.315916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.054 [2024-07-14 04:02:01.315947] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:43.054 [2024-07-14 04:02:01.315971] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:43.054 [2024-07-14 04:02:01.316155] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:43.054 [2024-07-14 04:02:01.316325] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.054 [2024-07-14 04:02:01.316349] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.054 [2024-07-14 04:02:01.316366] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.054 [2024-07-14 04:02:01.318679] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:43.054 [2024-07-14 04:02:01.327606] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.054 [2024-07-14 04:02:01.328055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.054 [2024-07-14 04:02:01.328231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.054 [2024-07-14 04:02:01.328261] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:43.054 [2024-07-14 04:02:01.328279] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:43.054 [2024-07-14 04:02:01.328463] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:43.054 [2024-07-14 04:02:01.328650] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.054 [2024-07-14 04:02:01.328676] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.054 [2024-07-14 04:02:01.328692] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.054 [2024-07-14 04:02:01.330776] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:43.054 [2024-07-14 04:02:01.340179] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.054 [2024-07-14 04:02:01.340564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.054 [2024-07-14 04:02:01.340848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.054 [2024-07-14 04:02:01.340889] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:43.054 [2024-07-14 04:02:01.340909] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:43.054 [2024-07-14 04:02:01.341076] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:43.054 [2024-07-14 04:02:01.341282] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.054 [2024-07-14 04:02:01.341307] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.054 [2024-07-14 04:02:01.341324] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.054 [2024-07-14 04:02:01.343528] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:43.054 [2024-07-14 04:02:01.352739] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.054 [2024-07-14 04:02:01.353116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.054 [2024-07-14 04:02:01.353297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.054 [2024-07-14 04:02:01.353328] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:43.054 [2024-07-14 04:02:01.353347] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:43.054 [2024-07-14 04:02:01.353536] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:43.054 [2024-07-14 04:02:01.353707] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.054 [2024-07-14 04:02:01.353732] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.054 [2024-07-14 04:02:01.353749] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.054 [2024-07-14 04:02:01.356254] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:43.054 [2024-07-14 04:02:01.365254] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.054 [2024-07-14 04:02:01.365657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.054 [2024-07-14 04:02:01.365850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.054 [2024-07-14 04:02:01.365892] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:43.054 [2024-07-14 04:02:01.365913] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:43.054 [2024-07-14 04:02:01.366061] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:43.054 [2024-07-14 04:02:01.366249] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.054 [2024-07-14 04:02:01.366275] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.054 [2024-07-14 04:02:01.366291] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.054 [2024-07-14 04:02:01.368694] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:43.054 [2024-07-14 04:02:01.377850] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.054 [2024-07-14 04:02:01.378210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.054 [2024-07-14 04:02:01.378439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.054 [2024-07-14 04:02:01.378469] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:43.054 [2024-07-14 04:02:01.378488] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:43.054 [2024-07-14 04:02:01.378654] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:43.054 [2024-07-14 04:02:01.378860] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.054 [2024-07-14 04:02:01.378896] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.054 [2024-07-14 04:02:01.378914] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.054 [2024-07-14 04:02:01.381058] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:43.054 [2024-07-14 04:02:01.390307] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.054 [2024-07-14 04:02:01.390716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.055 [2024-07-14 04:02:01.390927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.055 [2024-07-14 04:02:01.390957] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:43.055 [2024-07-14 04:02:01.390975] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:43.055 [2024-07-14 04:02:01.391124] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:43.055 [2024-07-14 04:02:01.391245] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.055 [2024-07-14 04:02:01.391271] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.055 [2024-07-14 04:02:01.391287] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.055 [2024-07-14 04:02:01.393528] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:43.055 [2024-07-14 04:02:01.402883] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.055 [2024-07-14 04:02:01.403299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.055 [2024-07-14 04:02:01.403540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.055 [2024-07-14 04:02:01.403583] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:43.055 [2024-07-14 04:02:01.403600] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:43.055 [2024-07-14 04:02:01.403766] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:43.055 [2024-07-14 04:02:01.403985] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.055 [2024-07-14 04:02:01.404012] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.055 [2024-07-14 04:02:01.404029] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.055 [2024-07-14 04:02:01.406306] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:43.055 [2024-07-14 04:02:01.415494] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.055 [2024-07-14 04:02:01.415840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.055 [2024-07-14 04:02:01.416080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.055 [2024-07-14 04:02:01.416110] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:43.055 [2024-07-14 04:02:01.416129] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:43.055 [2024-07-14 04:02:01.416297] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:43.055 [2024-07-14 04:02:01.416466] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.055 [2024-07-14 04:02:01.416492] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.055 [2024-07-14 04:02:01.416508] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.055 [2024-07-14 04:02:01.418958] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:43.055 [2024-07-14 04:02:01.427998] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.055 [2024-07-14 04:02:01.428363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.055 [2024-07-14 04:02:01.428667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.055 [2024-07-14 04:02:01.428697] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:43.055 [2024-07-14 04:02:01.428715] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:43.055 [2024-07-14 04:02:01.428863] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:43.055 [2024-07-14 04:02:01.429100] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.055 [2024-07-14 04:02:01.429131] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.055 [2024-07-14 04:02:01.429149] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.055 [2024-07-14 04:02:01.431461] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:43.055 [2024-07-14 04:02:01.440610] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.055 [2024-07-14 04:02:01.441022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.055 [2024-07-14 04:02:01.441265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.055 [2024-07-14 04:02:01.441295] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:43.055 [2024-07-14 04:02:01.441314] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:43.055 [2024-07-14 04:02:01.441516] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:43.055 [2024-07-14 04:02:01.441678] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.055 [2024-07-14 04:02:01.441704] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.055 [2024-07-14 04:02:01.441720] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.055 [2024-07-14 04:02:01.444349] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:43.055 [2024-07-14 04:02:01.453295] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.055 [2024-07-14 04:02:01.453849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.055 [2024-07-14 04:02:01.454115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.055 [2024-07-14 04:02:01.454146] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:43.055 [2024-07-14 04:02:01.454164] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:43.055 [2024-07-14 04:02:01.454313] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:43.055 [2024-07-14 04:02:01.454500] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.055 [2024-07-14 04:02:01.454526] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.055 [2024-07-14 04:02:01.454542] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.055 [2024-07-14 04:02:01.457040] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:43.055 [2024-07-14 04:02:01.465899] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.055 [2024-07-14 04:02:01.466334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.055 [2024-07-14 04:02:01.466572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.055 [2024-07-14 04:02:01.466638] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:43.055 [2024-07-14 04:02:01.466657] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:43.055 [2024-07-14 04:02:01.466889] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:43.055 [2024-07-14 04:02:01.467116] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.055 [2024-07-14 04:02:01.467142] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.055 [2024-07-14 04:02:01.467164] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.055 [2024-07-14 04:02:01.469730] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:43.055 [2024-07-14 04:02:01.478459] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.055 [2024-07-14 04:02:01.478873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.055 [2024-07-14 04:02:01.479052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.055 [2024-07-14 04:02:01.479081] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:43.055 [2024-07-14 04:02:01.479099] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:43.055 [2024-07-14 04:02:01.479248] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:43.055 [2024-07-14 04:02:01.479416] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.055 [2024-07-14 04:02:01.479442] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.055 [2024-07-14 04:02:01.479459] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.055 [2024-07-14 04:02:01.481719] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:43.055 [2024-07-14 04:02:01.491219] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.055 [2024-07-14 04:02:01.491621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.055 [2024-07-14 04:02:01.491825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.055 [2024-07-14 04:02:01.491854] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:43.055 [2024-07-14 04:02:01.491884] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:43.055 [2024-07-14 04:02:01.492017] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:43.055 [2024-07-14 04:02:01.492204] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.055 [2024-07-14 04:02:01.492230] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.055 [2024-07-14 04:02:01.492246] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.055 [2024-07-14 04:02:01.494647] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:43.055 [2024-07-14 04:02:01.503982] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.055 [2024-07-14 04:02:01.504367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.055 [2024-07-14 04:02:01.504597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.055 [2024-07-14 04:02:01.504627] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:43.055 [2024-07-14 04:02:01.504646] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:43.055 [2024-07-14 04:02:01.504848] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:43.055 [2024-07-14 04:02:01.505030] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.055 [2024-07-14 04:02:01.505056] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.056 [2024-07-14 04:02:01.505072] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.056 [2024-07-14 04:02:01.507659] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:43.056 [2024-07-14 04:02:01.516713] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.056 [2024-07-14 04:02:01.517135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.056 [2024-07-14 04:02:01.517502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.056 [2024-07-14 04:02:01.517553] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:43.056 [2024-07-14 04:02:01.517571] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:43.056 [2024-07-14 04:02:01.517719] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:43.056 [2024-07-14 04:02:01.517919] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.056 [2024-07-14 04:02:01.517943] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.056 [2024-07-14 04:02:01.517959] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.056 [2024-07-14 04:02:01.520163] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:43.056 [2024-07-14 04:02:01.529289] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.056 [2024-07-14 04:02:01.529678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.056 [2024-07-14 04:02:01.529882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.056 [2024-07-14 04:02:01.529913] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:43.056 [2024-07-14 04:02:01.529932] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:43.056 [2024-07-14 04:02:01.530117] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:43.056 [2024-07-14 04:02:01.530286] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.056 [2024-07-14 04:02:01.530311] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.056 [2024-07-14 04:02:01.530328] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.056 [2024-07-14 04:02:01.532569] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:43.056 [2024-07-14 04:02:01.541832] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.056 [2024-07-14 04:02:01.542266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.056 [2024-07-14 04:02:01.542572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.056 [2024-07-14 04:02:01.542636] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:43.056 [2024-07-14 04:02:01.542654] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:43.056 [2024-07-14 04:02:01.542821] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:43.056 [2024-07-14 04:02:01.543019] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.056 [2024-07-14 04:02:01.543046] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.056 [2024-07-14 04:02:01.543063] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.056 [2024-07-14 04:02:01.545245] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:43.056 [2024-07-14 04:02:01.554282] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.056 [2024-07-14 04:02:01.554779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.056 [2024-07-14 04:02:01.555031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.056 [2024-07-14 04:02:01.555063] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:43.056 [2024-07-14 04:02:01.555082] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:43.056 [2024-07-14 04:02:01.555248] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:43.056 [2024-07-14 04:02:01.555453] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.056 [2024-07-14 04:02:01.555478] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.056 [2024-07-14 04:02:01.555495] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.056 [2024-07-14 04:02:01.557829] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:43.056 [2024-07-14 04:02:01.566876] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.056 [2024-07-14 04:02:01.567239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.056 [2024-07-14 04:02:01.567501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.056 [2024-07-14 04:02:01.567531] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:43.056 [2024-07-14 04:02:01.567549] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:43.056 [2024-07-14 04:02:01.567716] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:43.056 [2024-07-14 04:02:01.567896] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.056 [2024-07-14 04:02:01.567922] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.056 [2024-07-14 04:02:01.567939] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.056 [2024-07-14 04:02:01.570319] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:43.056 [2024-07-14 04:02:01.579348] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.056 [2024-07-14 04:02:01.579758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.056 [2024-07-14 04:02:01.580019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.056 [2024-07-14 04:02:01.580052] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:43.056 [2024-07-14 04:02:01.580071] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:43.056 [2024-07-14 04:02:01.580274] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:43.056 [2024-07-14 04:02:01.580461] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.056 [2024-07-14 04:02:01.580486] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.056 [2024-07-14 04:02:01.580503] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.056 [2024-07-14 04:02:01.582778] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:43.056 [2024-07-14 04:02:01.591790] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.056 [2024-07-14 04:02:01.592201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.056 [2024-07-14 04:02:01.592503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.056 [2024-07-14 04:02:01.592533] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:43.056 [2024-07-14 04:02:01.592561] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:43.056 [2024-07-14 04:02:01.592727] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:43.056 [2024-07-14 04:02:01.592891] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.056 [2024-07-14 04:02:01.592928] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.056 [2024-07-14 04:02:01.592945] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.056 [2024-07-14 04:02:01.595227] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:43.056 [2024-07-14 04:02:01.604350] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.056 [2024-07-14 04:02:01.604818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.056 [2024-07-14 04:02:01.605050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.056 [2024-07-14 04:02:01.605079] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:43.056 [2024-07-14 04:02:01.605111] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:43.056 [2024-07-14 04:02:01.605268] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:43.056 [2024-07-14 04:02:01.605414] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.056 [2024-07-14 04:02:01.605438] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.056 [2024-07-14 04:02:01.605454] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.056 [2024-07-14 04:02:01.607892] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:43.056 [2024-07-14 04:02:01.616957] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.056 [2024-07-14 04:02:01.617412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.056 [2024-07-14 04:02:01.617677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.056 [2024-07-14 04:02:01.617703] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:43.056 [2024-07-14 04:02:01.617719] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:43.056 [2024-07-14 04:02:01.617918] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:43.056 [2024-07-14 04:02:01.618105] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.056 [2024-07-14 04:02:01.618141] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.056 [2024-07-14 04:02:01.618158] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.056 [2024-07-14 04:02:01.620645] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:43.056 [2024-07-14 04:02:01.629512] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.056 [2024-07-14 04:02:01.629864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.056 [2024-07-14 04:02:01.630093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.056 [2024-07-14 04:02:01.630130] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:43.056 [2024-07-14 04:02:01.630149] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:43.056 [2024-07-14 04:02:01.630351] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:43.056 [2024-07-14 04:02:01.630577] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.056 [2024-07-14 04:02:01.630603] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.057 [2024-07-14 04:02:01.630620] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.057 [2024-07-14 04:02:01.632993] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:43.057 [2024-07-14 04:02:01.642316] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.057 [2024-07-14 04:02:01.642800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.057 [2024-07-14 04:02:01.643066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.057 [2024-07-14 04:02:01.643098] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:43.057 [2024-07-14 04:02:01.643117] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:43.057 [2024-07-14 04:02:01.643283] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:43.057 [2024-07-14 04:02:01.643434] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.057 [2024-07-14 04:02:01.643460] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.057 [2024-07-14 04:02:01.643477] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.057 [2024-07-14 04:02:01.645754] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:43.057 [2024-07-14 04:02:01.654917] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.057 [2024-07-14 04:02:01.655302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.057 [2024-07-14 04:02:01.655598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.057 [2024-07-14 04:02:01.655653] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:43.057 [2024-07-14 04:02:01.655672] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:43.057 [2024-07-14 04:02:01.655819] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:43.057 [2024-07-14 04:02:01.655984] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.057 [2024-07-14 04:02:01.656008] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.057 [2024-07-14 04:02:01.656025] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.057 [2024-07-14 04:02:01.658425] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:43.057 [2024-07-14 04:02:01.667538] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.057 [2024-07-14 04:02:01.667913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.057 [2024-07-14 04:02:01.668147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.057 [2024-07-14 04:02:01.668177] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:43.057 [2024-07-14 04:02:01.668201] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:43.057 [2024-07-14 04:02:01.668367] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:43.057 [2024-07-14 04:02:01.668518] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.057 [2024-07-14 04:02:01.668544] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.057 [2024-07-14 04:02:01.668560] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.057 [2024-07-14 04:02:01.670938] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:43.057 [2024-07-14 04:02:01.680058] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.057 [2024-07-14 04:02:01.680437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.057 [2024-07-14 04:02:01.680746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.057 [2024-07-14 04:02:01.680807] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:43.057 [2024-07-14 04:02:01.680824] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:43.057 [2024-07-14 04:02:01.681002] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:43.057 [2024-07-14 04:02:01.681226] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.057 [2024-07-14 04:02:01.681250] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.057 [2024-07-14 04:02:01.681266] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.057 [2024-07-14 04:02:01.683522] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:43.057 [2024-07-14 04:02:01.692806] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.057 [2024-07-14 04:02:01.693197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.057 [2024-07-14 04:02:01.693422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.057 [2024-07-14 04:02:01.693452] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:43.057 [2024-07-14 04:02:01.693471] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:43.057 [2024-07-14 04:02:01.693673] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:43.057 [2024-07-14 04:02:01.693843] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.057 [2024-07-14 04:02:01.693880] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.057 [2024-07-14 04:02:01.693899] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.057 [2024-07-14 04:02:01.696141] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:43.057 [2024-07-14 04:02:01.705426] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.057 [2024-07-14 04:02:01.705797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.057 [2024-07-14 04:02:01.706031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.057 [2024-07-14 04:02:01.706062] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:43.057 [2024-07-14 04:02:01.706082] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:43.057 [2024-07-14 04:02:01.706256] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:43.057 [2024-07-14 04:02:01.706407] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.057 [2024-07-14 04:02:01.706433] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.057 [2024-07-14 04:02:01.706450] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.057 [2024-07-14 04:02:01.708858] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:43.057 [2024-07-14 04:02:01.718052] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.057 [2024-07-14 04:02:01.718432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.057 [2024-07-14 04:02:01.718708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.057 [2024-07-14 04:02:01.718758] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:43.057 [2024-07-14 04:02:01.718776] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:43.057 [2024-07-14 04:02:01.718992] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:43.057 [2024-07-14 04:02:01.719126] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.057 [2024-07-14 04:02:01.719151] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.057 [2024-07-14 04:02:01.719168] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.057 [2024-07-14 04:02:01.721568] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:43.057 [2024-07-14 04:02:01.730596] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.057 [2024-07-14 04:02:01.730986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.057 [2024-07-14 04:02:01.731389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.057 [2024-07-14 04:02:01.731454] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:43.057 [2024-07-14 04:02:01.731473] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:43.057 [2024-07-14 04:02:01.731621] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:43.057 [2024-07-14 04:02:01.731790] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.057 [2024-07-14 04:02:01.731815] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.057 [2024-07-14 04:02:01.731833] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.057 [2024-07-14 04:02:01.734065] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:43.057 [2024-07-14 04:02:01.743069] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.057 [2024-07-14 04:02:01.743495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.057 [2024-07-14 04:02:01.743754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.057 [2024-07-14 04:02:01.743780] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:43.057 [2024-07-14 04:02:01.743814] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:43.057 [2024-07-14 04:02:01.743981] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:43.057 [2024-07-14 04:02:01.744099] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.057 [2024-07-14 04:02:01.744119] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.057 [2024-07-14 04:02:01.744133] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.057 [2024-07-14 04:02:01.746507] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:43.057 [2024-07-14 04:02:01.755669] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.057 [2024-07-14 04:02:01.756025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.057 [2024-07-14 04:02:01.756249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.057 [2024-07-14 04:02:01.756277] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:43.057 [2024-07-14 04:02:01.756295] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:43.057 [2024-07-14 04:02:01.756408] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:43.057 [2024-07-14 04:02:01.756577] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.058 [2024-07-14 04:02:01.756603] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.058 [2024-07-14 04:02:01.756620] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.058 [2024-07-14 04:02:01.758862] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:43.058 [2024-07-14 04:02:01.767930] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.058 [2024-07-14 04:02:01.768338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.058 [2024-07-14 04:02:01.768529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.058 [2024-07-14 04:02:01.768583] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:43.058 [2024-07-14 04:02:01.768625] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:43.058 [2024-07-14 04:02:01.768792] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:43.058 [2024-07-14 04:02:01.768978] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.058 [2024-07-14 04:02:01.769005] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.058 [2024-07-14 04:02:01.769022] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.058 [2024-07-14 04:02:01.771243] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:43.058 [2024-07-14 04:02:01.780518] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.058 [2024-07-14 04:02:01.780972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.058 [2024-07-14 04:02:01.781165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.058 [2024-07-14 04:02:01.781191] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:43.058 [2024-07-14 04:02:01.781207] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:43.058 [2024-07-14 04:02:01.781396] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:43.058 [2024-07-14 04:02:01.781590] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.058 [2024-07-14 04:02:01.781616] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.058 [2024-07-14 04:02:01.781630] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.058 [2024-07-14 04:02:01.783798] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:43.058 [2024-07-14 04:02:01.792846] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.058 [2024-07-14 04:02:01.793384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.058 [2024-07-14 04:02:01.793712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.058 [2024-07-14 04:02:01.793741] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:43.058 [2024-07-14 04:02:01.793759] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:43.058 [2024-07-14 04:02:01.793936] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:43.058 [2024-07-14 04:02:01.794106] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.058 [2024-07-14 04:02:01.794131] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.058 [2024-07-14 04:02:01.794148] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.058 [2024-07-14 04:02:01.796511] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:43.058 [2024-07-14 04:02:01.805518] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.058 [2024-07-14 04:02:01.805909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.058 [2024-07-14 04:02:01.806106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.058 [2024-07-14 04:02:01.806132] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:43.058 [2024-07-14 04:02:01.806148] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:43.058 [2024-07-14 04:02:01.806381] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:43.058 [2024-07-14 04:02:01.806534] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.058 [2024-07-14 04:02:01.806559] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.058 [2024-07-14 04:02:01.806575] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.058 [2024-07-14 04:02:01.808973] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:43.058 [2024-07-14 04:02:01.818056] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.058 [2024-07-14 04:02:01.818585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.058 [2024-07-14 04:02:01.818844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.058 [2024-07-14 04:02:01.818886] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:43.058 [2024-07-14 04:02:01.818908] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:43.058 [2024-07-14 04:02:01.819127] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:43.058 [2024-07-14 04:02:01.819317] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.058 [2024-07-14 04:02:01.819342] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.058 [2024-07-14 04:02:01.819364] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.058 [2024-07-14 04:02:01.821786] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:43.058 [2024-07-14 04:02:01.830640] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.058 [2024-07-14 04:02:01.831024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.058 [2024-07-14 04:02:01.831372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.058 [2024-07-14 04:02:01.831432] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:43.058 [2024-07-14 04:02:01.831451] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:43.058 [2024-07-14 04:02:01.831635] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:43.058 [2024-07-14 04:02:01.831804] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.058 [2024-07-14 04:02:01.831829] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.058 [2024-07-14 04:02:01.831846] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.058 [2024-07-14 04:02:01.834165] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:43.058 [2024-07-14 04:02:01.843204] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.058 [2024-07-14 04:02:01.843625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.058 [2024-07-14 04:02:01.843838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.058 [2024-07-14 04:02:01.843878] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:43.058 [2024-07-14 04:02:01.843899] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:43.058 [2024-07-14 04:02:01.844102] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:43.058 [2024-07-14 04:02:01.844273] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.058 [2024-07-14 04:02:01.844298] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.058 [2024-07-14 04:02:01.844315] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.058 [2024-07-14 04:02:01.846536] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:43.058 [2024-07-14 04:02:01.855864] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.058 [2024-07-14 04:02:01.856283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.058 [2024-07-14 04:02:01.856502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.058 [2024-07-14 04:02:01.856527] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:43.058 [2024-07-14 04:02:01.856543] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:43.058 [2024-07-14 04:02:01.856682] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:43.058 [2024-07-14 04:02:01.856942] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.058 [2024-07-14 04:02:01.856969] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.058 [2024-07-14 04:02:01.856985] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.058 [2024-07-14 04:02:01.859338] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:43.058 [2024-07-14 04:02:01.868345] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.058 [2024-07-14 04:02:01.868742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.058 [2024-07-14 04:02:01.868956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.058 [2024-07-14 04:02:01.868983] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:43.058 [2024-07-14 04:02:01.868999] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:43.058 [2024-07-14 04:02:01.869191] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:43.058 [2024-07-14 04:02:01.869381] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.058 [2024-07-14 04:02:01.869407] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.058 [2024-07-14 04:02:01.869423] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.058 [2024-07-14 04:02:01.871463] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:43.058 [2024-07-14 04:02:01.880930] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.058 [2024-07-14 04:02:01.881272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.058 [2024-07-14 04:02:01.881539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.058 [2024-07-14 04:02:01.881593] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:43.058 [2024-07-14 04:02:01.881611] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:43.058 [2024-07-14 04:02:01.881759] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:43.058 [2024-07-14 04:02:01.881905] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.059 [2024-07-14 04:02:01.881930] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.059 [2024-07-14 04:02:01.881947] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.059 [2024-07-14 04:02:01.884383] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:43.059 [2024-07-14 04:02:01.893534] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.059 [2024-07-14 04:02:01.893878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.059 [2024-07-14 04:02:01.894079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.059 [2024-07-14 04:02:01.894109] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:43.059 [2024-07-14 04:02:01.894127] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:43.059 [2024-07-14 04:02:01.894257] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:43.059 [2024-07-14 04:02:01.894426] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.059 [2024-07-14 04:02:01.894452] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.059 [2024-07-14 04:02:01.894468] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.059 [2024-07-14 04:02:01.896954] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:43.059 [2024-07-14 04:02:01.905977] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.059 [2024-07-14 04:02:01.906364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.059 [2024-07-14 04:02:01.906737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.059 [2024-07-14 04:02:01.906788] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:43.059 [2024-07-14 04:02:01.906807] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:43.059 [2024-07-14 04:02:01.906986] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:43.059 [2024-07-14 04:02:01.907119] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.059 [2024-07-14 04:02:01.907144] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.059 [2024-07-14 04:02:01.907160] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.059 [2024-07-14 04:02:01.909511] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:43.059 [2024-07-14 04:02:01.918573] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.059 [2024-07-14 04:02:01.918976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.059 [2024-07-14 04:02:01.919318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.059 [2024-07-14 04:02:01.919357] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:43.059 [2024-07-14 04:02:01.919372] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:43.059 [2024-07-14 04:02:01.919551] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:43.059 [2024-07-14 04:02:01.919703] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.059 [2024-07-14 04:02:01.919727] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.059 [2024-07-14 04:02:01.919743] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.059 [2024-07-14 04:02:01.922100] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:43.059 [2024-07-14 04:02:01.930966] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.059 [2024-07-14 04:02:01.931354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.059 [2024-07-14 04:02:01.931609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.059 [2024-07-14 04:02:01.931636] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:43.059 [2024-07-14 04:02:01.931668] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:43.059 [2024-07-14 04:02:01.931874] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:43.059 [2024-07-14 04:02:01.932053] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.059 [2024-07-14 04:02:01.932075] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.059 [2024-07-14 04:02:01.932089] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.059 [2024-07-14 04:02:01.934113] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:43.059 [2024-07-14 04:02:01.943772] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.059 [2024-07-14 04:02:01.944137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.059 [2024-07-14 04:02:01.944502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.059 [2024-07-14 04:02:01.944552] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:43.059 [2024-07-14 04:02:01.944571] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:43.059 [2024-07-14 04:02:01.944719] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:43.059 [2024-07-14 04:02:01.944921] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.059 [2024-07-14 04:02:01.944943] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.059 [2024-07-14 04:02:01.944958] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.059 [2024-07-14 04:02:01.947316] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:43.059 [2024-07-14 04:02:01.956475] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.059 [2024-07-14 04:02:01.956859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.059 [2024-07-14 04:02:01.957074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.059 [2024-07-14 04:02:01.957101] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:43.059 [2024-07-14 04:02:01.957118] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:43.059 [2024-07-14 04:02:01.957263] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:43.059 [2024-07-14 04:02:01.957466] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.059 [2024-07-14 04:02:01.957491] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.059 [2024-07-14 04:02:01.957507] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.059 [2024-07-14 04:02:01.959902] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:43.059 [2024-07-14 04:02:01.969179] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.059 [2024-07-14 04:02:01.969650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.059 [2024-07-14 04:02:01.969847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.059 [2024-07-14 04:02:01.969883] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:43.059 [2024-07-14 04:02:01.969925] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:43.059 [2024-07-14 04:02:01.970100] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:43.059 [2024-07-14 04:02:01.970236] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.059 [2024-07-14 04:02:01.970261] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.059 [2024-07-14 04:02:01.970277] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.059 [2024-07-14 04:02:01.972374] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:43.059 [2024-07-14 04:02:01.982080] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.059 [2024-07-14 04:02:01.982514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.059 [2024-07-14 04:02:01.982721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.059 [2024-07-14 04:02:01.982758] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:43.059 [2024-07-14 04:02:01.982778] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:43.059 [2024-07-14 04:02:01.982984] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:43.059 [2024-07-14 04:02:01.983120] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.059 [2024-07-14 04:02:01.983145] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.059 [2024-07-14 04:02:01.983161] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.318 [2024-07-14 04:02:01.985787] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:43.318 [2024-07-14 04:02:01.994700] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.318 [2024-07-14 04:02:01.995120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.319 [2024-07-14 04:02:01.995398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.319 [2024-07-14 04:02:01.995450] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:43.319 [2024-07-14 04:02:01.995469] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:43.319 [2024-07-14 04:02:01.995653] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:43.319 [2024-07-14 04:02:01.995823] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.319 [2024-07-14 04:02:01.995849] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.319 [2024-07-14 04:02:01.995873] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.319 [2024-07-14 04:02:01.998136] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:43.319 [2024-07-14 04:02:02.007385] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.319 [2024-07-14 04:02:02.007890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.319 [2024-07-14 04:02:02.008141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.319 [2024-07-14 04:02:02.008168] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:43.319 [2024-07-14 04:02:02.008189] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:43.319 [2024-07-14 04:02:02.008427] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:43.319 [2024-07-14 04:02:02.008599] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.319 [2024-07-14 04:02:02.008624] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.319 [2024-07-14 04:02:02.008641] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.319 [2024-07-14 04:02:02.010970] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:43.319 [2024-07-14 04:02:02.020202] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.319 [2024-07-14 04:02:02.020571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.319 [2024-07-14 04:02:02.020828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.319 [2024-07-14 04:02:02.020858] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:43.319 [2024-07-14 04:02:02.020895] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:43.319 [2024-07-14 04:02:02.021071] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:43.319 [2024-07-14 04:02:02.021259] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.319 [2024-07-14 04:02:02.021284] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.319 [2024-07-14 04:02:02.021300] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.319 [2024-07-14 04:02:02.023448] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:43.319 [2024-07-14 04:02:02.032901] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.319 [2024-07-14 04:02:02.033224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.319 [2024-07-14 04:02:02.033588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.319 [2024-07-14 04:02:02.033638] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:43.319 [2024-07-14 04:02:02.033656] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:43.319 [2024-07-14 04:02:02.033840] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:43.319 [2024-07-14 04:02:02.033982] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.319 [2024-07-14 04:02:02.034008] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.319 [2024-07-14 04:02:02.034024] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.319 [2024-07-14 04:02:02.036449] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:43.319 [2024-07-14 04:02:02.045507] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.319 [2024-07-14 04:02:02.045943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.319 [2024-07-14 04:02:02.046148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.319 [2024-07-14 04:02:02.046178] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:43.319 [2024-07-14 04:02:02.046197] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:43.319 [2024-07-14 04:02:02.046382] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:43.319 [2024-07-14 04:02:02.046551] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.319 [2024-07-14 04:02:02.046575] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.319 [2024-07-14 04:02:02.046592] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.319 [2024-07-14 04:02:02.048974] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:43.319 [2024-07-14 04:02:02.058129] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.319 [2024-07-14 04:02:02.058536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.319 [2024-07-14 04:02:02.058908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.319 [2024-07-14 04:02:02.058938] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:43.319 [2024-07-14 04:02:02.058957] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:43.319 [2024-07-14 04:02:02.059146] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:43.319 [2024-07-14 04:02:02.059335] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.319 [2024-07-14 04:02:02.059360] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.319 [2024-07-14 04:02:02.059376] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.319 [2024-07-14 04:02:02.061632] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:43.319 [2024-07-14 04:02:02.070945] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.319 [2024-07-14 04:02:02.071338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.319 [2024-07-14 04:02:02.071567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.319 [2024-07-14 04:02:02.071597] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:43.319 [2024-07-14 04:02:02.071616] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:43.319 [2024-07-14 04:02:02.071818] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:43.319 [2024-07-14 04:02:02.072016] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.319 [2024-07-14 04:02:02.072042] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.319 [2024-07-14 04:02:02.072058] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.319 [2024-07-14 04:02:02.074296] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:43.319 [2024-07-14 04:02:02.083634] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.319 [2024-07-14 04:02:02.084028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.319 [2024-07-14 04:02:02.084262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.319 [2024-07-14 04:02:02.084292] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:43.319 [2024-07-14 04:02:02.084311] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:43.319 [2024-07-14 04:02:02.084477] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:43.319 [2024-07-14 04:02:02.084648] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.319 [2024-07-14 04:02:02.084673] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.319 [2024-07-14 04:02:02.084690] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.319 [2024-07-14 04:02:02.086993] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:43.319 [2024-07-14 04:02:02.096259] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.319 [2024-07-14 04:02:02.096796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.319 [2024-07-14 04:02:02.097053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.319 [2024-07-14 04:02:02.097083] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:43.319 [2024-07-14 04:02:02.097102] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:43.319 [2024-07-14 04:02:02.097286] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:43.319 [2024-07-14 04:02:02.097480] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.319 [2024-07-14 04:02:02.097506] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.319 [2024-07-14 04:02:02.097523] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.319 [2024-07-14 04:02:02.099856] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:43.319 [2024-07-14 04:02:02.108801] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.319 [2024-07-14 04:02:02.109405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.319 [2024-07-14 04:02:02.109775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.319 [2024-07-14 04:02:02.109832] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:43.319 [2024-07-14 04:02:02.109851] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:43.319 [2024-07-14 04:02:02.110026] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:43.319 [2024-07-14 04:02:02.110215] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.319 [2024-07-14 04:02:02.110241] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.319 [2024-07-14 04:02:02.110257] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.320 [2024-07-14 04:02:02.112370] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:43.320 [2024-07-14 04:02:02.121281] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.320 [2024-07-14 04:02:02.121613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.320 [2024-07-14 04:02:02.121780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.320 [2024-07-14 04:02:02.121822] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:43.320 [2024-07-14 04:02:02.121842] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:43.320 [2024-07-14 04:02:02.122018] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:43.320 [2024-07-14 04:02:02.122188] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.320 [2024-07-14 04:02:02.122214] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.320 [2024-07-14 04:02:02.122230] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.320 [2024-07-14 04:02:02.124779] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:43.320 [2024-07-14 04:02:02.133642] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.320 [2024-07-14 04:02:02.134028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.320 [2024-07-14 04:02:02.134232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.320 [2024-07-14 04:02:02.134261] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:43.320 [2024-07-14 04:02:02.134280] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:43.320 [2024-07-14 04:02:02.134463] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:43.320 [2024-07-14 04:02:02.134634] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.320 [2024-07-14 04:02:02.134665] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.320 [2024-07-14 04:02:02.134682] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.320 [2024-07-14 04:02:02.136948] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:43.320 [2024-07-14 04:02:02.146173] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.320 [2024-07-14 04:02:02.146565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.320 [2024-07-14 04:02:02.146839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.320 [2024-07-14 04:02:02.146878] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:43.320 [2024-07-14 04:02:02.146900] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:43.320 [2024-07-14 04:02:02.147048] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:43.320 [2024-07-14 04:02:02.147272] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.320 [2024-07-14 04:02:02.147298] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.320 [2024-07-14 04:02:02.147315] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.320 [2024-07-14 04:02:02.149644] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:43.320 [2024-07-14 04:02:02.158953] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.320 [2024-07-14 04:02:02.159397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.320 [2024-07-14 04:02:02.159599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.320 [2024-07-14 04:02:02.159629] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:43.320 [2024-07-14 04:02:02.159647] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:43.320 [2024-07-14 04:02:02.159880] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:43.320 [2024-07-14 04:02:02.160051] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.320 [2024-07-14 04:02:02.160076] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.320 [2024-07-14 04:02:02.160093] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.320 [2024-07-14 04:02:02.162296] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:43.320 [2024-07-14 04:02:02.171607] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.320 [2024-07-14 04:02:02.171998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.320 [2024-07-14 04:02:02.172215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.320 [2024-07-14 04:02:02.172242] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:43.320 [2024-07-14 04:02:02.172259] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:43.320 [2024-07-14 04:02:02.172445] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:43.320 [2024-07-14 04:02:02.172641] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.320 [2024-07-14 04:02:02.172667] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.320 [2024-07-14 04:02:02.172689] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.320 [2024-07-14 04:02:02.175030] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:43.320 [2024-07-14 04:02:02.183985] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.320 [2024-07-14 04:02:02.184402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.320 [2024-07-14 04:02:02.184687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.320 [2024-07-14 04:02:02.184713] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:43.320 [2024-07-14 04:02:02.184744] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:43.320 [2024-07-14 04:02:02.184929] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:43.320 [2024-07-14 04:02:02.185100] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.320 [2024-07-14 04:02:02.185124] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.320 [2024-07-14 04:02:02.185140] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.320 [2024-07-14 04:02:02.187430] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:43.320 [2024-07-14 04:02:02.196602] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.320 [2024-07-14 04:02:02.196925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.320 [2024-07-14 04:02:02.197133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.320 [2024-07-14 04:02:02.197163] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:43.320 [2024-07-14 04:02:02.197181] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:43.320 [2024-07-14 04:02:02.197348] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:43.320 [2024-07-14 04:02:02.197536] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.320 [2024-07-14 04:02:02.197560] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.320 [2024-07-14 04:02:02.197577] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.320 [2024-07-14 04:02:02.199781] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:43.320 [2024-07-14 04:02:02.209171] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.320 [2024-07-14 04:02:02.209556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.320 [2024-07-14 04:02:02.209777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.320 [2024-07-14 04:02:02.209803] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:43.320 [2024-07-14 04:02:02.209819] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:43.320 [2024-07-14 04:02:02.210030] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:43.320 [2024-07-14 04:02:02.210236] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.320 [2024-07-14 04:02:02.210261] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.320 [2024-07-14 04:02:02.210278] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.320 [2024-07-14 04:02:02.212650] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:43.320 [2024-07-14 04:02:02.221882] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.320 [2024-07-14 04:02:02.222309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.320 [2024-07-14 04:02:02.222663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.320 [2024-07-14 04:02:02.222715] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:43.320 [2024-07-14 04:02:02.222733] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:43.320 [2024-07-14 04:02:02.222864] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:43.320 [2024-07-14 04:02:02.223024] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.320 [2024-07-14 04:02:02.223049] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.320 [2024-07-14 04:02:02.223065] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.320 [2024-07-14 04:02:02.225443] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:43.320 [2024-07-14 04:02:02.234531] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.320 [2024-07-14 04:02:02.234918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.320 [2024-07-14 04:02:02.235122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.320 [2024-07-14 04:02:02.235162] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:43.320 [2024-07-14 04:02:02.235180] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:43.320 [2024-07-14 04:02:02.235311] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:43.320 [2024-07-14 04:02:02.235498] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.320 [2024-07-14 04:02:02.235524] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.320 [2024-07-14 04:02:02.235540] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.321 [2024-07-14 04:02:02.237782] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:43.321 [2024-07-14 04:02:02.247017] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.321 [2024-07-14 04:02:02.247540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.321 [2024-07-14 04:02:02.247962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.321 [2024-07-14 04:02:02.247992] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:43.321 [2024-07-14 04:02:02.248011] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:43.321 [2024-07-14 04:02:02.248158] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:43.321 [2024-07-14 04:02:02.248310] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.321 [2024-07-14 04:02:02.248334] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.321 [2024-07-14 04:02:02.248350] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.321 [2024-07-14 04:02:02.250733] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:43.580 [2024-07-14 04:02:02.259491] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.580 [2024-07-14 04:02:02.259893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.580 [2024-07-14 04:02:02.260107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.580 [2024-07-14 04:02:02.260148] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:43.580 [2024-07-14 04:02:02.260167] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:43.580 [2024-07-14 04:02:02.260351] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:43.580 [2024-07-14 04:02:02.260486] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.580 [2024-07-14 04:02:02.260511] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.580 [2024-07-14 04:02:02.260527] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.580 [2024-07-14 04:02:02.262948] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:43.580 [2024-07-14 04:02:02.272191] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.580 [2024-07-14 04:02:02.272524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.580 [2024-07-14 04:02:02.272797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.580 [2024-07-14 04:02:02.272827] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:43.580 [2024-07-14 04:02:02.272846] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:43.580 [2024-07-14 04:02:02.273047] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:43.580 [2024-07-14 04:02:02.273200] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.580 [2024-07-14 04:02:02.273225] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.580 [2024-07-14 04:02:02.273241] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.580 [2024-07-14 04:02:02.275503] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:43.580 [2024-07-14 04:02:02.284648] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.580 [2024-07-14 04:02:02.285045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.580 [2024-07-14 04:02:02.285218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.580 [2024-07-14 04:02:02.285244] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:43.580 [2024-07-14 04:02:02.285261] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:43.580 [2024-07-14 04:02:02.285473] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:43.580 [2024-07-14 04:02:02.285644] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.580 [2024-07-14 04:02:02.285668] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.580 [2024-07-14 04:02:02.285685] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.580 [2024-07-14 04:02:02.288117] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:43.580 [2024-07-14 04:02:02.297126] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.580 [2024-07-14 04:02:02.297508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.580 [2024-07-14 04:02:02.297862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.580 [2024-07-14 04:02:02.297935] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:43.580 [2024-07-14 04:02:02.297953] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:43.580 [2024-07-14 04:02:02.298117] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:43.580 [2024-07-14 04:02:02.298269] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.580 [2024-07-14 04:02:02.298294] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.580 [2024-07-14 04:02:02.298310] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.580 [2024-07-14 04:02:02.300764] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:43.580 [2024-07-14 04:02:02.309854] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.580 [2024-07-14 04:02:02.310291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.580 [2024-07-14 04:02:02.310489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.580 [2024-07-14 04:02:02.310518] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:43.580 [2024-07-14 04:02:02.310537] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:43.580 [2024-07-14 04:02:02.310684] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:43.580 [2024-07-14 04:02:02.310883] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.580 [2024-07-14 04:02:02.310919] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.580 [2024-07-14 04:02:02.310935] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.580 [2024-07-14 04:02:02.313138] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:43.580 [2024-07-14 04:02:02.322413] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.580 [2024-07-14 04:02:02.322893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.580 [2024-07-14 04:02:02.323088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.580 [2024-07-14 04:02:02.323115] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:43.580 [2024-07-14 04:02:02.323131] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:43.580 [2024-07-14 04:02:02.323365] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:43.580 [2024-07-14 04:02:02.323552] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.580 [2024-07-14 04:02:02.323577] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.580 [2024-07-14 04:02:02.323593] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.580 [2024-07-14 04:02:02.325850] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:43.580 [2024-07-14 04:02:02.334952] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.580 [2024-07-14 04:02:02.335526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.580 [2024-07-14 04:02:02.335933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.580 [2024-07-14 04:02:02.335963] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:43.580 [2024-07-14 04:02:02.335981] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:43.580 [2024-07-14 04:02:02.336165] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:43.580 [2024-07-14 04:02:02.336371] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.580 [2024-07-14 04:02:02.336395] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.580 [2024-07-14 04:02:02.336411] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.580 [2024-07-14 04:02:02.338668] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:43.580 [2024-07-14 04:02:02.347379] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.580 [2024-07-14 04:02:02.347781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.580 [2024-07-14 04:02:02.347990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.580 [2024-07-14 04:02:02.348023] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:43.580 [2024-07-14 04:02:02.348042] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:43.580 [2024-07-14 04:02:02.348208] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:43.580 [2024-07-14 04:02:02.348432] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.580 [2024-07-14 04:02:02.348457] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.580 [2024-07-14 04:02:02.348473] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.580 [2024-07-14 04:02:02.350748] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:43.580 [2024-07-14 04:02:02.359932] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.580 [2024-07-14 04:02:02.360332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.580 [2024-07-14 04:02:02.360618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.580 [2024-07-14 04:02:02.360668] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:43.580 [2024-07-14 04:02:02.360686] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:43.580 [2024-07-14 04:02:02.360833] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:43.580 [2024-07-14 04:02:02.360995] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.580 [2024-07-14 04:02:02.361020] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.581 [2024-07-14 04:02:02.361037] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.581 [2024-07-14 04:02:02.363328] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:43.581 [2024-07-14 04:02:02.372603] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.581 [2024-07-14 04:02:02.372975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.581 [2024-07-14 04:02:02.373163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.581 [2024-07-14 04:02:02.373204] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:43.581 [2024-07-14 04:02:02.373225] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:43.581 [2024-07-14 04:02:02.373335] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:43.581 [2024-07-14 04:02:02.373533] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.581 [2024-07-14 04:02:02.373557] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.581 [2024-07-14 04:02:02.373574] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.581 [2024-07-14 04:02:02.375915] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:43.581 [2024-07-14 04:02:02.385148] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.581 [2024-07-14 04:02:02.385494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.581 [2024-07-14 04:02:02.385728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.581 [2024-07-14 04:02:02.385758] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:43.581 [2024-07-14 04:02:02.385776] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:43.581 [2024-07-14 04:02:02.385990] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:43.581 [2024-07-14 04:02:02.386178] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.581 [2024-07-14 04:02:02.386203] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.581 [2024-07-14 04:02:02.386219] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.581 [2024-07-14 04:02:02.388604] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:43.581 [2024-07-14 04:02:02.397783] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.581 [2024-07-14 04:02:02.398149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.581 [2024-07-14 04:02:02.398409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.581 [2024-07-14 04:02:02.398438] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:43.581 [2024-07-14 04:02:02.398456] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:43.581 [2024-07-14 04:02:02.398621] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:43.581 [2024-07-14 04:02:02.398773] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.581 [2024-07-14 04:02:02.398797] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.581 [2024-07-14 04:02:02.398813] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.581 [2024-07-14 04:02:02.401186] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:43.581 [2024-07-14 04:02:02.410390] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.581 [2024-07-14 04:02:02.410733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.581 [2024-07-14 04:02:02.410937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.581 [2024-07-14 04:02:02.410967] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:43.581 [2024-07-14 04:02:02.410986] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:43.581 [2024-07-14 04:02:02.411121] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:43.581 [2024-07-14 04:02:02.411309] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.581 [2024-07-14 04:02:02.411333] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.581 [2024-07-14 04:02:02.411349] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.581 [2024-07-14 04:02:02.413641] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:43.581 [2024-07-14 04:02:02.422885] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.581 [2024-07-14 04:02:02.423328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.581 [2024-07-14 04:02:02.423731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.581 [2024-07-14 04:02:02.423791] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:43.581 [2024-07-14 04:02:02.423814] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:43.581 [2024-07-14 04:02:02.423992] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:43.581 [2024-07-14 04:02:02.424145] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.581 [2024-07-14 04:02:02.424169] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.581 [2024-07-14 04:02:02.424185] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.581 [2024-07-14 04:02:02.426568] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:43.581 [2024-07-14 04:02:02.435562] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.581 [2024-07-14 04:02:02.436015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.581 [2024-07-14 04:02:02.436362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.581 [2024-07-14 04:02:02.436412] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:43.581 [2024-07-14 04:02:02.436431] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:43.581 [2024-07-14 04:02:02.436614] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:43.581 [2024-07-14 04:02:02.436748] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.581 [2024-07-14 04:02:02.436772] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.581 [2024-07-14 04:02:02.436788] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.581 [2024-07-14 04:02:02.439124] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:43.581 [2024-07-14 04:02:02.448081] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.581 [2024-07-14 04:02:02.448493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.581 [2024-07-14 04:02:02.448932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.581 [2024-07-14 04:02:02.448962] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:43.581 [2024-07-14 04:02:02.448989] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:43.581 [2024-07-14 04:02:02.449209] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:43.581 [2024-07-14 04:02:02.449384] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.581 [2024-07-14 04:02:02.449409] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.581 [2024-07-14 04:02:02.449425] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.581 [2024-07-14 04:02:02.451681] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:43.581 [2024-07-14 04:02:02.460533] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.581 [2024-07-14 04:02:02.460991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.581 [2024-07-14 04:02:02.461237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.581 [2024-07-14 04:02:02.461267] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:43.581 [2024-07-14 04:02:02.461285] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:43.581 [2024-07-14 04:02:02.461487] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:43.581 [2024-07-14 04:02:02.461692] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.581 [2024-07-14 04:02:02.461717] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.581 [2024-07-14 04:02:02.461733] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.581 [2024-07-14 04:02:02.464106] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:43.581 [2024-07-14 04:02:02.473066] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.581 [2024-07-14 04:02:02.473475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.581 [2024-07-14 04:02:02.473791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.581 [2024-07-14 04:02:02.473843] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:43.581 [2024-07-14 04:02:02.473861] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:43.581 [2024-07-14 04:02:02.474039] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:43.581 [2024-07-14 04:02:02.474226] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.581 [2024-07-14 04:02:02.474250] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.581 [2024-07-14 04:02:02.474265] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.581 [2024-07-14 04:02:02.476719] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:43.581 [2024-07-14 04:02:02.485698] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.581 [2024-07-14 04:02:02.486109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.581 [2024-07-14 04:02:02.486314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.581 [2024-07-14 04:02:02.486339] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:43.581 [2024-07-14 04:02:02.486355] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:43.581 [2024-07-14 04:02:02.486581] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:43.581 [2024-07-14 04:02:02.486755] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.582 [2024-07-14 04:02:02.486785] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.582 [2024-07-14 04:02:02.486802] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.582 [2024-07-14 04:02:02.489304] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:43.582 [2024-07-14 04:02:02.498382] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.582 [2024-07-14 04:02:02.498808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.582 [2024-07-14 04:02:02.499012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.582 [2024-07-14 04:02:02.499043] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:43.582 [2024-07-14 04:02:02.499062] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:43.582 [2024-07-14 04:02:02.499228] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:43.582 [2024-07-14 04:02:02.499361] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.582 [2024-07-14 04:02:02.499385] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.582 [2024-07-14 04:02:02.499401] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.582 [2024-07-14 04:02:02.501532] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:43.582 [2024-07-14 04:02:02.510949] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.582 [2024-07-14 04:02:02.511329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.582 [2024-07-14 04:02:02.511568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.582 [2024-07-14 04:02:02.511596] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:43.582 [2024-07-14 04:02:02.511615] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:43.582 [2024-07-14 04:02:02.511744] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:43.582 [2024-07-14 04:02:02.511924] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.582 [2024-07-14 04:02:02.511949] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.582 [2024-07-14 04:02:02.511966] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.582 [2024-07-14 04:02:02.514408] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
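The secondary error repeated in every cycle, "Failed to flush tqpair=0x1788030 (9): Bad file descriptor", is errno 9 (EBADF): after the connect attempt fails and the socket is torn down, the qpair no longer holds a valid file descriptor, so the flush cannot perform any I/O. A small standalone sketch, again not SPDK code, showing how an operation on an already-closed descriptor produces the same errno:

/* Standalone illustration, not SPDK code: I/O on a descriptor that has
 * already been closed fails with errno 9 (EBADF), matching the
 * "Failed to flush tqpair=... (9): Bad file descriptor" lines above. */
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    close(fd);                                    /* socket torn down, fd now invalid */

    if (write(fd, "x", 1) < 0) {
        /* Expected output: write() failed, errno = 9 (Bad file descriptor) */
        printf("write() failed, errno = %d (%s)\n", errno, strerror(errno));
    }
    return 0;
}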
00:29:43.841 [2024-07-14 04:02:02.523625] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.841 [2024-07-14 04:02:02.524046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.841 [2024-07-14 04:02:02.524320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.841 [2024-07-14 04:02:02.524373] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:43.841 [2024-07-14 04:02:02.524392] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:43.841 [2024-07-14 04:02:02.524539] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:43.841 [2024-07-14 04:02:02.524711] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.841 [2024-07-14 04:02:02.524735] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.841 [2024-07-14 04:02:02.524757] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.841 [2024-07-14 04:02:02.527008] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:43.841 [2024-07-14 04:02:02.536064] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.841 [2024-07-14 04:02:02.536487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.841 [2024-07-14 04:02:02.536711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.842 [2024-07-14 04:02:02.536754] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:43.842 [2024-07-14 04:02:02.536773] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:43.842 [2024-07-14 04:02:02.536947] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:43.842 [2024-07-14 04:02:02.537117] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.842 [2024-07-14 04:02:02.537142] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.842 [2024-07-14 04:02:02.537159] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.842 [2024-07-14 04:02:02.539356] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:43.842 [2024-07-14 04:02:02.548705] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.842 [2024-07-14 04:02:02.549097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.842 [2024-07-14 04:02:02.549308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.842 [2024-07-14 04:02:02.549333] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:43.842 [2024-07-14 04:02:02.549348] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:43.842 [2024-07-14 04:02:02.549486] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:43.842 [2024-07-14 04:02:02.549639] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.842 [2024-07-14 04:02:02.549663] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.842 [2024-07-14 04:02:02.549679] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.842 [2024-07-14 04:02:02.551825] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:43.842 [2024-07-14 04:02:02.561418] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.842 [2024-07-14 04:02:02.561781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.842 [2024-07-14 04:02:02.561981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.842 [2024-07-14 04:02:02.562011] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:43.842 [2024-07-14 04:02:02.562029] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:43.842 [2024-07-14 04:02:02.562193] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:43.842 [2024-07-14 04:02:02.562345] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.842 [2024-07-14 04:02:02.562369] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.842 [2024-07-14 04:02:02.562385] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.842 [2024-07-14 04:02:02.564794] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:43.842 [2024-07-14 04:02:02.573921] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.842 [2024-07-14 04:02:02.574344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.842 [2024-07-14 04:02:02.574654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.842 [2024-07-14 04:02:02.574694] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:43.842 [2024-07-14 04:02:02.574710] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:43.842 [2024-07-14 04:02:02.574860] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:43.842 [2024-07-14 04:02:02.575056] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.842 [2024-07-14 04:02:02.575081] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.842 [2024-07-14 04:02:02.575097] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.842 [2024-07-14 04:02:02.577424] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:43.842 [2024-07-14 04:02:02.586646] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.842 [2024-07-14 04:02:02.587026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.842 [2024-07-14 04:02:02.587377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.842 [2024-07-14 04:02:02.587433] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:43.842 [2024-07-14 04:02:02.587451] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:43.842 [2024-07-14 04:02:02.587562] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:43.842 [2024-07-14 04:02:02.587731] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.842 [2024-07-14 04:02:02.587756] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.842 [2024-07-14 04:02:02.587773] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.842 [2024-07-14 04:02:02.589946] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:43.842 [2024-07-14 04:02:02.599203] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.842 [2024-07-14 04:02:02.599587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.842 [2024-07-14 04:02:02.599816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.842 [2024-07-14 04:02:02.599845] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:43.842 [2024-07-14 04:02:02.599863] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:43.842 [2024-07-14 04:02:02.600059] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:43.842 [2024-07-14 04:02:02.600229] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.842 [2024-07-14 04:02:02.600253] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.842 [2024-07-14 04:02:02.600269] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.842 [2024-07-14 04:02:02.602507] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:43.842 [2024-07-14 04:02:02.611774] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.842 [2024-07-14 04:02:02.612114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.842 [2024-07-14 04:02:02.612349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.842 [2024-07-14 04:02:02.612395] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:43.842 [2024-07-14 04:02:02.612413] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:43.842 [2024-07-14 04:02:02.612597] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:43.843 [2024-07-14 04:02:02.612766] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.843 [2024-07-14 04:02:02.612790] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.843 [2024-07-14 04:02:02.612807] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.843 [2024-07-14 04:02:02.614999] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:43.843 [2024-07-14 04:02:02.624441] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.843 [2024-07-14 04:02:02.624939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.843 [2024-07-14 04:02:02.625150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.843 [2024-07-14 04:02:02.625179] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:43.843 [2024-07-14 04:02:02.625197] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:43.843 [2024-07-14 04:02:02.625350] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:43.843 [2024-07-14 04:02:02.625501] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.843 [2024-07-14 04:02:02.625525] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.843 [2024-07-14 04:02:02.625541] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.843 [2024-07-14 04:02:02.627891] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:43.843 [2024-07-14 04:02:02.637036] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.843 [2024-07-14 04:02:02.637510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.843 [2024-07-14 04:02:02.637718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.843 [2024-07-14 04:02:02.637747] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:43.843 [2024-07-14 04:02:02.637766] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:43.843 [2024-07-14 04:02:02.637924] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:43.843 [2024-07-14 04:02:02.638094] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.843 [2024-07-14 04:02:02.638118] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.843 [2024-07-14 04:02:02.638134] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.843 [2024-07-14 04:02:02.640338] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:43.843 [2024-07-14 04:02:02.649631] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.843 [2024-07-14 04:02:02.649992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.843 [2024-07-14 04:02:02.650186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.843 [2024-07-14 04:02:02.650218] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:43.843 [2024-07-14 04:02:02.650255] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:43.843 [2024-07-14 04:02:02.650439] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:43.843 [2024-07-14 04:02:02.650627] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.843 [2024-07-14 04:02:02.650657] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.843 [2024-07-14 04:02:02.650673] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.843 [2024-07-14 04:02:02.652947] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:43.843 [2024-07-14 04:02:02.662249] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.843 [2024-07-14 04:02:02.662753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.843 [2024-07-14 04:02:02.662983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.843 [2024-07-14 04:02:02.663013] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:43.843 [2024-07-14 04:02:02.663031] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:43.843 [2024-07-14 04:02:02.663179] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:43.843 [2024-07-14 04:02:02.663312] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.843 [2024-07-14 04:02:02.663336] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.843 [2024-07-14 04:02:02.663353] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.843 [2024-07-14 04:02:02.665724] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:43.843 [2024-07-14 04:02:02.674769] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.843 [2024-07-14 04:02:02.675121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.843 [2024-07-14 04:02:02.675350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.843 [2024-07-14 04:02:02.675397] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:43.843 [2024-07-14 04:02:02.675415] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:43.843 [2024-07-14 04:02:02.675563] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:43.843 [2024-07-14 04:02:02.675749] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.843 [2024-07-14 04:02:02.675774] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.843 [2024-07-14 04:02:02.675790] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.843 [2024-07-14 04:02:02.678109] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:43.843 [2024-07-14 04:02:02.687480] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.843 [2024-07-14 04:02:02.687884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.843 [2024-07-14 04:02:02.688094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.843 [2024-07-14 04:02:02.688123] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:43.843 [2024-07-14 04:02:02.688141] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:43.843 [2024-07-14 04:02:02.688306] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:43.843 [2024-07-14 04:02:02.688475] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.843 [2024-07-14 04:02:02.688502] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.843 [2024-07-14 04:02:02.688519] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.843 [2024-07-14 04:02:02.690923] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:43.843 [2024-07-14 04:02:02.700104] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.843 [2024-07-14 04:02:02.700525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.843 [2024-07-14 04:02:02.700726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.844 [2024-07-14 04:02:02.700755] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:43.844 [2024-07-14 04:02:02.700773] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:43.844 [2024-07-14 04:02:02.700931] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:43.844 [2024-07-14 04:02:02.701137] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.844 [2024-07-14 04:02:02.701162] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.844 [2024-07-14 04:02:02.701178] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.844 [2024-07-14 04:02:02.703486] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:43.844 [2024-07-14 04:02:02.712725] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.844 [2024-07-14 04:02:02.713131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.844 [2024-07-14 04:02:02.713374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.844 [2024-07-14 04:02:02.713403] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:43.844 [2024-07-14 04:02:02.713421] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:43.844 [2024-07-14 04:02:02.713603] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:43.844 [2024-07-14 04:02:02.713772] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.844 [2024-07-14 04:02:02.713797] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.844 [2024-07-14 04:02:02.713813] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.844 [2024-07-14 04:02:02.716133] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:43.844 [2024-07-14 04:02:02.725488] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.844 [2024-07-14 04:02:02.725935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.844 [2024-07-14 04:02:02.726134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.844 [2024-07-14 04:02:02.726164] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:43.844 [2024-07-14 04:02:02.726187] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:43.844 [2024-07-14 04:02:02.726407] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:43.844 [2024-07-14 04:02:02.726595] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.844 [2024-07-14 04:02:02.726620] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.844 [2024-07-14 04:02:02.726636] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.844 [2024-07-14 04:02:02.728855] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:43.844 [2024-07-14 04:02:02.738051] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.844 [2024-07-14 04:02:02.738473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.844 [2024-07-14 04:02:02.738644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.844 [2024-07-14 04:02:02.738673] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:43.844 [2024-07-14 04:02:02.738691] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:43.844 [2024-07-14 04:02:02.738838] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:43.844 [2024-07-14 04:02:02.739016] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.844 [2024-07-14 04:02:02.739042] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.844 [2024-07-14 04:02:02.739058] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.844 [2024-07-14 04:02:02.741151] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:43.844 [2024-07-14 04:02:02.750720] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.844 [2024-07-14 04:02:02.751086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.844 [2024-07-14 04:02:02.751306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.844 [2024-07-14 04:02:02.751335] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:43.844 [2024-07-14 04:02:02.751353] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:43.844 [2024-07-14 04:02:02.751537] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:43.844 [2024-07-14 04:02:02.751707] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.844 [2024-07-14 04:02:02.751732] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.844 [2024-07-14 04:02:02.751748] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.844 [2024-07-14 04:02:02.753956] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:43.844 [2024-07-14 04:02:02.763406] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.844 [2024-07-14 04:02:02.763800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.844 [2024-07-14 04:02:02.764042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.844 [2024-07-14 04:02:02.764091] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:43.844 [2024-07-14 04:02:02.764110] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:43.844 [2024-07-14 04:02:02.764300] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:43.844 [2024-07-14 04:02:02.764487] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.844 [2024-07-14 04:02:02.764512] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.844 [2024-07-14 04:02:02.764528] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.844 [2024-07-14 04:02:02.766856] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:43.844 [2024-07-14 04:02:02.776010] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.844 [2024-07-14 04:02:02.776393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.844 [2024-07-14 04:02:02.776627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.844 [2024-07-14 04:02:02.776674] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:43.844 [2024-07-14 04:02:02.776692] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:43.844 [2024-07-14 04:02:02.776886] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:43.844 [2024-07-14 04:02:02.777092] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.844 [2024-07-14 04:02:02.777117] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.844 [2024-07-14 04:02:02.777133] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.845 [2024-07-14 04:02:02.779576] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:44.103 [2024-07-14 04:02:02.788639] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.103 [2024-07-14 04:02:02.788969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.104 [2024-07-14 04:02:02.789195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.104 [2024-07-14 04:02:02.789242] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:44.104 [2024-07-14 04:02:02.789261] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:44.104 [2024-07-14 04:02:02.789480] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:44.104 [2024-07-14 04:02:02.789650] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.104 [2024-07-14 04:02:02.789675] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.104 [2024-07-14 04:02:02.789691] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.104 [2024-07-14 04:02:02.791859] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:44.104 [2024-07-14 04:02:02.801154] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.104 [2024-07-14 04:02:02.801558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.104 [2024-07-14 04:02:02.801766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.104 [2024-07-14 04:02:02.801795] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:44.104 [2024-07-14 04:02:02.801813] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:44.104 [2024-07-14 04:02:02.801992] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:44.104 [2024-07-14 04:02:02.802126] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.104 [2024-07-14 04:02:02.802147] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.104 [2024-07-14 04:02:02.802161] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.104 [2024-07-14 04:02:02.804538] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:44.104 [2024-07-14 04:02:02.813533] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.104 [2024-07-14 04:02:02.813899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.104 [2024-07-14 04:02:02.814087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.104 [2024-07-14 04:02:02.814113] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:44.104 [2024-07-14 04:02:02.814130] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:44.104 [2024-07-14 04:02:02.814271] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:44.104 [2024-07-14 04:02:02.814441] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.104 [2024-07-14 04:02:02.814465] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.104 [2024-07-14 04:02:02.814490] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.104 [2024-07-14 04:02:02.816690] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:44.104 [2024-07-14 04:02:02.826172] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.104 [2024-07-14 04:02:02.826567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.104 [2024-07-14 04:02:02.826805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.104 [2024-07-14 04:02:02.826834] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:44.104 [2024-07-14 04:02:02.826852] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:44.104 [2024-07-14 04:02:02.827030] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:44.104 [2024-07-14 04:02:02.827201] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.104 [2024-07-14 04:02:02.827225] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.104 [2024-07-14 04:02:02.827241] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.104 [2024-07-14 04:02:02.829369] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:44.104 [2024-07-14 04:02:02.838422] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.104 [2024-07-14 04:02:02.838833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.104 [2024-07-14 04:02:02.839054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.104 [2024-07-14 04:02:02.839080] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:44.104 [2024-07-14 04:02:02.839096] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:44.104 [2024-07-14 04:02:02.839276] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:44.104 [2024-07-14 04:02:02.839452] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.104 [2024-07-14 04:02:02.839476] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.104 [2024-07-14 04:02:02.839493] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.104 [2024-07-14 04:02:02.841714] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:44.104 [2024-07-14 04:02:02.850933] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.104 [2024-07-14 04:02:02.851319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.104 [2024-07-14 04:02:02.851559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.104 [2024-07-14 04:02:02.851601] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:44.104 [2024-07-14 04:02:02.851617] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:44.104 [2024-07-14 04:02:02.851799] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:44.104 [2024-07-14 04:02:02.852034] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.104 [2024-07-14 04:02:02.852059] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.104 [2024-07-14 04:02:02.852076] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.104 [2024-07-14 04:02:02.854384] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:44.104 [2024-07-14 04:02:02.863463] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.104 [2024-07-14 04:02:02.863842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.104 [2024-07-14 04:02:02.864035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.104 [2024-07-14 04:02:02.864065] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:44.104 [2024-07-14 04:02:02.864083] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:44.104 [2024-07-14 04:02:02.864247] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:44.104 [2024-07-14 04:02:02.864417] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.104 [2024-07-14 04:02:02.864441] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.104 [2024-07-14 04:02:02.864458] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.104 [2024-07-14 04:02:02.866679] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:44.104 [2024-07-14 04:02:02.876107] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.104 [2024-07-14 04:02:02.876464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.104 [2024-07-14 04:02:02.876688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.104 [2024-07-14 04:02:02.876725] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:44.104 [2024-07-14 04:02:02.876744] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:44.104 [2024-07-14 04:02:02.876956] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:44.104 [2024-07-14 04:02:02.877090] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.104 [2024-07-14 04:02:02.877114] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.104 [2024-07-14 04:02:02.877139] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.104 [2024-07-14 04:02:02.879450] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:44.104 [2024-07-14 04:02:02.888914] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.104 [2024-07-14 04:02:02.889321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.104 [2024-07-14 04:02:02.889555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.104 [2024-07-14 04:02:02.889596] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:44.104 [2024-07-14 04:02:02.889613] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:44.104 [2024-07-14 04:02:02.889766] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:44.104 [2024-07-14 04:02:02.890000] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.104 [2024-07-14 04:02:02.890026] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.104 [2024-07-14 04:02:02.890043] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.104 [2024-07-14 04:02:02.892261] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:44.104 [2024-07-14 04:02:02.901588] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.104 [2024-07-14 04:02:02.901973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.104 [2024-07-14 04:02:02.902350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.104 [2024-07-14 04:02:02.902405] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:44.104 [2024-07-14 04:02:02.902423] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:44.104 [2024-07-14 04:02:02.902606] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:44.104 [2024-07-14 04:02:02.902794] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.104 [2024-07-14 04:02:02.902818] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.104 [2024-07-14 04:02:02.902834] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.104 [2024-07-14 04:02:02.905262] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:44.105 [2024-07-14 04:02:02.913964] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.105 [2024-07-14 04:02:02.914387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.105 [2024-07-14 04:02:02.914707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.105 [2024-07-14 04:02:02.914761] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:44.105 [2024-07-14 04:02:02.914779] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:44.105 [2024-07-14 04:02:02.914955] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:44.105 [2024-07-14 04:02:02.915089] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.105 [2024-07-14 04:02:02.915112] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.105 [2024-07-14 04:02:02.915133] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.105 [2024-07-14 04:02:02.917535] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:44.105 [2024-07-14 04:02:02.926590] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.105 [2024-07-14 04:02:02.927031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.105 [2024-07-14 04:02:02.927269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.105 [2024-07-14 04:02:02.927298] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:44.105 [2024-07-14 04:02:02.927316] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:44.105 [2024-07-14 04:02:02.927446] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:44.105 [2024-07-14 04:02:02.927616] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.105 [2024-07-14 04:02:02.927641] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.105 [2024-07-14 04:02:02.927657] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.105 [2024-07-14 04:02:02.929750] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:44.105 [2024-07-14 04:02:02.939316] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.105 [2024-07-14 04:02:02.939721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.105 [2024-07-14 04:02:02.939946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.105 [2024-07-14 04:02:02.939989] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:44.105 [2024-07-14 04:02:02.940008] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:44.105 [2024-07-14 04:02:02.940173] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:44.105 [2024-07-14 04:02:02.940332] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.105 [2024-07-14 04:02:02.940356] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.105 [2024-07-14 04:02:02.940372] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.105 [2024-07-14 04:02:02.942826] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:44.105 [2024-07-14 04:02:02.951786] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.105 [2024-07-14 04:02:02.952226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.105 [2024-07-14 04:02:02.952495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.105 [2024-07-14 04:02:02.952521] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:44.105 [2024-07-14 04:02:02.952537] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:44.105 [2024-07-14 04:02:02.952700] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:44.105 [2024-07-14 04:02:02.952852] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.105 [2024-07-14 04:02:02.952888] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.105 [2024-07-14 04:02:02.952905] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.105 [2024-07-14 04:02:02.955233] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:44.105 [2024-07-14 04:02:02.964570] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.105 [2024-07-14 04:02:02.964931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.105 [2024-07-14 04:02:02.965195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.105 [2024-07-14 04:02:02.965224] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:44.105 [2024-07-14 04:02:02.965242] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:44.105 [2024-07-14 04:02:02.965407] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:44.105 [2024-07-14 04:02:02.965541] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.105 [2024-07-14 04:02:02.965565] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.105 [2024-07-14 04:02:02.965581] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.105 [2024-07-14 04:02:02.967956] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:44.105 [2024-07-14 04:02:02.977084] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.105 [2024-07-14 04:02:02.977458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.105 [2024-07-14 04:02:02.977765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.105 [2024-07-14 04:02:02.977826] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:44.105 [2024-07-14 04:02:02.977845] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:44.105 [2024-07-14 04:02:02.978002] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:44.105 [2024-07-14 04:02:02.978173] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.105 [2024-07-14 04:02:02.978197] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.105 [2024-07-14 04:02:02.978213] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.105 [2024-07-14 04:02:02.980488] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:44.105 [2024-07-14 04:02:02.989646] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.105 [2024-07-14 04:02:02.990054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.105 [2024-07-14 04:02:02.990280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.105 [2024-07-14 04:02:02.990327] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:44.105 [2024-07-14 04:02:02.990345] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:44.105 [2024-07-14 04:02:02.990492] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:44.105 [2024-07-14 04:02:02.990662] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.105 [2024-07-14 04:02:02.990686] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.105 [2024-07-14 04:02:02.990702] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.105 [2024-07-14 04:02:02.993254] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:44.105 [2024-07-14 04:02:03.002006] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.105 [2024-07-14 04:02:03.002370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.105 [2024-07-14 04:02:03.002615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.105 [2024-07-14 04:02:03.002641] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:44.105 [2024-07-14 04:02:03.002657] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:44.105 [2024-07-14 04:02:03.002857] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:44.105 [2024-07-14 04:02:03.003037] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.105 [2024-07-14 04:02:03.003061] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.105 [2024-07-14 04:02:03.003078] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.105 [2024-07-14 04:02:03.005043] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:44.105 [2024-07-14 04:02:03.014618] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.105 [2024-07-14 04:02:03.014988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.105 [2024-07-14 04:02:03.015233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.105 [2024-07-14 04:02:03.015287] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:44.105 [2024-07-14 04:02:03.015306] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:44.105 [2024-07-14 04:02:03.015417] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:44.105 [2024-07-14 04:02:03.015586] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.105 [2024-07-14 04:02:03.015610] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.105 [2024-07-14 04:02:03.015626] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.105 [2024-07-14 04:02:03.018062] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:44.105 [2024-07-14 04:02:03.027131] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.105 [2024-07-14 04:02:03.027510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.105 [2024-07-14 04:02:03.027725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.105 [2024-07-14 04:02:03.027752] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:44.105 [2024-07-14 04:02:03.027767] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:44.105 [2024-07-14 04:02:03.027977] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:44.105 [2024-07-14 04:02:03.028206] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.106 [2024-07-14 04:02:03.028231] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.106 [2024-07-14 04:02:03.028247] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.106 [2024-07-14 04:02:03.030485] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:44.106 [2024-07-14 04:02:03.039664] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.106 [2024-07-14 04:02:03.040099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.106 [2024-07-14 04:02:03.040367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.106 [2024-07-14 04:02:03.040415] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:44.106 [2024-07-14 04:02:03.040434] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:44.106 [2024-07-14 04:02:03.040583] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:44.106 [2024-07-14 04:02:03.040753] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.106 [2024-07-14 04:02:03.040778] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.106 [2024-07-14 04:02:03.040794] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.367 [2024-07-14 04:02:03.043167] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:44.367 [2024-07-14 04:02:03.052195] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.367 [2024-07-14 04:02:03.052577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.367 [2024-07-14 04:02:03.052841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.367 [2024-07-14 04:02:03.052879] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:44.367 [2024-07-14 04:02:03.052901] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:44.367 [2024-07-14 04:02:03.053067] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:44.367 [2024-07-14 04:02:03.053273] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.367 [2024-07-14 04:02:03.053298] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.367 [2024-07-14 04:02:03.053313] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.367 [2024-07-14 04:02:03.055627] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:44.367 [2024-07-14 04:02:03.064627] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.367 [2024-07-14 04:02:03.064988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.367 [2024-07-14 04:02:03.065220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.367 [2024-07-14 04:02:03.065249] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:44.367 [2024-07-14 04:02:03.065268] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:44.367 [2024-07-14 04:02:03.065416] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:44.367 [2024-07-14 04:02:03.065603] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.367 [2024-07-14 04:02:03.065627] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.367 [2024-07-14 04:02:03.065643] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.367 [2024-07-14 04:02:03.068075] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:44.367 [2024-07-14 04:02:03.077393] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.367 [2024-07-14 04:02:03.077953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.367 [2024-07-14 04:02:03.078167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.367 [2024-07-14 04:02:03.078196] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:44.367 [2024-07-14 04:02:03.078220] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:44.367 [2024-07-14 04:02:03.078368] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:44.367 [2024-07-14 04:02:03.078519] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.367 [2024-07-14 04:02:03.078543] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.367 [2024-07-14 04:02:03.078560] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.367 [2024-07-14 04:02:03.080811] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:44.367 [2024-07-14 04:02:03.089923] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.367 [2024-07-14 04:02:03.090326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.367 [2024-07-14 04:02:03.090541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.367 [2024-07-14 04:02:03.090565] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:44.367 [2024-07-14 04:02:03.090582] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:44.367 [2024-07-14 04:02:03.090725] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:44.367 [2024-07-14 04:02:03.090925] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.367 [2024-07-14 04:02:03.090950] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.367 [2024-07-14 04:02:03.090967] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.367 [2024-07-14 04:02:03.093395] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:44.367 [2024-07-14 04:02:03.102232] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.367 [2024-07-14 04:02:03.102719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.367 [2024-07-14 04:02:03.102976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.367 [2024-07-14 04:02:03.103006] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:44.367 [2024-07-14 04:02:03.103025] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:44.367 [2024-07-14 04:02:03.103192] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:44.367 [2024-07-14 04:02:03.103362] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.367 [2024-07-14 04:02:03.103387] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.367 [2024-07-14 04:02:03.103403] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.367 [2024-07-14 04:02:03.105881] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:44.367 [2024-07-14 04:02:03.114681] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.367 [2024-07-14 04:02:03.115102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.367 [2024-07-14 04:02:03.115399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.367 [2024-07-14 04:02:03.115428] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:44.367 [2024-07-14 04:02:03.115453] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:44.367 [2024-07-14 04:02:03.115601] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:44.367 [2024-07-14 04:02:03.115789] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.367 [2024-07-14 04:02:03.115814] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.367 [2024-07-14 04:02:03.115831] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.367 [2024-07-14 04:02:03.118050] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:44.367 [2024-07-14 04:02:03.127198] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.367 [2024-07-14 04:02:03.127760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.367 [2024-07-14 04:02:03.128000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.367 [2024-07-14 04:02:03.128030] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:44.367 [2024-07-14 04:02:03.128049] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:44.367 [2024-07-14 04:02:03.128178] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:44.367 [2024-07-14 04:02:03.128403] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.367 [2024-07-14 04:02:03.128428] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.367 [2024-07-14 04:02:03.128443] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.367 [2024-07-14 04:02:03.130795] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:44.367 [2024-07-14 04:02:03.139806] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.367 [2024-07-14 04:02:03.140235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.367 [2024-07-14 04:02:03.140523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.367 [2024-07-14 04:02:03.140552] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:44.367 [2024-07-14 04:02:03.140571] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:44.367 [2024-07-14 04:02:03.140701] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:44.367 [2024-07-14 04:02:03.140852] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.367 [2024-07-14 04:02:03.140889] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.367 [2024-07-14 04:02:03.140907] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.367 [2024-07-14 04:02:03.143259] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:44.367 [2024-07-14 04:02:03.152243] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.367 [2024-07-14 04:02:03.152622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.367 [2024-07-14 04:02:03.152796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.367 [2024-07-14 04:02:03.152828] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:44.367 [2024-07-14 04:02:03.152847] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:44.367 [2024-07-14 04:02:03.153064] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:44.367 [2024-07-14 04:02:03.153253] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.368 [2024-07-14 04:02:03.153278] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.368 [2024-07-14 04:02:03.153294] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.368 [2024-07-14 04:02:03.155752] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:44.368 [2024-07-14 04:02:03.164732] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.368 [2024-07-14 04:02:03.165095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.368 [2024-07-14 04:02:03.165368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.368 [2024-07-14 04:02:03.165397] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:44.368 [2024-07-14 04:02:03.165415] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:44.368 [2024-07-14 04:02:03.165599] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:44.368 [2024-07-14 04:02:03.165732] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.368 [2024-07-14 04:02:03.165757] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.368 [2024-07-14 04:02:03.165773] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.368 [2024-07-14 04:02:03.167952] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:44.368 [2024-07-14 04:02:03.177222] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.368 [2024-07-14 04:02:03.177783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.368 [2024-07-14 04:02:03.178041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.368 [2024-07-14 04:02:03.178070] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:44.368 [2024-07-14 04:02:03.178089] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:44.368 [2024-07-14 04:02:03.178273] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:44.368 [2024-07-14 04:02:03.178442] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.368 [2024-07-14 04:02:03.178468] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.368 [2024-07-14 04:02:03.178484] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.368 [2024-07-14 04:02:03.180835] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:44.368 [2024-07-14 04:02:03.189898] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.368 [2024-07-14 04:02:03.190331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.368 [2024-07-14 04:02:03.190561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.368 [2024-07-14 04:02:03.190590] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:44.368 [2024-07-14 04:02:03.190609] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:44.368 [2024-07-14 04:02:03.190774] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:44.368 [2024-07-14 04:02:03.190981] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.368 [2024-07-14 04:02:03.191007] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.368 [2024-07-14 04:02:03.191023] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.368 [2024-07-14 04:02:03.193444] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:44.368 [2024-07-14 04:02:03.202454] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.368 [2024-07-14 04:02:03.202821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.368 [2024-07-14 04:02:03.203005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.368 [2024-07-14 04:02:03.203035] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:44.368 [2024-07-14 04:02:03.203054] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:44.368 [2024-07-14 04:02:03.203274] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:44.368 [2024-07-14 04:02:03.203462] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.368 [2024-07-14 04:02:03.203487] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.368 [2024-07-14 04:02:03.203502] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.368 [2024-07-14 04:02:03.206062] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:44.368 [2024-07-14 04:02:03.215129] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.368 [2024-07-14 04:02:03.215708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.368 [2024-07-14 04:02:03.215994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.368 [2024-07-14 04:02:03.216022] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:44.368 [2024-07-14 04:02:03.216039] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:44.368 [2024-07-14 04:02:03.216199] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:44.368 [2024-07-14 04:02:03.216318] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.368 [2024-07-14 04:02:03.216343] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.368 [2024-07-14 04:02:03.216359] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.368 [2024-07-14 04:02:03.218487] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:44.368 [2024-07-14 04:02:03.227770] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.368 [2024-07-14 04:02:03.228166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.368 [2024-07-14 04:02:03.228371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.368 [2024-07-14 04:02:03.228401] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:44.368 [2024-07-14 04:02:03.228420] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:44.368 [2024-07-14 04:02:03.228586] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:44.368 [2024-07-14 04:02:03.228737] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.368 [2024-07-14 04:02:03.228768] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.368 [2024-07-14 04:02:03.228785] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.368 [2024-07-14 04:02:03.230995] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:44.368 [2024-07-14 04:02:03.240394] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.368 [2024-07-14 04:02:03.240773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.368 [2024-07-14 04:02:03.240983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.368 [2024-07-14 04:02:03.241014] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:44.368 [2024-07-14 04:02:03.241032] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:44.368 [2024-07-14 04:02:03.241234] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:44.368 [2024-07-14 04:02:03.241405] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.368 [2024-07-14 04:02:03.241431] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.368 [2024-07-14 04:02:03.241446] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.368 [2024-07-14 04:02:03.243813] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:44.368 [2024-07-14 04:02:03.252945] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.368 [2024-07-14 04:02:03.253526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.368 [2024-07-14 04:02:03.253789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.368 [2024-07-14 04:02:03.253815] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:44.368 [2024-07-14 04:02:03.253832] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:44.368 [2024-07-14 04:02:03.254039] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:44.368 [2024-07-14 04:02:03.254155] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.368 [2024-07-14 04:02:03.254181] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.368 [2024-07-14 04:02:03.254197] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.368 [2024-07-14 04:02:03.256614] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:44.368 [2024-07-14 04:02:03.265535] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.368 [2024-07-14 04:02:03.265926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.368 [2024-07-14 04:02:03.266113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.368 [2024-07-14 04:02:03.266139] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:44.368 [2024-07-14 04:02:03.266155] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:44.368 [2024-07-14 04:02:03.266303] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:44.368 [2024-07-14 04:02:03.266487] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.368 [2024-07-14 04:02:03.266512] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.368 [2024-07-14 04:02:03.266534] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.368 [2024-07-14 04:02:03.269017] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:44.368 [2024-07-14 04:02:03.278284] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.368 [2024-07-14 04:02:03.278853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.368 [2024-07-14 04:02:03.279175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.368 [2024-07-14 04:02:03.279216] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:44.368 [2024-07-14 04:02:03.279232] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:44.368 [2024-07-14 04:02:03.279428] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:44.368 [2024-07-14 04:02:03.279634] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.368 [2024-07-14 04:02:03.279659] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.368 [2024-07-14 04:02:03.279676] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.368 [2024-07-14 04:02:03.282039] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:44.368 [2024-07-14 04:02:03.290905] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.368 [2024-07-14 04:02:03.291245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.368 [2024-07-14 04:02:03.291438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.368 [2024-07-14 04:02:03.291466] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:44.368 [2024-07-14 04:02:03.291483] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:44.368 [2024-07-14 04:02:03.291661] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:44.368 [2024-07-14 04:02:03.291784] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.368 [2024-07-14 04:02:03.291803] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.368 [2024-07-14 04:02:03.291817] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.368 [2024-07-14 04:02:03.294262] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:44.368 [2024-07-14 04:02:03.303413] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.368 [2024-07-14 04:02:03.303829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.368 [2024-07-14 04:02:03.304071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.369 [2024-07-14 04:02:03.304109] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:44.369 [2024-07-14 04:02:03.304143] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:44.369 [2024-07-14 04:02:03.304372] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:44.369 [2024-07-14 04:02:03.304527] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.369 [2024-07-14 04:02:03.304552] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.369 [2024-07-14 04:02:03.304569] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.628 [2024-07-14 04:02:03.307151] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:44.628 [2024-07-14 04:02:03.315974] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.628 [2024-07-14 04:02:03.316391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.628 [2024-07-14 04:02:03.316614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.628 [2024-07-14 04:02:03.316641] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:44.628 [2024-07-14 04:02:03.316674] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:44.628 [2024-07-14 04:02:03.316804] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:44.628 [2024-07-14 04:02:03.317003] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.628 [2024-07-14 04:02:03.317029] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.628 [2024-07-14 04:02:03.317046] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.628 [2024-07-14 04:02:03.319229] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:44.628 [2024-07-14 04:02:03.328511] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.628 [2024-07-14 04:02:03.328917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.628 [2024-07-14 04:02:03.329096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.628 [2024-07-14 04:02:03.329125] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:44.628 [2024-07-14 04:02:03.329144] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:44.628 [2024-07-14 04:02:03.329328] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:44.628 [2024-07-14 04:02:03.329534] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.628 [2024-07-14 04:02:03.329559] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.628 [2024-07-14 04:02:03.329575] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.628 [2024-07-14 04:02:03.331951] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:44.628 [2024-07-14 04:02:03.341113] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.628 [2024-07-14 04:02:03.341521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.628 [2024-07-14 04:02:03.341925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.628 [2024-07-14 04:02:03.341955] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:44.628 [2024-07-14 04:02:03.341974] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:44.628 [2024-07-14 04:02:03.342175] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:44.628 [2024-07-14 04:02:03.342327] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.628 [2024-07-14 04:02:03.342352] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.628 [2024-07-14 04:02:03.342369] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.628 [2024-07-14 04:02:03.344713] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:44.628 [2024-07-14 04:02:03.353762] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.628 [2024-07-14 04:02:03.354132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.628 [2024-07-14 04:02:03.354525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.628 [2024-07-14 04:02:03.354585] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:44.628 [2024-07-14 04:02:03.354603] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:44.628 [2024-07-14 04:02:03.354787] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:44.628 [2024-07-14 04:02:03.354985] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.628 [2024-07-14 04:02:03.355010] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.628 [2024-07-14 04:02:03.355027] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.628 [2024-07-14 04:02:03.357448] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:44.628 [2024-07-14 04:02:03.366160] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.628 [2024-07-14 04:02:03.366558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.628 [2024-07-14 04:02:03.366935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.628 [2024-07-14 04:02:03.366965] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:44.628 [2024-07-14 04:02:03.366984] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:44.629 [2024-07-14 04:02:03.367167] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:44.629 [2024-07-14 04:02:03.367302] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.629 [2024-07-14 04:02:03.367327] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.629 [2024-07-14 04:02:03.367344] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.629 [2024-07-14 04:02:03.369475] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:44.629 [2024-07-14 04:02:03.378736] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.629 [2024-07-14 04:02:03.379188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.629 [2024-07-14 04:02:03.379406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.629 [2024-07-14 04:02:03.379431] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:44.629 [2024-07-14 04:02:03.379447] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:44.629 [2024-07-14 04:02:03.379657] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:44.629 [2024-07-14 04:02:03.379823] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.629 [2024-07-14 04:02:03.379847] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.629 [2024-07-14 04:02:03.379864] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.629 [2024-07-14 04:02:03.382121] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:44.629 [2024-07-14 04:02:03.391521] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.629 [2024-07-14 04:02:03.391927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.629 [2024-07-14 04:02:03.392151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.629 [2024-07-14 04:02:03.392181] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:44.629 [2024-07-14 04:02:03.392200] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:44.629 [2024-07-14 04:02:03.392420] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:44.629 [2024-07-14 04:02:03.392591] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.629 [2024-07-14 04:02:03.392616] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.629 [2024-07-14 04:02:03.392633] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.629 [2024-07-14 04:02:03.394835] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:44.629 [2024-07-14 04:02:03.404039] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.629 [2024-07-14 04:02:03.404406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.629 [2024-07-14 04:02:03.404792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.629 [2024-07-14 04:02:03.404847] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:44.629 [2024-07-14 04:02:03.404873] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:44.629 [2024-07-14 04:02:03.405077] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:44.629 [2024-07-14 04:02:03.405212] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.629 [2024-07-14 04:02:03.405235] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.629 [2024-07-14 04:02:03.405252] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.629 [2024-07-14 04:02:03.407632] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:44.629 [2024-07-14 04:02:03.416696] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.629 [2024-07-14 04:02:03.417105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.629 [2024-07-14 04:02:03.417341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.629 [2024-07-14 04:02:03.417367] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:44.629 [2024-07-14 04:02:03.417399] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:44.629 [2024-07-14 04:02:03.417561] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:44.629 [2024-07-14 04:02:03.417750] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.629 [2024-07-14 04:02:03.417774] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.629 [2024-07-14 04:02:03.417791] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.629 [2024-07-14 04:02:03.420235] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:44.629 [2024-07-14 04:02:03.429318] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.629 [2024-07-14 04:02:03.429686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.629 [2024-07-14 04:02:03.429880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.629 [2024-07-14 04:02:03.429915] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:44.629 [2024-07-14 04:02:03.429934] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:44.629 [2024-07-14 04:02:03.430135] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:44.629 [2024-07-14 04:02:03.430306] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.629 [2024-07-14 04:02:03.430331] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.629 [2024-07-14 04:02:03.430348] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.629 [2024-07-14 04:02:03.432604] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:44.629 [2024-07-14 04:02:03.441953] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.629 [2024-07-14 04:02:03.442526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.629 [2024-07-14 04:02:03.442940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.629 [2024-07-14 04:02:03.442970] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:44.629 [2024-07-14 04:02:03.442989] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:44.629 [2024-07-14 04:02:03.443154] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:44.629 [2024-07-14 04:02:03.443325] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.629 [2024-07-14 04:02:03.443350] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.629 [2024-07-14 04:02:03.443367] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.629 [2024-07-14 04:02:03.445801] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:44.629 [2024-07-14 04:02:03.454593] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.629 [2024-07-14 04:02:03.454965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.629 [2024-07-14 04:02:03.455174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.629 [2024-07-14 04:02:03.455206] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:44.629 [2024-07-14 04:02:03.455225] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:44.629 [2024-07-14 04:02:03.455392] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:44.629 [2024-07-14 04:02:03.455506] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.629 [2024-07-14 04:02:03.455531] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.629 [2024-07-14 04:02:03.455548] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.629 [2024-07-14 04:02:03.457769] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:44.629 [2024-07-14 04:02:03.467063] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.629 [2024-07-14 04:02:03.467434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.629 [2024-07-14 04:02:03.467649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.629 [2024-07-14 04:02:03.467679] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:44.629 [2024-07-14 04:02:03.467706] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:44.629 [2024-07-14 04:02:03.467902] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:44.629 [2024-07-14 04:02:03.468072] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.629 [2024-07-14 04:02:03.468096] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.629 [2024-07-14 04:02:03.468113] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.629 [2024-07-14 04:02:03.470404] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:44.629 [2024-07-14 04:02:03.479694] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.629 [2024-07-14 04:02:03.480105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.629 [2024-07-14 04:02:03.480424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.629 [2024-07-14 04:02:03.480478] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:44.629 [2024-07-14 04:02:03.480497] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:44.629 [2024-07-14 04:02:03.480645] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:44.629 [2024-07-14 04:02:03.480778] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.629 [2024-07-14 04:02:03.480802] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.629 [2024-07-14 04:02:03.480818] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.629 [2024-07-14 04:02:03.483290] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:44.629 [2024-07-14 04:02:03.492334] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.629 [2024-07-14 04:02:03.492891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.629 [2024-07-14 04:02:03.493126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.629 [2024-07-14 04:02:03.493167] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:44.629 [2024-07-14 04:02:03.493183] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:44.629 [2024-07-14 04:02:03.493365] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:44.629 [2024-07-14 04:02:03.493535] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.629 [2024-07-14 04:02:03.493561] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.629 [2024-07-14 04:02:03.493577] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.629 [2024-07-14 04:02:03.495775] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:44.629 [2024-07-14 04:02:03.504807] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.629 [2024-07-14 04:02:03.505205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.629 [2024-07-14 04:02:03.505613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.629 [2024-07-14 04:02:03.505672] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:44.629 [2024-07-14 04:02:03.505690] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:44.629 [2024-07-14 04:02:03.505844] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:44.629 [2024-07-14 04:02:03.506060] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.629 [2024-07-14 04:02:03.506086] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.629 [2024-07-14 04:02:03.506103] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.629 [2024-07-14 04:02:03.508295] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:44.629 [2024-07-14 04:02:03.517629] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.629 [2024-07-14 04:02:03.518057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.629 [2024-07-14 04:02:03.518451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.629 [2024-07-14 04:02:03.518510] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:44.629 [2024-07-14 04:02:03.518528] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:44.629 [2024-07-14 04:02:03.518658] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:44.629 [2024-07-14 04:02:03.518862] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.629 [2024-07-14 04:02:03.518896] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.629 [2024-07-14 04:02:03.518912] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.629 [2024-07-14 04:02:03.521155] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:44.629 [2024-07-14 04:02:03.530302] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.629 [2024-07-14 04:02:03.530745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.629 [2024-07-14 04:02:03.530968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.629 [2024-07-14 04:02:03.530997] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:44.629 [2024-07-14 04:02:03.531029] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:44.629 [2024-07-14 04:02:03.531189] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:44.629 [2024-07-14 04:02:03.531340] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.629 [2024-07-14 04:02:03.531366] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.629 [2024-07-14 04:02:03.531382] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.629 [2024-07-14 04:02:03.533825] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:44.629 [2024-07-14 04:02:03.543033] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.629 [2024-07-14 04:02:03.543402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.629 [2024-07-14 04:02:03.543726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.629 [2024-07-14 04:02:03.543798] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:44.629 [2024-07-14 04:02:03.543816] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:44.629 [2024-07-14 04:02:03.543997] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:44.629 [2024-07-14 04:02:03.544209] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.629 [2024-07-14 04:02:03.544235] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.629 [2024-07-14 04:02:03.544251] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.629 [2024-07-14 04:02:03.546475] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:44.629 [2024-07-14 04:02:03.555716] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.629 [2024-07-14 04:02:03.556135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.629 [2024-07-14 04:02:03.556388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.629 [2024-07-14 04:02:03.556431] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:44.629 [2024-07-14 04:02:03.556448] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:44.629 [2024-07-14 04:02:03.556629] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:44.629 [2024-07-14 04:02:03.556798] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.629 [2024-07-14 04:02:03.556824] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.629 [2024-07-14 04:02:03.556841] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.629 [2024-07-14 04:02:03.559108] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:44.891 [2024-07-14 04:02:03.568466] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.891 [2024-07-14 04:02:03.568881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.891 [2024-07-14 04:02:03.569100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.891 [2024-07-14 04:02:03.569128] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:44.891 [2024-07-14 04:02:03.569162] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:44.891 [2024-07-14 04:02:03.569347] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:44.891 [2024-07-14 04:02:03.569498] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.891 [2024-07-14 04:02:03.569523] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.891 [2024-07-14 04:02:03.569540] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.891 [2024-07-14 04:02:03.571971] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:44.891 [2024-07-14 04:02:03.581316] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.891 [2024-07-14 04:02:03.581722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.891 [2024-07-14 04:02:03.581930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.891 [2024-07-14 04:02:03.581961] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:44.891 [2024-07-14 04:02:03.581979] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:44.891 [2024-07-14 04:02:03.582108] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:44.891 [2024-07-14 04:02:03.582295] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.891 [2024-07-14 04:02:03.582327] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.891 [2024-07-14 04:02:03.582344] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.891 [2024-07-14 04:02:03.584696] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:44.891 [2024-07-14 04:02:03.594025] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.891 [2024-07-14 04:02:03.594395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.891 [2024-07-14 04:02:03.594755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.891 [2024-07-14 04:02:03.594810] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:44.891 [2024-07-14 04:02:03.594828] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:44.891 [2024-07-14 04:02:03.594936] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:44.891 [2024-07-14 04:02:03.595123] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.891 [2024-07-14 04:02:03.595148] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.891 [2024-07-14 04:02:03.595163] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.891 [2024-07-14 04:02:03.597800] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:44.891 [2024-07-14 04:02:03.606471] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.891 [2024-07-14 04:02:03.606805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.891 [2024-07-14 04:02:03.607033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.891 [2024-07-14 04:02:03.607065] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:44.891 [2024-07-14 04:02:03.607083] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:44.891 [2024-07-14 04:02:03.607250] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:44.891 [2024-07-14 04:02:03.607401] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.891 [2024-07-14 04:02:03.607426] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.891 [2024-07-14 04:02:03.607443] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.891 [2024-07-14 04:02:03.609893] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:44.891 [2024-07-14 04:02:03.619000] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.891 [2024-07-14 04:02:03.619456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.891 [2024-07-14 04:02:03.619694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.891 [2024-07-14 04:02:03.619719] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:44.891 [2024-07-14 04:02:03.619736] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:44.891 [2024-07-14 04:02:03.619927] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:44.891 [2024-07-14 04:02:03.620129] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.891 [2024-07-14 04:02:03.620153] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.891 [2024-07-14 04:02:03.620174] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.891 [2024-07-14 04:02:03.622635] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:44.891 [2024-07-14 04:02:03.631349] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.891 [2024-07-14 04:02:03.631773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.891 [2024-07-14 04:02:03.631989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.891 [2024-07-14 04:02:03.632020] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:44.891 [2024-07-14 04:02:03.632038] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:44.891 [2024-07-14 04:02:03.632207] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:44.891 [2024-07-14 04:02:03.632376] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.891 [2024-07-14 04:02:03.632400] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.891 [2024-07-14 04:02:03.632416] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.891 [2024-07-14 04:02:03.634763] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:44.891 [2024-07-14 04:02:03.643793] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.891 [2024-07-14 04:02:03.644178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.891 [2024-07-14 04:02:03.644418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.891 [2024-07-14 04:02:03.644443] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:44.891 [2024-07-14 04:02:03.644458] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:44.891 [2024-07-14 04:02:03.644645] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:44.891 [2024-07-14 04:02:03.644804] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.892 [2024-07-14 04:02:03.644828] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.892 [2024-07-14 04:02:03.644844] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.892 [2024-07-14 04:02:03.647287] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:44.892 [2024-07-14 04:02:03.656547] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.892 [2024-07-14 04:02:03.656923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.892 [2024-07-14 04:02:03.657106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.892 [2024-07-14 04:02:03.657134] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:44.892 [2024-07-14 04:02:03.657152] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:44.892 [2024-07-14 04:02:03.657317] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:44.892 [2024-07-14 04:02:03.657504] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.892 [2024-07-14 04:02:03.657527] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.892 [2024-07-14 04:02:03.657543] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.892 [2024-07-14 04:02:03.659842] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:44.892 [2024-07-14 04:02:03.669143] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.892 [2024-07-14 04:02:03.669505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.892 [2024-07-14 04:02:03.669686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.892 [2024-07-14 04:02:03.669726] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:44.892 [2024-07-14 04:02:03.669741] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:44.892 [2024-07-14 04:02:03.670002] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:44.892 [2024-07-14 04:02:03.670209] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.892 [2024-07-14 04:02:03.670233] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.892 [2024-07-14 04:02:03.670248] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.892 [2024-07-14 04:02:03.672591] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:44.892 [2024-07-14 04:02:03.681845] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.892 [2024-07-14 04:02:03.682291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.892 [2024-07-14 04:02:03.682520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.892 [2024-07-14 04:02:03.682566] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:44.892 [2024-07-14 04:02:03.682584] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:44.892 [2024-07-14 04:02:03.682730] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:44.892 [2024-07-14 04:02:03.682894] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.892 [2024-07-14 04:02:03.682919] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.892 [2024-07-14 04:02:03.682934] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.892 [2024-07-14 04:02:03.685335] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:44.892 [2024-07-14 04:02:03.694479] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.892 [2024-07-14 04:02:03.694823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.892 [2024-07-14 04:02:03.695037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.892 [2024-07-14 04:02:03.695067] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:44.892 [2024-07-14 04:02:03.695085] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:44.892 [2024-07-14 04:02:03.695251] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:44.892 [2024-07-14 04:02:03.695457] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.892 [2024-07-14 04:02:03.695480] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.892 [2024-07-14 04:02:03.695496] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.892 [2024-07-14 04:02:03.697886] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:44.892 [2024-07-14 04:02:03.707032] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.892 [2024-07-14 04:02:03.707418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.892 [2024-07-14 04:02:03.707634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.892 [2024-07-14 04:02:03.707659] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:44.892 [2024-07-14 04:02:03.707675] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:44.892 [2024-07-14 04:02:03.707855] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:44.892 [2024-07-14 04:02:03.708038] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.892 [2024-07-14 04:02:03.708062] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.892 [2024-07-14 04:02:03.708077] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.892 [2024-07-14 04:02:03.710283] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:44.892 [2024-07-14 04:02:03.719758] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.892 [2024-07-14 04:02:03.720118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.892 [2024-07-14 04:02:03.720344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.892 [2024-07-14 04:02:03.720372] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:44.892 [2024-07-14 04:02:03.720389] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:44.892 [2024-07-14 04:02:03.720535] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:44.892 [2024-07-14 04:02:03.720686] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.892 [2024-07-14 04:02:03.720709] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.892 [2024-07-14 04:02:03.720724] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.892 [2024-07-14 04:02:03.723006] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:44.892 [2024-07-14 04:02:03.732471] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.892 [2024-07-14 04:02:03.732859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.892 [2024-07-14 04:02:03.733066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.892 [2024-07-14 04:02:03.733094] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:44.892 [2024-07-14 04:02:03.733112] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:44.892 [2024-07-14 04:02:03.733277] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:44.892 [2024-07-14 04:02:03.733446] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.892 [2024-07-14 04:02:03.733469] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.892 [2024-07-14 04:02:03.733484] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.892 [2024-07-14 04:02:03.735739] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:44.892 [2024-07-14 04:02:03.745050] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.892 [2024-07-14 04:02:03.745385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.892 [2024-07-14 04:02:03.745630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.892 [2024-07-14 04:02:03.745655] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:44.892 [2024-07-14 04:02:03.745671] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:44.892 [2024-07-14 04:02:03.745826] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:44.892 [2024-07-14 04:02:03.746020] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.892 [2024-07-14 04:02:03.746045] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.892 [2024-07-14 04:02:03.746060] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.892 [2024-07-14 04:02:03.748169] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:44.892 [2024-07-14 04:02:03.757709] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.892 [2024-07-14 04:02:03.758133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.892 [2024-07-14 04:02:03.758427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.892 [2024-07-14 04:02:03.758474] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:44.892 [2024-07-14 04:02:03.758493] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:44.892 [2024-07-14 04:02:03.758639] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:44.892 [2024-07-14 04:02:03.758772] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.892 [2024-07-14 04:02:03.758795] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.892 [2024-07-14 04:02:03.758810] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.892 [2024-07-14 04:02:03.761166] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:44.892 [2024-07-14 04:02:03.770384] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.892 [2024-07-14 04:02:03.770821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.892 [2024-07-14 04:02:03.771006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.892 [2024-07-14 04:02:03.771035] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:44.892 [2024-07-14 04:02:03.771053] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:44.892 [2024-07-14 04:02:03.771217] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:44.892 [2024-07-14 04:02:03.771367] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.892 [2024-07-14 04:02:03.771390] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.892 [2024-07-14 04:02:03.771405] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.892 [2024-07-14 04:02:03.773804] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:44.892 [2024-07-14 04:02:03.783043] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.892 [2024-07-14 04:02:03.783590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.892 [2024-07-14 04:02:03.783924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.892 [2024-07-14 04:02:03.783956] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:44.892 [2024-07-14 04:02:03.783973] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:44.892 [2024-07-14 04:02:03.784173] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:44.892 [2024-07-14 04:02:03.784324] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.892 [2024-07-14 04:02:03.784347] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.892 [2024-07-14 04:02:03.784362] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.892 [2024-07-14 04:02:03.786529] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:44.892 [2024-07-14 04:02:03.795358] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.892 [2024-07-14 04:02:03.795914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.892 [2024-07-14 04:02:03.796142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.892 [2024-07-14 04:02:03.796167] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:44.892 [2024-07-14 04:02:03.796183] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:44.892 [2024-07-14 04:02:03.796348] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:44.892 [2024-07-14 04:02:03.796535] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.892 [2024-07-14 04:02:03.796559] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.892 [2024-07-14 04:02:03.796574] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.892 [2024-07-14 04:02:03.798781] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:44.892 [2024-07-14 04:02:03.807621] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.892 [2024-07-14 04:02:03.807978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.892 [2024-07-14 04:02:03.808203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.892 [2024-07-14 04:02:03.808253] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:44.892 [2024-07-14 04:02:03.808271] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:44.892 [2024-07-14 04:02:03.808400] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:44.892 [2024-07-14 04:02:03.808568] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.892 [2024-07-14 04:02:03.808592] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.892 [2024-07-14 04:02:03.808608] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.892 [2024-07-14 04:02:03.810874] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:44.892 [2024-07-14 04:02:03.820222] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.892 [2024-07-14 04:02:03.820690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.892 [2024-07-14 04:02:03.820952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.892 [2024-07-14 04:02:03.820982] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:44.892 [2024-07-14 04:02:03.821005] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:44.892 [2024-07-14 04:02:03.821171] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:44.892 [2024-07-14 04:02:03.821358] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.892 [2024-07-14 04:02:03.821381] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.892 [2024-07-14 04:02:03.821396] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.892 [2024-07-14 04:02:03.823931] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:45.152 [2024-07-14 04:02:03.832891] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.152 [2024-07-14 04:02:03.833366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.152 [2024-07-14 04:02:03.833701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.152 [2024-07-14 04:02:03.833763] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:45.152 [2024-07-14 04:02:03.833796] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:45.152 [2024-07-14 04:02:03.834047] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:45.152 [2024-07-14 04:02:03.834219] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.152 [2024-07-14 04:02:03.834243] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.152 [2024-07-14 04:02:03.834259] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.152 [2024-07-14 04:02:03.836594] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:45.152 [2024-07-14 04:02:03.845189] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.152 [2024-07-14 04:02:03.845539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.152 [2024-07-14 04:02:03.845743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.152 [2024-07-14 04:02:03.845771] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:45.152 [2024-07-14 04:02:03.845789] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:45.152 [2024-07-14 04:02:03.845942] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:45.152 [2024-07-14 04:02:03.846137] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.152 [2024-07-14 04:02:03.846174] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.152 [2024-07-14 04:02:03.846189] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.152 [2024-07-14 04:02:03.848571] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:45.152 [2024-07-14 04:02:03.857692] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.152 [2024-07-14 04:02:03.858122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.152 [2024-07-14 04:02:03.858386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.152 [2024-07-14 04:02:03.858414] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:45.152 [2024-07-14 04:02:03.858432] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:45.152 [2024-07-14 04:02:03.858584] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:45.152 [2024-07-14 04:02:03.858789] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.152 [2024-07-14 04:02:03.858813] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.152 [2024-07-14 04:02:03.858829] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.152 [2024-07-14 04:02:03.861059] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:45.152 [2024-07-14 04:02:03.870296] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.152 [2024-07-14 04:02:03.870643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.152 [2024-07-14 04:02:03.870825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.152 [2024-07-14 04:02:03.870856] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:45.152 [2024-07-14 04:02:03.870885] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:45.152 [2024-07-14 04:02:03.871034] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:45.152 [2024-07-14 04:02:03.871222] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.152 [2024-07-14 04:02:03.871245] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.152 [2024-07-14 04:02:03.871260] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.152 [2024-07-14 04:02:03.873572] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:45.152 [2024-07-14 04:02:03.882953] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.152 [2024-07-14 04:02:03.883338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.152 [2024-07-14 04:02:03.883581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.152 [2024-07-14 04:02:03.883636] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:45.152 [2024-07-14 04:02:03.883653] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:45.152 [2024-07-14 04:02:03.883836] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:45.152 [2024-07-14 04:02:03.884015] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.152 [2024-07-14 04:02:03.884040] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.152 [2024-07-14 04:02:03.884056] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.152 [2024-07-14 04:02:03.886130] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:45.152 [2024-07-14 04:02:03.895294] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.152 [2024-07-14 04:02:03.895700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.152 [2024-07-14 04:02:03.895902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.152 [2024-07-14 04:02:03.895931] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:45.152 [2024-07-14 04:02:03.895949] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:45.152 [2024-07-14 04:02:03.896096] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:45.152 [2024-07-14 04:02:03.896252] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.152 [2024-07-14 04:02:03.896276] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.152 [2024-07-14 04:02:03.896291] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.152 [2024-07-14 04:02:03.898527] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:45.152 [2024-07-14 04:02:03.907994] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.152 [2024-07-14 04:02:03.908377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.152 [2024-07-14 04:02:03.908659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.152 [2024-07-14 04:02:03.908705] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:45.152 [2024-07-14 04:02:03.908722] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:45.152 [2024-07-14 04:02:03.908851] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:45.152 [2024-07-14 04:02:03.909054] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.152 [2024-07-14 04:02:03.909078] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.152 [2024-07-14 04:02:03.909094] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.152 [2024-07-14 04:02:03.911347] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:45.152 [2024-07-14 04:02:03.920875] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.152 [2024-07-14 04:02:03.921202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.152 [2024-07-14 04:02:03.921397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.152 [2024-07-14 04:02:03.921423] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:45.152 [2024-07-14 04:02:03.921438] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:45.152 [2024-07-14 04:02:03.921671] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:45.152 [2024-07-14 04:02:03.921821] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.152 [2024-07-14 04:02:03.921845] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.152 [2024-07-14 04:02:03.921860] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.152 [2024-07-14 04:02:03.924179] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:45.152 [2024-07-14 04:02:03.933232] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.152 [2024-07-14 04:02:03.933586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.152 [2024-07-14 04:02:03.933963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.152 [2024-07-14 04:02:03.933993] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:45.152 [2024-07-14 04:02:03.934010] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:45.152 [2024-07-14 04:02:03.934193] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:45.152 [2024-07-14 04:02:03.934344] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.152 [2024-07-14 04:02:03.934373] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.152 [2024-07-14 04:02:03.934389] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.152 [2024-07-14 04:02:03.936572] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:45.152 [2024-07-14 04:02:03.945898] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.152 [2024-07-14 04:02:03.946318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.152 [2024-07-14 04:02:03.946605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.152 [2024-07-14 04:02:03.946651] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:45.152 [2024-07-14 04:02:03.946669] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:45.152 [2024-07-14 04:02:03.946815] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:45.152 [2024-07-14 04:02:03.947030] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.152 [2024-07-14 04:02:03.947054] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.152 [2024-07-14 04:02:03.947070] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.152 [2024-07-14 04:02:03.949272] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:45.153 [2024-07-14 04:02:03.958355] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.153 [2024-07-14 04:02:03.958719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.153 [2024-07-14 04:02:03.958923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.153 [2024-07-14 04:02:03.958953] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:45.153 [2024-07-14 04:02:03.958971] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:45.153 [2024-07-14 04:02:03.959135] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:45.153 [2024-07-14 04:02:03.959231] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.153 [2024-07-14 04:02:03.959253] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.153 [2024-07-14 04:02:03.959269] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.153 [2024-07-14 04:02:03.961413] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:45.153 [2024-07-14 04:02:03.970860] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.153 [2024-07-14 04:02:03.971251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.153 [2024-07-14 04:02:03.971597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.153 [2024-07-14 04:02:03.971652] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:45.153 [2024-07-14 04:02:03.971670] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:45.153 [2024-07-14 04:02:03.971852] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:45.153 [2024-07-14 04:02:03.972068] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.153 [2024-07-14 04:02:03.972092] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.153 [2024-07-14 04:02:03.972113] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.153 [2024-07-14 04:02:03.974298] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:45.153 [2024-07-14 04:02:03.983755] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.153 [2024-07-14 04:02:03.984131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.153 [2024-07-14 04:02:03.984471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.153 [2024-07-14 04:02:03.984521] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:45.153 [2024-07-14 04:02:03.984539] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:45.153 [2024-07-14 04:02:03.984703] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:45.153 [2024-07-14 04:02:03.984853] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.153 [2024-07-14 04:02:03.984888] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.153 [2024-07-14 04:02:03.984905] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.153 [2024-07-14 04:02:03.987272] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:45.153 [2024-07-14 04:02:03.996270] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.153 [2024-07-14 04:02:03.996610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.153 [2024-07-14 04:02:03.996839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.153 [2024-07-14 04:02:03.996872] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:45.153 [2024-07-14 04:02:03.996890] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:45.153 [2024-07-14 04:02:03.997044] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:45.153 [2024-07-14 04:02:03.997212] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.153 [2024-07-14 04:02:03.997236] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.153 [2024-07-14 04:02:03.997251] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.153 [2024-07-14 04:02:03.999686] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:45.153 [2024-07-14 04:02:04.008944] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.153 [2024-07-14 04:02:04.009293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.153 [2024-07-14 04:02:04.009690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.153 [2024-07-14 04:02:04.009748] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:45.153 [2024-07-14 04:02:04.009766] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:45.153 [2024-07-14 04:02:04.009942] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:45.153 [2024-07-14 04:02:04.010110] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.153 [2024-07-14 04:02:04.010134] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.153 [2024-07-14 04:02:04.010149] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.153 [2024-07-14 04:02:04.012681] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:45.153 [2024-07-14 04:02:04.021376] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.153 [2024-07-14 04:02:04.021916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.153 [2024-07-14 04:02:04.022154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.153 [2024-07-14 04:02:04.022183] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:45.153 [2024-07-14 04:02:04.022201] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:45.153 [2024-07-14 04:02:04.022383] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:45.153 [2024-07-14 04:02:04.022552] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.153 [2024-07-14 04:02:04.022575] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.153 [2024-07-14 04:02:04.022590] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.153 [2024-07-14 04:02:04.024737] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:45.153 [2024-07-14 04:02:04.033987] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.153 [2024-07-14 04:02:04.034327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.153 [2024-07-14 04:02:04.034568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.153 [2024-07-14 04:02:04.034614] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:45.153 [2024-07-14 04:02:04.034632] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:45.153 [2024-07-14 04:02:04.034797] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:45.153 [2024-07-14 04:02:04.034995] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.153 [2024-07-14 04:02:04.035019] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.153 [2024-07-14 04:02:04.035035] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.153 [2024-07-14 04:02:04.037275] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:45.153 [2024-07-14 04:02:04.046441] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.153 [2024-07-14 04:02:04.046843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.153 [2024-07-14 04:02:04.047054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.153 [2024-07-14 04:02:04.047083] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:45.153 [2024-07-14 04:02:04.047101] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:45.153 [2024-07-14 04:02:04.047265] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:45.153 [2024-07-14 04:02:04.047434] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.153 [2024-07-14 04:02:04.047457] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.153 [2024-07-14 04:02:04.047473] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.153 [2024-07-14 04:02:04.050007] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:45.153 [2024-07-14 04:02:04.058687] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.153 [2024-07-14 04:02:04.059087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.153 [2024-07-14 04:02:04.059297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.153 [2024-07-14 04:02:04.059343] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:45.153 [2024-07-14 04:02:04.059361] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:45.153 [2024-07-14 04:02:04.059544] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:45.153 [2024-07-14 04:02:04.059713] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.153 [2024-07-14 04:02:04.059736] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.153 [2024-07-14 04:02:04.059751] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.153 [2024-07-14 04:02:04.061996] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:45.153 [2024-07-14 04:02:04.071288] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.153 [2024-07-14 04:02:04.071770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.153 [2024-07-14 04:02:04.071936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.153 [2024-07-14 04:02:04.071966] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:45.153 [2024-07-14 04:02:04.071984] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:45.153 [2024-07-14 04:02:04.072185] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:45.153 [2024-07-14 04:02:04.072354] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.153 [2024-07-14 04:02:04.072377] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.153 [2024-07-14 04:02:04.072393] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.153 [2024-07-14 04:02:04.074794] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:45.154 [2024-07-14 04:02:04.083917] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.154 [2024-07-14 04:02:04.084466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.154 [2024-07-14 04:02:04.084712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.154 [2024-07-14 04:02:04.084741] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:45.154 [2024-07-14 04:02:04.084759] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:45.154 [2024-07-14 04:02:04.084936] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:45.154 [2024-07-14 04:02:04.085107] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.154 [2024-07-14 04:02:04.085140] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.154 [2024-07-14 04:02:04.085155] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.154 [2024-07-14 04:02:04.087594] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:45.413 [2024-07-14 04:02:04.096760] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.413 [2024-07-14 04:02:04.097183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.413 [2024-07-14 04:02:04.097389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.413 [2024-07-14 04:02:04.097416] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:45.413 [2024-07-14 04:02:04.097432] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:45.413 [2024-07-14 04:02:04.097611] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:45.413 [2024-07-14 04:02:04.097800] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.413 [2024-07-14 04:02:04.097823] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.413 [2024-07-14 04:02:04.097839] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.413 [2024-07-14 04:02:04.100140] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:45.414 [2024-07-14 04:02:04.109462] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.414 [2024-07-14 04:02:04.109829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.414 [2024-07-14 04:02:04.110015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.414 [2024-07-14 04:02:04.110044] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:45.414 [2024-07-14 04:02:04.110062] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:45.414 [2024-07-14 04:02:04.110209] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:45.414 [2024-07-14 04:02:04.110414] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.414 [2024-07-14 04:02:04.110437] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.414 [2024-07-14 04:02:04.110452] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.414 [2024-07-14 04:02:04.112693] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:45.414 [2024-07-14 04:02:04.121951] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.414 [2024-07-14 04:02:04.122372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.414 [2024-07-14 04:02:04.122580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.414 [2024-07-14 04:02:04.122609] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:45.414 [2024-07-14 04:02:04.122626] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:45.414 [2024-07-14 04:02:04.122773] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:45.414 [2024-07-14 04:02:04.122988] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.414 [2024-07-14 04:02:04.123013] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.414 [2024-07-14 04:02:04.123028] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.414 [2024-07-14 04:02:04.125245] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:45.414 [2024-07-14 04:02:04.134285] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.414 [2024-07-14 04:02:04.134708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.414 [2024-07-14 04:02:04.134925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.414 [2024-07-14 04:02:04.134960] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:45.414 [2024-07-14 04:02:04.134979] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:45.414 [2024-07-14 04:02:04.135162] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:45.414 [2024-07-14 04:02:04.135348] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.414 [2024-07-14 04:02:04.135372] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.414 [2024-07-14 04:02:04.135388] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.414 [2024-07-14 04:02:04.137717] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:45.414 [2024-07-14 04:02:04.146916] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.414 [2024-07-14 04:02:04.147277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.414 [2024-07-14 04:02:04.147622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.414 [2024-07-14 04:02:04.147678] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:45.414 [2024-07-14 04:02:04.147696] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:45.414 [2024-07-14 04:02:04.147861] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:45.414 [2024-07-14 04:02:04.148044] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.414 [2024-07-14 04:02:04.148068] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.414 [2024-07-14 04:02:04.148083] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.414 [2024-07-14 04:02:04.150448] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:45.414 [2024-07-14 04:02:04.159606] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.414 [2024-07-14 04:02:04.160039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.414 [2024-07-14 04:02:04.160243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.414 [2024-07-14 04:02:04.160271] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:45.414 [2024-07-14 04:02:04.160288] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:45.414 [2024-07-14 04:02:04.160399] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:45.414 [2024-07-14 04:02:04.160585] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.414 [2024-07-14 04:02:04.160609] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.414 [2024-07-14 04:02:04.160624] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.414 [2024-07-14 04:02:04.162905] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:45.414 [2024-07-14 04:02:04.172284] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.414 [2024-07-14 04:02:04.172698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.414 [2024-07-14 04:02:04.172919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.414 [2024-07-14 04:02:04.172948] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:45.414 [2024-07-14 04:02:04.172971] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:45.414 [2024-07-14 04:02:04.173119] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:45.414 [2024-07-14 04:02:04.173270] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.414 [2024-07-14 04:02:04.173293] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.414 [2024-07-14 04:02:04.173309] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.414 [2024-07-14 04:02:04.175547] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:45.414 [2024-07-14 04:02:04.184731] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.414 [2024-07-14 04:02:04.185124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.414 [2024-07-14 04:02:04.185351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.414 [2024-07-14 04:02:04.185379] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:45.414 [2024-07-14 04:02:04.185397] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:45.414 [2024-07-14 04:02:04.185562] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:45.414 [2024-07-14 04:02:04.185713] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.414 [2024-07-14 04:02:04.185736] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.414 [2024-07-14 04:02:04.185752] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.414 [2024-07-14 04:02:04.188214] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:45.414 [2024-07-14 04:02:04.197123] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.414 [2024-07-14 04:02:04.197507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.414 [2024-07-14 04:02:04.197899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.414 [2024-07-14 04:02:04.197928] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:45.414 [2024-07-14 04:02:04.197946] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:45.414 [2024-07-14 04:02:04.198074] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:45.414 [2024-07-14 04:02:04.198243] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.414 [2024-07-14 04:02:04.198267] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.414 [2024-07-14 04:02:04.198282] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.414 [2024-07-14 04:02:04.200392] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:45.414 [2024-07-14 04:02:04.209784] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.414 [2024-07-14 04:02:04.210159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.414 [2024-07-14 04:02:04.210477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.414 [2024-07-14 04:02:04.210527] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:45.414 [2024-07-14 04:02:04.210544] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:45.414 [2024-07-14 04:02:04.210696] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:45.414 [2024-07-14 04:02:04.210893] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.414 [2024-07-14 04:02:04.210918] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.414 [2024-07-14 04:02:04.210934] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.414 [2024-07-14 04:02:04.213152] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:45.414 [2024-07-14 04:02:04.222440] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.414 [2024-07-14 04:02:04.222969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.414 [2024-07-14 04:02:04.223136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.414 [2024-07-14 04:02:04.223165] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:45.414 [2024-07-14 04:02:04.223182] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:45.414 [2024-07-14 04:02:04.223365] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:45.414 [2024-07-14 04:02:04.223552] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.414 [2024-07-14 04:02:04.223576] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.414 [2024-07-14 04:02:04.223591] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.415 [2024-07-14 04:02:04.225978] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:45.415 [2024-07-14 04:02:04.235181] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.415 [2024-07-14 04:02:04.235596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.415 [2024-07-14 04:02:04.235838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.415 [2024-07-14 04:02:04.235874] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:45.415 [2024-07-14 04:02:04.235895] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:45.415 [2024-07-14 04:02:04.236096] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:45.415 [2024-07-14 04:02:04.236247] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.415 [2024-07-14 04:02:04.236271] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.415 [2024-07-14 04:02:04.236287] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.415 [2024-07-14 04:02:04.238472] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:45.415 [2024-07-14 04:02:04.247980] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.415 [2024-07-14 04:02:04.248354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.415 [2024-07-14 04:02:04.248595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.415 [2024-07-14 04:02:04.248644] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:45.415 [2024-07-14 04:02:04.248662] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:45.415 [2024-07-14 04:02:04.248826] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:45.415 [2024-07-14 04:02:04.248992] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.415 [2024-07-14 04:02:04.249017] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.415 [2024-07-14 04:02:04.249033] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.415 [2024-07-14 04:02:04.251378] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:45.415 [2024-07-14 04:02:04.260570] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.415 [2024-07-14 04:02:04.261039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.415 [2024-07-14 04:02:04.261294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.415 [2024-07-14 04:02:04.261323] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:45.415 [2024-07-14 04:02:04.261341] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:45.415 [2024-07-14 04:02:04.261541] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:45.415 [2024-07-14 04:02:04.261728] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.415 [2024-07-14 04:02:04.261751] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.415 [2024-07-14 04:02:04.261767] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.415 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 2506755 Killed "${NVMF_APP[@]}" "$@" 00:29:45.415 04:02:04 -- host/bdevperf.sh@36 -- # tgt_init 00:29:45.415 04:02:04 -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:29:45.415 04:02:04 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:29:45.415 04:02:04 -- common/autotest_common.sh@712 -- # xtrace_disable 00:29:45.415 04:02:04 -- common/autotest_common.sh@10 -- # set +x 00:29:45.415 [2024-07-14 04:02:04.264011] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:45.415 04:02:04 -- nvmf/common.sh@469 -- # nvmfpid=2507794 00:29:45.415 04:02:04 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:29:45.415 04:02:04 -- nvmf/common.sh@470 -- # waitforlisten 2507794 00:29:45.415 04:02:04 -- common/autotest_common.sh@819 -- # '[' -z 2507794 ']' 00:29:45.415 04:02:04 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:45.415 04:02:04 -- common/autotest_common.sh@824 -- # local max_retries=100 00:29:45.415 04:02:04 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:45.415 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:29:45.415 04:02:04 -- common/autotest_common.sh@828 -- # xtrace_disable 00:29:45.415 04:02:04 -- common/autotest_common.sh@10 -- # set +x 00:29:45.415 [2024-07-14 04:02:04.273197] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.415 [2024-07-14 04:02:04.273611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.415 [2024-07-14 04:02:04.273816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.415 [2024-07-14 04:02:04.273844] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:45.415 [2024-07-14 04:02:04.273861] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:45.415 [2024-07-14 04:02:04.274055] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:45.415 [2024-07-14 04:02:04.274223] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.415 [2024-07-14 04:02:04.274247] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.415 [2024-07-14 04:02:04.274268] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.415 [2024-07-14 04:02:04.276653] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:45.415 [2024-07-14 04:02:04.285699] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.415 [2024-07-14 04:02:04.286066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.415 [2024-07-14 04:02:04.286245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.415 [2024-07-14 04:02:04.286270] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:45.415 [2024-07-14 04:02:04.286286] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:45.415 [2024-07-14 04:02:04.286435] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:45.415 [2024-07-14 04:02:04.286559] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.415 [2024-07-14 04:02:04.286580] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.415 [2024-07-14 04:02:04.286595] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.415 [2024-07-14 04:02:04.288748] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
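At this point bdevperf.sh has killed the nvmf target process ("Killed ${NVMF_APP[@]}") and is restarting it via tgt_init / nvmfappstart, so while nothing is listening on 10.0.0.2:4420 every host-side reconnect attempt gets ECONNREFUSED, which is exactly the stream of errno 111 errors surrounding these trace lines. The "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." step (waitforlisten in the test scripts) polls until the restarted nvmf_tgt has bound its RPC socket. The snippet below is only a minimal sketch of that idea, not the actual autotest_common.sh implementation; the helper name wait_for_listen and its timing parameters are assumptions.

```python
# Minimal sketch (assumed, not the real waitforlisten helper) of polling the
# RPC socket /var/tmp/spdk.sock until the freshly restarted nvmf_tgt accepts connections.
import socket
import time


def wait_for_listen(sock_path: str = "/var/tmp/spdk.sock",
                    timeout_s: float = 30.0, interval_s: float = 0.2) -> bool:
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        try:
            s.connect(sock_path)     # succeeds once the target has bound the RPC socket
            return True
        except OSError:              # ENOENT / ECONNREFUSED while the target is still starting
            time.sleep(interval_s)
        finally:
            s.close()
    return False
```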
00:29:45.415 [2024-07-14 04:02:04.298096] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.415 [2024-07-14 04:02:04.298482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.415 [2024-07-14 04:02:04.298674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.415 [2024-07-14 04:02:04.298700] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:45.415 [2024-07-14 04:02:04.298717] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:45.415 [2024-07-14 04:02:04.298912] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:45.415 [2024-07-14 04:02:04.299065] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.415 [2024-07-14 04:02:04.299087] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.415 [2024-07-14 04:02:04.299102] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.415 [2024-07-14 04:02:04.301338] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:45.415 [2024-07-14 04:02:04.310471] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.415 [2024-07-14 04:02:04.310824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.415 [2024-07-14 04:02:04.311023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.415 [2024-07-14 04:02:04.311050] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:45.415 [2024-07-14 04:02:04.311066] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:45.415 [2024-07-14 04:02:04.311214] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:45.415 [2024-07-14 04:02:04.311363] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.415 [2024-07-14 04:02:04.311384] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.415 [2024-07-14 04:02:04.311398] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.415 [2024-07-14 04:02:04.311804] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:29:45.415 [2024-07-14 04:02:04.311899] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:45.415 [2024-07-14 04:02:04.313595] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:45.415 [2024-07-14 04:02:04.322877] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.415 [2024-07-14 04:02:04.323287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.415 [2024-07-14 04:02:04.323488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.415 [2024-07-14 04:02:04.323514] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:45.415 [2024-07-14 04:02:04.323530] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:45.415 [2024-07-14 04:02:04.323699] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:45.415 [2024-07-14 04:02:04.323844] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.415 [2024-07-14 04:02:04.323875] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.415 [2024-07-14 04:02:04.323906] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.415 [2024-07-14 04:02:04.326270] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:45.415 [2024-07-14 04:02:04.335278] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.415 [2024-07-14 04:02:04.335623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.415 [2024-07-14 04:02:04.336425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.415 [2024-07-14 04:02:04.336467] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:45.415 [2024-07-14 04:02:04.336482] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:45.416 [2024-07-14 04:02:04.336601] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:45.416 [2024-07-14 04:02:04.336711] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.416 [2024-07-14 04:02:04.336731] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.416 [2024-07-14 04:02:04.336744] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.416 [2024-07-14 04:02:04.338684] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:45.416 [2024-07-14 04:02:04.347716] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.416 [2024-07-14 04:02:04.348063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.416 [2024-07-14 04:02:04.348228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.416 [2024-07-14 04:02:04.348254] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:45.416 [2024-07-14 04:02:04.348270] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:45.416 [2024-07-14 04:02:04.348460] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:45.416 [2024-07-14 04:02:04.348648] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.416 [2024-07-14 04:02:04.348672] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.416 [2024-07-14 04:02:04.348686] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.416 EAL: No free 2048 kB hugepages reported on node 1 00:29:45.416 [2024-07-14 04:02:04.350917] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:45.677 [2024-07-14 04:02:04.360210] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.677 [2024-07-14 04:02:04.360579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.677 [2024-07-14 04:02:04.360789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.677 [2024-07-14 04:02:04.360818] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:45.677 [2024-07-14 04:02:04.360836] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:45.677 [2024-07-14 04:02:04.361073] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:45.677 [2024-07-14 04:02:04.361227] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.677 [2024-07-14 04:02:04.361251] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.677 [2024-07-14 04:02:04.361267] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.677 [2024-07-14 04:02:04.363624] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:45.677 [2024-07-14 04:02:04.372650] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.677 [2024-07-14 04:02:04.373013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.677 [2024-07-14 04:02:04.373198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.677 [2024-07-14 04:02:04.373224] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:45.677 [2024-07-14 04:02:04.373240] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:45.677 [2024-07-14 04:02:04.373447] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:45.677 [2024-07-14 04:02:04.373615] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.677 [2024-07-14 04:02:04.373639] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.677 [2024-07-14 04:02:04.373654] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.677 [2024-07-14 04:02:04.375980] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:45.677 [2024-07-14 04:02:04.385138] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.677 [2024-07-14 04:02:04.385513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.677 [2024-07-14 04:02:04.385673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.677 [2024-07-14 04:02:04.385698] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:45.677 [2024-07-14 04:02:04.385714] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:45.677 [2024-07-14 04:02:04.385914] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:45.677 [2024-07-14 04:02:04.385944] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:45.677 [2024-07-14 04:02:04.386128] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.677 [2024-07-14 04:02:04.386178] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.677 [2024-07-14 04:02:04.386193] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.677 [2024-07-14 04:02:04.388429] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:45.677 [2024-07-14 04:02:04.398012] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.677 [2024-07-14 04:02:04.398610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.677 [2024-07-14 04:02:04.398823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.677 [2024-07-14 04:02:04.398850] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:45.677 [2024-07-14 04:02:04.398877] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:45.677 [2024-07-14 04:02:04.399005] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:45.677 [2024-07-14 04:02:04.399192] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.677 [2024-07-14 04:02:04.399213] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.677 [2024-07-14 04:02:04.399246] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.677 [2024-07-14 04:02:04.401599] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:45.677 [2024-07-14 04:02:04.410494] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.677 [2024-07-14 04:02:04.410888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.677 [2024-07-14 04:02:04.411054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.677 [2024-07-14 04:02:04.411080] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:45.677 [2024-07-14 04:02:04.411097] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:45.677 [2024-07-14 04:02:04.411282] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:45.677 [2024-07-14 04:02:04.411489] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.677 [2024-07-14 04:02:04.411512] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.677 [2024-07-14 04:02:04.411529] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.677 [2024-07-14 04:02:04.413811] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:45.677 [2024-07-14 04:02:04.423009] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.677 [2024-07-14 04:02:04.423490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.677 [2024-07-14 04:02:04.423718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.677 [2024-07-14 04:02:04.423743] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:45.677 [2024-07-14 04:02:04.423759] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:45.677 [2024-07-14 04:02:04.423950] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:45.677 [2024-07-14 04:02:04.424130] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.677 [2024-07-14 04:02:04.424168] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.677 [2024-07-14 04:02:04.424194] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.677 [2024-07-14 04:02:04.426458] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:45.677 [2024-07-14 04:02:04.435424] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.677 [2024-07-14 04:02:04.435928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.677 [2024-07-14 04:02:04.436149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.677 [2024-07-14 04:02:04.436176] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:45.677 [2024-07-14 04:02:04.436194] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:45.677 [2024-07-14 04:02:04.436396] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:45.677 [2024-07-14 04:02:04.436568] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.677 [2024-07-14 04:02:04.436592] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.678 [2024-07-14 04:02:04.436610] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.678 [2024-07-14 04:02:04.438992] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:45.678 [2024-07-14 04:02:04.447891] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.678 [2024-07-14 04:02:04.448377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.678 [2024-07-14 04:02:04.448585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.678 [2024-07-14 04:02:04.448612] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:45.678 [2024-07-14 04:02:04.448632] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:45.678 [2024-07-14 04:02:04.448810] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:45.678 [2024-07-14 04:02:04.448967] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.678 [2024-07-14 04:02:04.448988] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.678 [2024-07-14 04:02:04.449004] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.678 [2024-07-14 04:02:04.451310] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:45.678 [2024-07-14 04:02:04.460557] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.678 [2024-07-14 04:02:04.461014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.678 [2024-07-14 04:02:04.461206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.678 [2024-07-14 04:02:04.461232] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:45.678 [2024-07-14 04:02:04.461249] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:45.678 [2024-07-14 04:02:04.461419] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:45.678 [2024-07-14 04:02:04.461589] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.678 [2024-07-14 04:02:04.461612] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.678 [2024-07-14 04:02:04.461628] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.678 [2024-07-14 04:02:04.463909] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:45.678 [2024-07-14 04:02:04.473224] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.678 [2024-07-14 04:02:04.473597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.678 [2024-07-14 04:02:04.473858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.678 [2024-07-14 04:02:04.473892] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:45.678 [2024-07-14 04:02:04.473910] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:45.678 [2024-07-14 04:02:04.474075] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:45.678 [2024-07-14 04:02:04.474280] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.678 [2024-07-14 04:02:04.474305] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.678 [2024-07-14 04:02:04.474321] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.678 [2024-07-14 04:02:04.476155] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:29:45.678 [2024-07-14 04:02:04.476302] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:45.678 [2024-07-14 04:02:04.476320] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:45.678 [2024-07-14 04:02:04.476333] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:45.678 [2024-07-14 04:02:04.476388] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:29:45.678 [2024-07-14 04:02:04.476446] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:29:45.678 [2024-07-14 04:02:04.476448] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:45.678 [2024-07-14 04:02:04.476541] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
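Two errno values recur throughout this run: 111 in the posix_sock_create connect() failures and 9 in the "Failed to flush tqpair ... (9): Bad file descriptor" messages. A small illustrative C sketch (not part of the test) that prints their standard Linux/glibc names:

    /* Map the two errno values seen repeatedly in the log to their names. */
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
            int codes[] = {111, 9};
            for (size_t i = 0; i < sizeof(codes) / sizeof(codes[0]); i++) {
                    /* Expected output on Linux/glibc:
                     *   errno 111 -> Connection refused
                     *   errno 9   -> Bad file descriptor */
                    printf("errno %d -> %s\n", codes[i], strerror(codes[i]));
            }
            return 0;
    }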
00:29:45.678 [2024-07-14 04:02:04.485580] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.678 [2024-07-14 04:02:04.486118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.678 [2024-07-14 04:02:04.486343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.678 [2024-07-14 04:02:04.486369] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:45.678 [2024-07-14 04:02:04.486389] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:45.678 [2024-07-14 04:02:04.486580] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:45.678 [2024-07-14 04:02:04.486732] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.678 [2024-07-14 04:02:04.486755] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.678 [2024-07-14 04:02:04.486773] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.678 [2024-07-14 04:02:04.488980] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:45.678 [2024-07-14 04:02:04.498012] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.678 [2024-07-14 04:02:04.498546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.678 [2024-07-14 04:02:04.498779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.678 [2024-07-14 04:02:04.498806] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:45.678 [2024-07-14 04:02:04.498827] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:45.678 [2024-07-14 04:02:04.498999] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:45.678 [2024-07-14 04:02:04.499125] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.678 [2024-07-14 04:02:04.499158] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.678 [2024-07-14 04:02:04.499191] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.678 [2024-07-14 04:02:04.501262] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:45.678 [2024-07-14 04:02:04.510432] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.678 [2024-07-14 04:02:04.510942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.678 [2024-07-14 04:02:04.511144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.678 [2024-07-14 04:02:04.511180] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:45.678 [2024-07-14 04:02:04.511201] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:45.678 [2024-07-14 04:02:04.511327] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:45.678 [2024-07-14 04:02:04.511513] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.678 [2024-07-14 04:02:04.511534] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.678 [2024-07-14 04:02:04.511553] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.678 [2024-07-14 04:02:04.513706] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:45.678 [2024-07-14 04:02:04.522724] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.678 [2024-07-14 04:02:04.523295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.678 [2024-07-14 04:02:04.523512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.678 [2024-07-14 04:02:04.523538] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:45.678 [2024-07-14 04:02:04.523559] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:45.678 [2024-07-14 04:02:04.523702] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:45.678 [2024-07-14 04:02:04.523841] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.678 [2024-07-14 04:02:04.523885] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.678 [2024-07-14 04:02:04.523904] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.678 [2024-07-14 04:02:04.525986] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:45.678 [2024-07-14 04:02:04.535049] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.678 [2024-07-14 04:02:04.535532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.678 [2024-07-14 04:02:04.535725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.678 [2024-07-14 04:02:04.535751] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:45.678 [2024-07-14 04:02:04.535770] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:45.678 [2024-07-14 04:02:04.535930] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:45.678 [2024-07-14 04:02:04.536127] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.678 [2024-07-14 04:02:04.536156] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.678 [2024-07-14 04:02:04.536188] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.678 [2024-07-14 04:02:04.538445] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:45.678 [2024-07-14 04:02:04.547399] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.678 [2024-07-14 04:02:04.547829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.678 [2024-07-14 04:02:04.548053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.678 [2024-07-14 04:02:04.548080] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:45.678 [2024-07-14 04:02:04.548100] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:45.678 [2024-07-14 04:02:04.548307] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:45.678 [2024-07-14 04:02:04.548428] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.678 [2024-07-14 04:02:04.548449] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.678 [2024-07-14 04:02:04.548467] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.678 [2024-07-14 04:02:04.550431] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:45.679 [2024-07-14 04:02:04.559591] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.679 [2024-07-14 04:02:04.559986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.679 [2024-07-14 04:02:04.560208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.679 [2024-07-14 04:02:04.560234] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:45.679 [2024-07-14 04:02:04.560251] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:45.679 [2024-07-14 04:02:04.560388] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:45.679 [2024-07-14 04:02:04.560492] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.679 [2024-07-14 04:02:04.560512] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.679 [2024-07-14 04:02:04.560527] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.679 [2024-07-14 04:02:04.562603] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:45.679 [2024-07-14 04:02:04.572002] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.679 [2024-07-14 04:02:04.572443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.679 [2024-07-14 04:02:04.572599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.679 [2024-07-14 04:02:04.572625] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:45.679 [2024-07-14 04:02:04.572641] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:45.679 [2024-07-14 04:02:04.572822] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:45.679 [2024-07-14 04:02:04.572950] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.679 [2024-07-14 04:02:04.572978] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.679 [2024-07-14 04:02:04.572994] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.679 [2024-07-14 04:02:04.575121] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:45.679 [2024-07-14 04:02:04.584191] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.679 [2024-07-14 04:02:04.584574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.679 [2024-07-14 04:02:04.584734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.679 [2024-07-14 04:02:04.584760] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:45.679 [2024-07-14 04:02:04.584775] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:45.679 [2024-07-14 04:02:04.584949] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:45.679 [2024-07-14 04:02:04.585085] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.679 [2024-07-14 04:02:04.585106] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.679 [2024-07-14 04:02:04.585119] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.679 [2024-07-14 04:02:04.587083] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:45.679 [2024-07-14 04:02:04.596357] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.679 [2024-07-14 04:02:04.596688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.679 [2024-07-14 04:02:04.596877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.679 [2024-07-14 04:02:04.596903] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:45.679 [2024-07-14 04:02:04.596919] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:45.679 [2024-07-14 04:02:04.597083] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:45.679 [2024-07-14 04:02:04.597264] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.679 [2024-07-14 04:02:04.597284] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.679 [2024-07-14 04:02:04.597298] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.679 [2024-07-14 04:02:04.599323] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:45.679 [2024-07-14 04:02:04.608447] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.679 [2024-07-14 04:02:04.608790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.679 [2024-07-14 04:02:04.608995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.679 [2024-07-14 04:02:04.609022] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:45.679 [2024-07-14 04:02:04.609039] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:45.679 [2024-07-14 04:02:04.609172] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:45.679 [2024-07-14 04:02:04.609355] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.679 [2024-07-14 04:02:04.609377] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.679 [2024-07-14 04:02:04.609396] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.679 [2024-07-14 04:02:04.611841] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:45.965 [2024-07-14 04:02:04.620825] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.965 [2024-07-14 04:02:04.621233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.965 [2024-07-14 04:02:04.621457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.965 [2024-07-14 04:02:04.621485] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:45.965 [2024-07-14 04:02:04.621502] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:45.965 [2024-07-14 04:02:04.621668] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:45.965 [2024-07-14 04:02:04.621805] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.965 [2024-07-14 04:02:04.621826] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.965 [2024-07-14 04:02:04.621840] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.965 [2024-07-14 04:02:04.624030] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:45.965 [2024-07-14 04:02:04.633054] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.965 [2024-07-14 04:02:04.633424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.965 [2024-07-14 04:02:04.633610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.965 [2024-07-14 04:02:04.633636] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:45.965 [2024-07-14 04:02:04.633652] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:45.965 [2024-07-14 04:02:04.633833] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:45.965 [2024-07-14 04:02:04.633978] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.965 [2024-07-14 04:02:04.634000] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.965 [2024-07-14 04:02:04.634014] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.965 [2024-07-14 04:02:04.635982] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:45.965 [2024-07-14 04:02:04.645330] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.965 [2024-07-14 04:02:04.645649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.965 [2024-07-14 04:02:04.645825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.965 [2024-07-14 04:02:04.645851] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:45.965 [2024-07-14 04:02:04.645877] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:45.965 [2024-07-14 04:02:04.646044] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:45.965 [2024-07-14 04:02:04.646248] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.965 [2024-07-14 04:02:04.646269] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.965 [2024-07-14 04:02:04.646283] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.965 [2024-07-14 04:02:04.648283] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:45.965 [2024-07-14 04:02:04.657581] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.965 [2024-07-14 04:02:04.657931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.965 [2024-07-14 04:02:04.658097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.966 [2024-07-14 04:02:04.658122] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:45.966 [2024-07-14 04:02:04.658138] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:45.966 [2024-07-14 04:02:04.658319] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:45.966 [2024-07-14 04:02:04.658468] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.966 [2024-07-14 04:02:04.658489] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.966 [2024-07-14 04:02:04.658502] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.966 [2024-07-14 04:02:04.660603] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:45.966 [2024-07-14 04:02:04.670043] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.966 [2024-07-14 04:02:04.670432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.966 [2024-07-14 04:02:04.670608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.966 [2024-07-14 04:02:04.670633] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:45.966 [2024-07-14 04:02:04.670649] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:45.966 [2024-07-14 04:02:04.670812] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:45.966 [2024-07-14 04:02:04.670988] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.966 [2024-07-14 04:02:04.671010] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.966 [2024-07-14 04:02:04.671024] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.966 [2024-07-14 04:02:04.673112] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:45.966 [2024-07-14 04:02:04.682211] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.966 [2024-07-14 04:02:04.682547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.966 [2024-07-14 04:02:04.682726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.966 [2024-07-14 04:02:04.682752] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:45.966 [2024-07-14 04:02:04.682768] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:45.966 [2024-07-14 04:02:04.682924] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:45.966 [2024-07-14 04:02:04.683077] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.966 [2024-07-14 04:02:04.683098] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.966 [2024-07-14 04:02:04.683112] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.966 [2024-07-14 04:02:04.685215] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:45.966 [2024-07-14 04:02:04.694650] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.966 [2024-07-14 04:02:04.694962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.966 [2024-07-14 04:02:04.695172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.966 [2024-07-14 04:02:04.695198] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:45.966 [2024-07-14 04:02:04.695214] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:45.966 [2024-07-14 04:02:04.695409] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:45.966 [2024-07-14 04:02:04.695572] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.966 [2024-07-14 04:02:04.695608] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.966 [2024-07-14 04:02:04.695622] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.966 [2024-07-14 04:02:04.697611] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:45.966 [2024-07-14 04:02:04.706954] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.966 [2024-07-14 04:02:04.707317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.966 [2024-07-14 04:02:04.707504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.966 [2024-07-14 04:02:04.707530] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:45.966 [2024-07-14 04:02:04.707546] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:45.966 [2024-07-14 04:02:04.707678] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:45.966 [2024-07-14 04:02:04.707840] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.966 [2024-07-14 04:02:04.707861] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.966 [2024-07-14 04:02:04.707899] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.966 [2024-07-14 04:02:04.709965] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:45.966 [2024-07-14 04:02:04.719215] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.966 [2024-07-14 04:02:04.719550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.966 [2024-07-14 04:02:04.719701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.966 [2024-07-14 04:02:04.719727] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:45.966 [2024-07-14 04:02:04.719742] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:45.966 [2024-07-14 04:02:04.719899] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:45.966 [2024-07-14 04:02:04.720035] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.966 [2024-07-14 04:02:04.720056] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.966 [2024-07-14 04:02:04.720070] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.966 [2024-07-14 04:02:04.722033] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:45.966 [2024-07-14 04:02:04.731557] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.966 [2024-07-14 04:02:04.731922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.966 [2024-07-14 04:02:04.732101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.966 [2024-07-14 04:02:04.732127] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:45.966 [2024-07-14 04:02:04.732143] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:45.966 [2024-07-14 04:02:04.732322] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:45.966 [2024-07-14 04:02:04.732485] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.966 [2024-07-14 04:02:04.732506] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.966 [2024-07-14 04:02:04.732519] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.966 [2024-07-14 04:02:04.734578] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:45.966 [2024-07-14 04:02:04.743836] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.966 [2024-07-14 04:02:04.744164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.966 [2024-07-14 04:02:04.744336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.966 [2024-07-14 04:02:04.744362] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:45.966 [2024-07-14 04:02:04.744377] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:45.966 [2024-07-14 04:02:04.744524] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:45.966 [2024-07-14 04:02:04.744656] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.966 [2024-07-14 04:02:04.744676] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.966 [2024-07-14 04:02:04.744690] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.966 [2024-07-14 04:02:04.746793] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:45.966 [2024-07-14 04:02:04.756087] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.966 [2024-07-14 04:02:04.756477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.966 [2024-07-14 04:02:04.756631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.966 [2024-07-14 04:02:04.756656] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:45.966 [2024-07-14 04:02:04.756672] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:45.966 [2024-07-14 04:02:04.756821] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:45.966 [2024-07-14 04:02:04.756980] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.966 [2024-07-14 04:02:04.757002] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.966 [2024-07-14 04:02:04.757017] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.966 [2024-07-14 04:02:04.759028] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:45.966 [2024-07-14 04:02:04.768351] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.966 [2024-07-14 04:02:04.768712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.966 [2024-07-14 04:02:04.768896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.966 [2024-07-14 04:02:04.768928] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:45.966 [2024-07-14 04:02:04.768945] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:45.966 [2024-07-14 04:02:04.769126] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:45.966 [2024-07-14 04:02:04.769274] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.966 [2024-07-14 04:02:04.769295] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.966 [2024-07-14 04:02:04.769309] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.966 [2024-07-14 04:02:04.771315] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:45.966 [2024-07-14 04:02:04.780625] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.967 [2024-07-14 04:02:04.780981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.967 [2024-07-14 04:02:04.781133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.967 [2024-07-14 04:02:04.781158] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:45.967 [2024-07-14 04:02:04.781174] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:45.967 [2024-07-14 04:02:04.781369] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:45.967 [2024-07-14 04:02:04.781564] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.967 [2024-07-14 04:02:04.781584] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.967 [2024-07-14 04:02:04.781598] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.967 [2024-07-14 04:02:04.783669] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:45.967 [2024-07-14 04:02:04.792858] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.967 [2024-07-14 04:02:04.793225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.967 [2024-07-14 04:02:04.793379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.967 [2024-07-14 04:02:04.793404] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:45.967 [2024-07-14 04:02:04.793420] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:45.967 [2024-07-14 04:02:04.793552] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:45.967 [2024-07-14 04:02:04.793719] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.967 [2024-07-14 04:02:04.793740] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.967 [2024-07-14 04:02:04.793753] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.967 [2024-07-14 04:02:04.795795] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:45.967 [2024-07-14 04:02:04.805101] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.967 [2024-07-14 04:02:04.805483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.967 [2024-07-14 04:02:04.805646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.967 [2024-07-14 04:02:04.805672] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:45.967 [2024-07-14 04:02:04.805692] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:45.967 [2024-07-14 04:02:04.805896] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:45.967 [2024-07-14 04:02:04.806066] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.967 [2024-07-14 04:02:04.806088] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.967 [2024-07-14 04:02:04.806102] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.967 [2024-07-14 04:02:04.808063] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:45.967 [2024-07-14 04:02:04.817297] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.967 [2024-07-14 04:02:04.817647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.967 [2024-07-14 04:02:04.817823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.967 [2024-07-14 04:02:04.817848] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:45.967 [2024-07-14 04:02:04.817872] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:45.967 [2024-07-14 04:02:04.818023] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:45.967 [2024-07-14 04:02:04.818190] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.967 [2024-07-14 04:02:04.818211] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.967 [2024-07-14 04:02:04.818225] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.967 [2024-07-14 04:02:04.820293] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:45.967 [2024-07-14 04:02:04.829712] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.967 [2024-07-14 04:02:04.830074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.967 [2024-07-14 04:02:04.830227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.967 [2024-07-14 04:02:04.830253] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:45.967 [2024-07-14 04:02:04.830269] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:45.967 [2024-07-14 04:02:04.830368] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:45.967 [2024-07-14 04:02:04.830531] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.967 [2024-07-14 04:02:04.830551] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.967 [2024-07-14 04:02:04.830564] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.967 [2024-07-14 04:02:04.832607] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:45.967 [2024-07-14 04:02:04.842023] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.967 [2024-07-14 04:02:04.842378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.967 [2024-07-14 04:02:04.842534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.967 [2024-07-14 04:02:04.842561] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:45.967 [2024-07-14 04:02:04.842577] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:45.967 [2024-07-14 04:02:04.842731] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:45.967 [2024-07-14 04:02:04.842919] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.967 [2024-07-14 04:02:04.842940] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.967 [2024-07-14 04:02:04.842954] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.967 [2024-07-14 04:02:04.845031] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:45.967 [2024-07-14 04:02:04.854264] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.967 [2024-07-14 04:02:04.854615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.967 [2024-07-14 04:02:04.854788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.967 [2024-07-14 04:02:04.854813] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:45.967 [2024-07-14 04:02:04.854829] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:45.967 [2024-07-14 04:02:04.854985] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:45.967 [2024-07-14 04:02:04.855122] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.967 [2024-07-14 04:02:04.855143] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.967 [2024-07-14 04:02:04.855157] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.967 [2024-07-14 04:02:04.857156] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:45.967 [2024-07-14 04:02:04.866569] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.967 [2024-07-14 04:02:04.866871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.967 [2024-07-14 04:02:04.867059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.967 [2024-07-14 04:02:04.867084] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:45.967 [2024-07-14 04:02:04.867100] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:45.967 [2024-07-14 04:02:04.867297] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:45.967 [2024-07-14 04:02:04.867508] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.967 [2024-07-14 04:02:04.867529] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.967 [2024-07-14 04:02:04.867542] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.967 [2024-07-14 04:02:04.869565] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:45.967 [2024-07-14 04:02:04.878757] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.967 [2024-07-14 04:02:04.879160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.967 [2024-07-14 04:02:04.879338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.967 [2024-07-14 04:02:04.879364] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:45.967 [2024-07-14 04:02:04.879380] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:45.967 [2024-07-14 04:02:04.879528] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:45.967 [2024-07-14 04:02:04.879702] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.967 [2024-07-14 04:02:04.879723] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.967 [2024-07-14 04:02:04.879736] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.967 [2024-07-14 04:02:04.881967] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:45.967 [2024-07-14 04:02:04.890962] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.967 [2024-07-14 04:02:04.891274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.967 [2024-07-14 04:02:04.891426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.967 [2024-07-14 04:02:04.891452] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:45.967 [2024-07-14 04:02:04.891467] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:45.967 [2024-07-14 04:02:04.891615] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:45.967 [2024-07-14 04:02:04.891778] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.967 [2024-07-14 04:02:04.891799] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.967 [2024-07-14 04:02:04.891812] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.968 [2024-07-14 04:02:04.893970] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:46.227 [2024-07-14 04:02:04.903565] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:46.227 [2024-07-14 04:02:04.903909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.227 [2024-07-14 04:02:04.904089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.227 [2024-07-14 04:02:04.904115] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:46.227 [2024-07-14 04:02:04.904131] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:46.227 [2024-07-14 04:02:04.904328] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:46.227 [2024-07-14 04:02:04.904511] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:46.227 [2024-07-14 04:02:04.904532] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:46.227 [2024-07-14 04:02:04.904546] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:46.227 [2024-07-14 04:02:04.906608] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:46.227 [2024-07-14 04:02:04.915844] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:46.227 [2024-07-14 04:02:04.916187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.227 [2024-07-14 04:02:04.916385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.227 [2024-07-14 04:02:04.916410] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:46.227 [2024-07-14 04:02:04.916426] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:46.227 [2024-07-14 04:02:04.916589] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:46.227 [2024-07-14 04:02:04.916752] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:46.227 [2024-07-14 04:02:04.916777] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:46.227 [2024-07-14 04:02:04.916791] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:46.227 [2024-07-14 04:02:04.918651] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:46.227 [2024-07-14 04:02:04.928258] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:46.227 [2024-07-14 04:02:04.928598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.227 [2024-07-14 04:02:04.928764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.227 [2024-07-14 04:02:04.928789] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:46.227 [2024-07-14 04:02:04.928805] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:46.227 [2024-07-14 04:02:04.928963] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:46.227 [2024-07-14 04:02:04.929131] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:46.227 [2024-07-14 04:02:04.929152] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:46.227 [2024-07-14 04:02:04.929180] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:46.227 [2024-07-14 04:02:04.931225] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:46.227 [2024-07-14 04:02:04.940356] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:46.227 [2024-07-14 04:02:04.940668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.227 [2024-07-14 04:02:04.940844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.227 [2024-07-14 04:02:04.940876] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:46.227 [2024-07-14 04:02:04.940894] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:46.228 [2024-07-14 04:02:04.941027] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:46.228 [2024-07-14 04:02:04.941225] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:46.228 [2024-07-14 04:02:04.941246] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:46.228 [2024-07-14 04:02:04.941260] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:46.228 [2024-07-14 04:02:04.943420] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:46.228 [2024-07-14 04:02:04.952575] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:46.228 [2024-07-14 04:02:04.952895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.228 [2024-07-14 04:02:04.953097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.228 [2024-07-14 04:02:04.953122] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:46.228 [2024-07-14 04:02:04.953138] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:46.228 [2024-07-14 04:02:04.953287] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:46.228 [2024-07-14 04:02:04.953466] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:46.228 [2024-07-14 04:02:04.953486] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:46.228 [2024-07-14 04:02:04.953504] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:46.228 [2024-07-14 04:02:04.955515] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:46.228 [2024-07-14 04:02:04.965032] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:46.228 [2024-07-14 04:02:04.965374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.228 [2024-07-14 04:02:04.965557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.228 [2024-07-14 04:02:04.965582] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:46.228 [2024-07-14 04:02:04.965598] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:46.228 [2024-07-14 04:02:04.965747] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:46.228 [2024-07-14 04:02:04.965922] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:46.228 [2024-07-14 04:02:04.965944] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:46.228 [2024-07-14 04:02:04.965958] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:46.228 [2024-07-14 04:02:04.968030] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:46.228 [2024-07-14 04:02:04.977328] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:46.228 [2024-07-14 04:02:04.977705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.228 [2024-07-14 04:02:04.977872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.228 [2024-07-14 04:02:04.977897] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:46.228 [2024-07-14 04:02:04.977912] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:46.228 [2024-07-14 04:02:04.978077] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:46.228 [2024-07-14 04:02:04.978227] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:46.228 [2024-07-14 04:02:04.978248] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:46.228 [2024-07-14 04:02:04.978261] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:46.228 [2024-07-14 04:02:04.980428] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:46.228 [2024-07-14 04:02:04.989569] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:46.228 [2024-07-14 04:02:04.989930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.228 [2024-07-14 04:02:04.990088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.228 [2024-07-14 04:02:04.990114] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:46.228 [2024-07-14 04:02:04.990130] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:46.228 [2024-07-14 04:02:04.990263] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:46.228 [2024-07-14 04:02:04.990395] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:46.228 [2024-07-14 04:02:04.990416] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:46.228 [2024-07-14 04:02:04.990429] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:46.228 [2024-07-14 04:02:04.992463] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:46.228 [2024-07-14 04:02:05.001910] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:46.228 [2024-07-14 04:02:05.002266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.228 [2024-07-14 04:02:05.002425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.228 [2024-07-14 04:02:05.002451] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:46.228 [2024-07-14 04:02:05.002466] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:46.228 [2024-07-14 04:02:05.002632] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:46.228 [2024-07-14 04:02:05.002798] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:46.228 [2024-07-14 04:02:05.002819] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:46.228 [2024-07-14 04:02:05.002832] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:46.228 [2024-07-14 04:02:05.004839] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:46.228 [2024-07-14 04:02:05.014389] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:46.228 [2024-07-14 04:02:05.014736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.228 [2024-07-14 04:02:05.014899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.228 [2024-07-14 04:02:05.014926] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:46.228 [2024-07-14 04:02:05.014942] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:46.228 [2024-07-14 04:02:05.015091] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:46.228 [2024-07-14 04:02:05.015241] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:46.228 [2024-07-14 04:02:05.015262] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:46.228 [2024-07-14 04:02:05.015276] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:46.228 [2024-07-14 04:02:05.017346] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:46.228 [2024-07-14 04:02:05.026597] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:46.228 [2024-07-14 04:02:05.026958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.228 [2024-07-14 04:02:05.027144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.228 [2024-07-14 04:02:05.027170] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:46.228 [2024-07-14 04:02:05.027186] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:46.228 [2024-07-14 04:02:05.027382] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:46.228 [2024-07-14 04:02:05.027529] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:46.228 [2024-07-14 04:02:05.027550] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:46.228 [2024-07-14 04:02:05.027564] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:46.228 [2024-07-14 04:02:05.029685] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:46.228 [2024-07-14 04:02:05.038863] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:46.228 [2024-07-14 04:02:05.039263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.228 [2024-07-14 04:02:05.039454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.228 [2024-07-14 04:02:05.039480] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:46.228 [2024-07-14 04:02:05.039496] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:46.228 [2024-07-14 04:02:05.039628] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:46.228 [2024-07-14 04:02:05.039778] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:46.228 [2024-07-14 04:02:05.039799] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:46.228 [2024-07-14 04:02:05.039812] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:46.228 [2024-07-14 04:02:05.041964] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:46.228 [2024-07-14 04:02:05.051105] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:46.228 [2024-07-14 04:02:05.051433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.228 [2024-07-14 04:02:05.051603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.228 [2024-07-14 04:02:05.051629] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:46.228 [2024-07-14 04:02:05.051645] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:46.228 [2024-07-14 04:02:05.051838] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:46.228 [2024-07-14 04:02:05.052046] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:46.228 [2024-07-14 04:02:05.052068] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:46.228 [2024-07-14 04:02:05.052081] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:46.228 [2024-07-14 04:02:05.054170] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:46.228 [2024-07-14 04:02:05.063369] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:46.228 [2024-07-14 04:02:05.063792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.228 [2024-07-14 04:02:05.063956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.228 [2024-07-14 04:02:05.063982] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:46.228 [2024-07-14 04:02:05.063998] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:46.229 [2024-07-14 04:02:05.064097] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:46.229 [2024-07-14 04:02:05.064278] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:46.229 [2024-07-14 04:02:05.064299] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:46.229 [2024-07-14 04:02:05.064312] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:46.229 [2024-07-14 04:02:05.066368] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:46.229 [2024-07-14 04:02:05.075658] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:46.229 [2024-07-14 04:02:05.076017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.229 [2024-07-14 04:02:05.076172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.229 [2024-07-14 04:02:05.076196] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:46.229 [2024-07-14 04:02:05.076211] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:46.229 [2024-07-14 04:02:05.076293] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:46.229 [2024-07-14 04:02:05.076458] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:46.229 [2024-07-14 04:02:05.076479] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:46.229 [2024-07-14 04:02:05.076493] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:46.229 [2024-07-14 04:02:05.078626] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:46.229 [2024-07-14 04:02:05.088082] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:46.229 [2024-07-14 04:02:05.088463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.229 [2024-07-14 04:02:05.088610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.229 [2024-07-14 04:02:05.088635] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:46.229 [2024-07-14 04:02:05.088651] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:46.229 [2024-07-14 04:02:05.088767] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:46.229 [2024-07-14 04:02:05.088975] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:46.229 [2024-07-14 04:02:05.088997] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:46.229 [2024-07-14 04:02:05.089011] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:46.229 [2024-07-14 04:02:05.091099] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:46.229 [2024-07-14 04:02:05.100683] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:46.229 [2024-07-14 04:02:05.101100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.229 [2024-07-14 04:02:05.101269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.229 [2024-07-14 04:02:05.101296] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:46.229 [2024-07-14 04:02:05.101312] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:46.229 [2024-07-14 04:02:05.101478] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:46.229 [2024-07-14 04:02:05.101649] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:46.229 [2024-07-14 04:02:05.101673] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:46.229 [2024-07-14 04:02:05.101687] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:46.229 [2024-07-14 04:02:05.103945] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:46.229 [2024-07-14 04:02:05.112987] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:46.229 [2024-07-14 04:02:05.113356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.229 [2024-07-14 04:02:05.113558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.229 [2024-07-14 04:02:05.113584] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:46.229 [2024-07-14 04:02:05.113600] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:46.229 [2024-07-14 04:02:05.113797] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:46.229 [2024-07-14 04:02:05.113975] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:46.229 [2024-07-14 04:02:05.113997] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:46.229 [2024-07-14 04:02:05.114011] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:46.229 [2024-07-14 04:02:05.116142] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:46.229 [2024-07-14 04:02:05.125391] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:46.229 [2024-07-14 04:02:05.125719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.229 [2024-07-14 04:02:05.125899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.229 [2024-07-14 04:02:05.125926] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:46.229 [2024-07-14 04:02:05.125942] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:46.229 [2024-07-14 04:02:05.126058] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:46.229 [2024-07-14 04:02:05.126239] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:46.229 [2024-07-14 04:02:05.126260] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:46.229 [2024-07-14 04:02:05.126273] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:46.229 [2024-07-14 04:02:05.128255] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:46.229 [2024-07-14 04:02:05.137574] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:46.229 [2024-07-14 04:02:05.137934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.229 [2024-07-14 04:02:05.138120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.229 [2024-07-14 04:02:05.138147] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:46.229 [2024-07-14 04:02:05.138164] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:46.229 [2024-07-14 04:02:05.138297] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:46.229 [2024-07-14 04:02:05.138445] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:46.229 [2024-07-14 04:02:05.138465] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:46.229 [2024-07-14 04:02:05.138479] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:46.229 [2024-07-14 04:02:05.140595] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:46.229 [2024-07-14 04:02:05.150056] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:46.229 [2024-07-14 04:02:05.150463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.229 [2024-07-14 04:02:05.150667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.229 [2024-07-14 04:02:05.150693] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:46.229 [2024-07-14 04:02:05.150714] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:46.229 [2024-07-14 04:02:05.150889] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:46.229 [2024-07-14 04:02:05.151042] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:46.229 [2024-07-14 04:02:05.151063] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:46.229 [2024-07-14 04:02:05.151077] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:46.229 [2024-07-14 04:02:05.152996] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:46.229 [2024-07-14 04:02:05.162579] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:46.229 [2024-07-14 04:02:05.162923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.229 [2024-07-14 04:02:05.163087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.229 [2024-07-14 04:02:05.163115] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:46.229 [2024-07-14 04:02:05.163132] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:46.229 [2024-07-14 04:02:05.163281] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:46.229 [2024-07-14 04:02:05.163494] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:46.229 [2024-07-14 04:02:05.163517] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:46.229 [2024-07-14 04:02:05.163531] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:46.229 [2024-07-14 04:02:05.165692] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:46.489 [2024-07-14 04:02:05.174947] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:46.489 [2024-07-14 04:02:05.175223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.489 [2024-07-14 04:02:05.175377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.489 [2024-07-14 04:02:05.175403] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:46.489 [2024-07-14 04:02:05.175419] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:46.489 [2024-07-14 04:02:05.175536] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:46.489 [2024-07-14 04:02:05.175717] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:46.489 [2024-07-14 04:02:05.175738] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:46.489 [2024-07-14 04:02:05.175751] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:46.489 [2024-07-14 04:02:05.177716] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:46.489 [2024-07-14 04:02:05.187256] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:46.489 [2024-07-14 04:02:05.187572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.489 [2024-07-14 04:02:05.187755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.489 [2024-07-14 04:02:05.187781] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:46.489 [2024-07-14 04:02:05.187797] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:46.489 [2024-07-14 04:02:05.187959] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:46.489 [2024-07-14 04:02:05.188096] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:46.489 [2024-07-14 04:02:05.188118] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:46.489 [2024-07-14 04:02:05.188132] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:46.489 [2024-07-14 04:02:05.190191] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:46.489 [2024-07-14 04:02:05.199618] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:46.489 [2024-07-14 04:02:05.199944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.489 [2024-07-14 04:02:05.200119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.489 [2024-07-14 04:02:05.200145] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:46.489 [2024-07-14 04:02:05.200161] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:46.489 [2024-07-14 04:02:05.200309] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:46.489 [2024-07-14 04:02:05.200475] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:46.489 [2024-07-14 04:02:05.200496] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:46.489 [2024-07-14 04:02:05.200510] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:46.489 [2024-07-14 04:02:05.202459] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:46.489 [2024-07-14 04:02:05.211962] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:46.489 [2024-07-14 04:02:05.212337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.489 [2024-07-14 04:02:05.212516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.489 [2024-07-14 04:02:05.212541] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:46.489 [2024-07-14 04:02:05.212557] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:46.489 [2024-07-14 04:02:05.212737] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:46.489 [2024-07-14 04:02:05.212897] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:46.489 [2024-07-14 04:02:05.212919] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:46.489 [2024-07-14 04:02:05.212933] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:46.489 [2024-07-14 04:02:05.214992] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:46.489 [2024-07-14 04:02:05.224375] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:46.489 [2024-07-14 04:02:05.224766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.489 [2024-07-14 04:02:05.224945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.489 [2024-07-14 04:02:05.224971] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:46.489 [2024-07-14 04:02:05.224987] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:46.489 [2024-07-14 04:02:05.225102] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:46.489 [2024-07-14 04:02:05.225224] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:46.489 [2024-07-14 04:02:05.225245] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:46.489 [2024-07-14 04:02:05.225258] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:46.489 [2024-07-14 04:02:05.227423] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:46.489 [2024-07-14 04:02:05.236790] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:46.489 [2024-07-14 04:02:05.237118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.489 [2024-07-14 04:02:05.237298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.489 [2024-07-14 04:02:05.237323] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:46.489 [2024-07-14 04:02:05.237339] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:46.489 [2024-07-14 04:02:05.237472] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:46.489 [2024-07-14 04:02:05.237624] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:46.489 [2024-07-14 04:02:05.237644] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:46.489 [2024-07-14 04:02:05.237659] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:46.489 [2024-07-14 04:02:05.239638] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:46.489 04:02:05 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:29:46.489 04:02:05 -- common/autotest_common.sh@852 -- # return 0 00:29:46.489 04:02:05 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:29:46.489 04:02:05 -- common/autotest_common.sh@718 -- # xtrace_disable 00:29:46.489 04:02:05 -- common/autotest_common.sh@10 -- # set +x 00:29:46.489 [2024-07-14 04:02:05.249127] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:46.489 [2024-07-14 04:02:05.249477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.489 [2024-07-14 04:02:05.249627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.489 [2024-07-14 04:02:05.249652] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:46.489 [2024-07-14 04:02:05.249668] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:46.489 [2024-07-14 04:02:05.249800] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:46.489 [2024-07-14 04:02:05.249979] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:46.489 [2024-07-14 04:02:05.250001] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:46.489 [2024-07-14 04:02:05.250015] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:46.489 [2024-07-14 04:02:05.252185] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:46.489 [2024-07-14 04:02:05.261562] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:46.489 [2024-07-14 04:02:05.262012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.489 [2024-07-14 04:02:05.262179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.489 [2024-07-14 04:02:05.262205] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:46.489 [2024-07-14 04:02:05.262220] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:46.489 [2024-07-14 04:02:05.262406] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:46.489 [2024-07-14 04:02:05.262571] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:46.489 [2024-07-14 04:02:05.262592] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:46.490 [2024-07-14 04:02:05.262606] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:46.490 [2024-07-14 04:02:05.264741] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:46.490 04:02:05 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:46.490 [2024-07-14 04:02:05.273906] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:46.490 04:02:05 -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:46.490 [2024-07-14 04:02:05.274201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.490 [2024-07-14 04:02:05.274382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.490 04:02:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:46.490 [2024-07-14 04:02:05.274409] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:46.490 [2024-07-14 04:02:05.274440] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:46.490 [2024-07-14 04:02:05.274584] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:46.490 04:02:05 -- common/autotest_common.sh@10 -- # set +x 00:29:46.490 [2024-07-14 04:02:05.274717] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:46.490 [2024-07-14 04:02:05.274739] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:46.490 [2024-07-14 04:02:05.274752] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:46.490 [2024-07-14 04:02:05.276983] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:46.490 [2024-07-14 04:02:05.280317] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:46.490 04:02:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:46.490 04:02:05 -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:46.490 04:02:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:46.490 [2024-07-14 04:02:05.286277] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:46.490 04:02:05 -- common/autotest_common.sh@10 -- # set +x 00:29:46.490 [2024-07-14 04:02:05.286598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.490 [2024-07-14 04:02:05.286773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.490 [2024-07-14 04:02:05.286799] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:46.490 [2024-07-14 04:02:05.286815] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:46.490 [2024-07-14 04:02:05.286988] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:46.490 [2024-07-14 04:02:05.287140] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:46.490 [2024-07-14 04:02:05.287161] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:46.490 [2024-07-14 04:02:05.287175] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:29:46.490 [2024-07-14 04:02:05.289363] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:46.490 [2024-07-14 04:02:05.298559] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:46.490 [2024-07-14 04:02:05.298893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.490 [2024-07-14 04:02:05.299068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.490 [2024-07-14 04:02:05.299094] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:46.490 [2024-07-14 04:02:05.299109] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:46.490 [2024-07-14 04:02:05.299241] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:46.490 [2024-07-14 04:02:05.299434] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:46.490 [2024-07-14 04:02:05.299454] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:46.490 [2024-07-14 04:02:05.299467] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:46.490 [2024-07-14 04:02:05.301591] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:46.490 [2024-07-14 04:02:05.311129] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:46.490 [2024-07-14 04:02:05.311530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.490 [2024-07-14 04:02:05.311699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.490 [2024-07-14 04:02:05.311725] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:46.490 [2024-07-14 04:02:05.311742] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:46.490 [2024-07-14 04:02:05.311918] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:46.490 [2024-07-14 04:02:05.312072] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:46.490 [2024-07-14 04:02:05.312093] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:46.490 [2024-07-14 04:02:05.312107] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:46.490 [2024-07-14 04:02:05.314234] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:46.490 [2024-07-14 04:02:05.323309] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:46.490 [2024-07-14 04:02:05.323919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.490 [2024-07-14 04:02:05.324130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.490 [2024-07-14 04:02:05.324156] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:46.490 [2024-07-14 04:02:05.324184] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:46.490 [2024-07-14 04:02:05.324346] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:46.490 [2024-07-14 04:02:05.324487] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:46.490 [2024-07-14 04:02:05.324509] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:46.490 [2024-07-14 04:02:05.324527] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:46.490 [2024-07-14 04:02:05.326453] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:46.490 Malloc0 00:29:46.490 04:02:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:46.490 04:02:05 -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:46.490 04:02:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:46.490 04:02:05 -- common/autotest_common.sh@10 -- # set +x 00:29:46.490 [2024-07-14 04:02:05.335436] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:46.490 [2024-07-14 04:02:05.335810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.490 [2024-07-14 04:02:05.336014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.490 [2024-07-14 04:02:05.336040] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:46.490 [2024-07-14 04:02:05.336056] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:46.490 [2024-07-14 04:02:05.336189] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:46.490 [2024-07-14 04:02:05.336371] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:46.490 [2024-07-14 04:02:05.336392] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:46.490 [2024-07-14 04:02:05.336406] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:29:46.490 04:02:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:46.490 04:02:05 -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:46.490 04:02:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:46.490 04:02:05 -- common/autotest_common.sh@10 -- # set +x 00:29:46.490 [2024-07-14 04:02:05.338418] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:46.490 04:02:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:46.490 04:02:05 -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:46.490 04:02:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:46.490 04:02:05 -- common/autotest_common.sh@10 -- # set +x 00:29:46.490 [2024-07-14 04:02:05.347984] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:46.490 [2024-07-14 04:02:05.348307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.490 [2024-07-14 04:02:05.348517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.490 [2024-07-14 04:02:05.348543] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1788030 with addr=10.0.0.2, port=4420 00:29:46.490 [2024-07-14 04:02:05.348559] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1788030 is same with the state(5) to be set 00:29:46.490 [2024-07-14 04:02:05.348724] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1788030 (9): Bad file descriptor 00:29:46.490 [2024-07-14 04:02:05.348887] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:46.490 [2024-07-14 04:02:05.348909] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:46.490 [2024-07-14 04:02:05.348923] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:46.490 [2024-07-14 04:02:05.348981] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:46.490 [2024-07-14 04:02:05.351066] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:46.490 04:02:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:46.490 04:02:05 -- host/bdevperf.sh@38 -- # wait 2507060 00:29:46.490 [2024-07-14 04:02:05.360296] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:46.749 [2024-07-14 04:02:05.435542] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
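For reference, the target-side setup that host/bdevperf.sh traces above boils down to five RPCs; rpc_cmd is the test helper around SPDK's scripts/rpc.py, so an equivalent manual sequence would look roughly like this (a sketch only, assuming a running nvmf_tgt reachable over its default RPC socket):

  # Same parameters as traced above: TCP transport, a 64 MiB malloc bdev with
  # 512-byte blocks, one subsystem allowing any host with serial SPDK00000000000001,
  # exported as a listener on 10.0.0.2:4420.
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

Once the listener is up, the bdevperf process being waited on (pid 2507060 above) can finally connect, which is why the reset storm ends with "Resetting controller successful".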
00:29:54.862 00:29:54.862 Latency(us) 00:29:54.862 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:54.862 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:29:54.862 Verification LBA range: start 0x0 length 0x4000 00:29:54.862 Nvme1n1 : 15.01 9500.22 37.11 15748.27 0.00 5054.96 989.11 19903.53 00:29:54.862 =================================================================================================================== 00:29:54.863 Total : 9500.22 37.11 15748.27 0.00 5054.96 989.11 19903.53 00:29:55.120 04:02:13 -- host/bdevperf.sh@39 -- # sync 00:29:55.120 04:02:13 -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:55.120 04:02:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:55.120 04:02:13 -- common/autotest_common.sh@10 -- # set +x 00:29:55.120 04:02:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:55.120 04:02:13 -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:29:55.120 04:02:13 -- host/bdevperf.sh@44 -- # nvmftestfini 00:29:55.120 04:02:13 -- nvmf/common.sh@476 -- # nvmfcleanup 00:29:55.120 04:02:13 -- nvmf/common.sh@116 -- # sync 00:29:55.120 04:02:13 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:29:55.120 04:02:13 -- nvmf/common.sh@119 -- # set +e 00:29:55.120 04:02:13 -- nvmf/common.sh@120 -- # for i in {1..20} 00:29:55.120 04:02:13 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:29:55.120 rmmod nvme_tcp 00:29:55.120 rmmod nvme_fabrics 00:29:55.120 rmmod nvme_keyring 00:29:55.120 04:02:13 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:29:55.120 04:02:13 -- nvmf/common.sh@123 -- # set -e 00:29:55.120 04:02:13 -- nvmf/common.sh@124 -- # return 0 00:29:55.120 04:02:13 -- nvmf/common.sh@477 -- # '[' -n 2507794 ']' 00:29:55.120 04:02:13 -- nvmf/common.sh@478 -- # killprocess 2507794 00:29:55.120 04:02:13 -- common/autotest_common.sh@926 -- # '[' -z 2507794 ']' 00:29:55.120 04:02:13 -- common/autotest_common.sh@930 -- # kill -0 2507794 00:29:55.120 04:02:13 -- common/autotest_common.sh@931 -- # uname 00:29:55.120 04:02:13 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:29:55.121 04:02:13 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2507794 00:29:55.121 04:02:14 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:29:55.121 04:02:14 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:29:55.121 04:02:14 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2507794' 00:29:55.121 killing process with pid 2507794 00:29:55.121 04:02:14 -- common/autotest_common.sh@945 -- # kill 2507794 00:29:55.121 04:02:14 -- common/autotest_common.sh@950 -- # wait 2507794 00:29:55.380 04:02:14 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:29:55.380 04:02:14 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:29:55.380 04:02:14 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:29:55.380 04:02:14 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:55.380 04:02:14 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:29:55.380 04:02:14 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:55.380 04:02:14 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:55.380 04:02:14 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:57.957 04:02:16 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:29:57.957 00:29:57.957 real 0m23.082s 00:29:57.957 user 1m2.697s 00:29:57.957 sys 0m4.139s 00:29:57.957 04:02:16 -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:29:57.957 04:02:16 -- common/autotest_common.sh@10 -- # set +x 00:29:57.957 ************************************ 00:29:57.957 END TEST nvmf_bdevperf 00:29:57.957 ************************************ 00:29:57.957 04:02:16 -- nvmf/nvmf.sh@124 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:29:57.957 04:02:16 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:29:57.957 04:02:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:57.957 04:02:16 -- common/autotest_common.sh@10 -- # set +x 00:29:57.957 ************************************ 00:29:57.957 START TEST nvmf_target_disconnect 00:29:57.957 ************************************ 00:29:57.957 04:02:16 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:29:57.957 * Looking for test storage... 00:29:57.957 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:57.957 04:02:16 -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:57.957 04:02:16 -- nvmf/common.sh@7 -- # uname -s 00:29:57.957 04:02:16 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:57.957 04:02:16 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:57.957 04:02:16 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:57.957 04:02:16 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:57.957 04:02:16 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:57.957 04:02:16 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:57.957 04:02:16 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:57.957 04:02:16 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:57.957 04:02:16 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:57.957 04:02:16 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:57.958 04:02:16 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:57.958 04:02:16 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:57.958 04:02:16 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:57.958 04:02:16 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:57.958 04:02:16 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:57.958 04:02:16 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:57.958 04:02:16 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:57.958 04:02:16 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:57.958 04:02:16 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:57.958 04:02:16 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:57.958 04:02:16 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:57.958 04:02:16 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:57.958 04:02:16 -- paths/export.sh@5 -- # export PATH 00:29:57.958 04:02:16 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:57.958 04:02:16 -- nvmf/common.sh@46 -- # : 0 00:29:57.958 04:02:16 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:29:57.958 04:02:16 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:29:57.958 04:02:16 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:29:57.958 04:02:16 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:57.958 04:02:16 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:57.958 04:02:16 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:29:57.958 04:02:16 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:29:57.958 04:02:16 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:29:57.958 04:02:16 -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:29:57.958 04:02:16 -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:29:57.958 04:02:16 -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:29:57.958 04:02:16 -- host/target_disconnect.sh@77 -- # nvmftestinit 00:29:57.958 04:02:16 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:29:57.958 04:02:16 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:57.958 04:02:16 -- nvmf/common.sh@436 -- # prepare_net_devs 00:29:57.958 04:02:16 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:29:57.958 04:02:16 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:29:57.958 04:02:16 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:57.958 04:02:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:57.958 04:02:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:57.958 04:02:16 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:29:57.958 04:02:16 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:29:57.958 04:02:16 -- nvmf/common.sh@284 -- # 
xtrace_disable 00:29:57.958 04:02:16 -- common/autotest_common.sh@10 -- # set +x 00:29:59.866 04:02:18 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:29:59.866 04:02:18 -- nvmf/common.sh@290 -- # pci_devs=() 00:29:59.866 04:02:18 -- nvmf/common.sh@290 -- # local -a pci_devs 00:29:59.866 04:02:18 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:29:59.866 04:02:18 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:29:59.866 04:02:18 -- nvmf/common.sh@292 -- # pci_drivers=() 00:29:59.866 04:02:18 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:29:59.866 04:02:18 -- nvmf/common.sh@294 -- # net_devs=() 00:29:59.866 04:02:18 -- nvmf/common.sh@294 -- # local -ga net_devs 00:29:59.866 04:02:18 -- nvmf/common.sh@295 -- # e810=() 00:29:59.866 04:02:18 -- nvmf/common.sh@295 -- # local -ga e810 00:29:59.866 04:02:18 -- nvmf/common.sh@296 -- # x722=() 00:29:59.866 04:02:18 -- nvmf/common.sh@296 -- # local -ga x722 00:29:59.866 04:02:18 -- nvmf/common.sh@297 -- # mlx=() 00:29:59.866 04:02:18 -- nvmf/common.sh@297 -- # local -ga mlx 00:29:59.866 04:02:18 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:59.866 04:02:18 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:59.866 04:02:18 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:59.866 04:02:18 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:59.866 04:02:18 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:59.866 04:02:18 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:59.866 04:02:18 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:59.866 04:02:18 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:59.866 04:02:18 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:59.866 04:02:18 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:59.866 04:02:18 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:59.866 04:02:18 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:29:59.866 04:02:18 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:29:59.867 04:02:18 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:29:59.867 04:02:18 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:29:59.867 04:02:18 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:29:59.867 04:02:18 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:29:59.867 04:02:18 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:29:59.867 04:02:18 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:59.867 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:59.867 04:02:18 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:29:59.867 04:02:18 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:29:59.867 04:02:18 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:59.867 04:02:18 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:59.867 04:02:18 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:29:59.867 04:02:18 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:29:59.867 04:02:18 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:59.867 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:59.867 04:02:18 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:29:59.867 04:02:18 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:29:59.867 04:02:18 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:59.867 04:02:18 -- nvmf/common.sh@350 -- 
# [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:59.867 04:02:18 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:29:59.867 04:02:18 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:29:59.867 04:02:18 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:29:59.867 04:02:18 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:29:59.867 04:02:18 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:29:59.867 04:02:18 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:59.867 04:02:18 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:29:59.867 04:02:18 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:59.867 04:02:18 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:59.867 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:59.867 04:02:18 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:29:59.867 04:02:18 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:29:59.867 04:02:18 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:59.867 04:02:18 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:29:59.867 04:02:18 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:59.867 04:02:18 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:59.867 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:59.867 04:02:18 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:29:59.867 04:02:18 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:29:59.867 04:02:18 -- nvmf/common.sh@402 -- # is_hw=yes 00:29:59.867 04:02:18 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:29:59.867 04:02:18 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:29:59.867 04:02:18 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:29:59.867 04:02:18 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:59.867 04:02:18 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:59.867 04:02:18 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:59.867 04:02:18 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:29:59.867 04:02:18 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:59.867 04:02:18 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:59.867 04:02:18 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:29:59.867 04:02:18 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:59.867 04:02:18 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:59.867 04:02:18 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:29:59.867 04:02:18 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:29:59.867 04:02:18 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:29:59.867 04:02:18 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:59.867 04:02:18 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:59.867 04:02:18 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:59.867 04:02:18 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:29:59.867 04:02:18 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:59.867 04:02:18 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:59.867 04:02:18 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:59.867 04:02:18 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:29:59.867 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:29:59.867 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.175 ms 00:29:59.867 00:29:59.867 --- 10.0.0.2 ping statistics --- 00:29:59.867 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:59.867 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms 00:29:59.867 04:02:18 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:59.867 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:59.867 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.149 ms 00:29:59.867 00:29:59.867 --- 10.0.0.1 ping statistics --- 00:29:59.867 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:59.867 rtt min/avg/max/mdev = 0.149/0.149/0.149/0.000 ms 00:29:59.867 04:02:18 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:59.867 04:02:18 -- nvmf/common.sh@410 -- # return 0 00:29:59.867 04:02:18 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:29:59.867 04:02:18 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:59.867 04:02:18 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:29:59.867 04:02:18 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:29:59.867 04:02:18 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:59.867 04:02:18 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:29:59.867 04:02:18 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:29:59.867 04:02:18 -- host/target_disconnect.sh@78 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:29:59.867 04:02:18 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:29:59.867 04:02:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:59.867 04:02:18 -- common/autotest_common.sh@10 -- # set +x 00:29:59.867 ************************************ 00:29:59.867 START TEST nvmf_target_disconnect_tc1 00:29:59.867 ************************************ 00:29:59.867 04:02:18 -- common/autotest_common.sh@1104 -- # nvmf_target_disconnect_tc1 00:29:59.867 04:02:18 -- host/target_disconnect.sh@32 -- # set +e 00:29:59.867 04:02:18 -- host/target_disconnect.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:59.867 EAL: No free 2048 kB hugepages reported on node 1 00:29:59.867 [2024-07-14 04:02:18.647736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:59.867 [2024-07-14 04:02:18.648019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:59.867 [2024-07-14 04:02:18.648051] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe90510 with addr=10.0.0.2, port=4420 00:29:59.867 [2024-07-14 04:02:18.648086] nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:29:59.867 [2024-07-14 04:02:18.648107] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:29:59.867 [2024-07-14 04:02:18.648121] nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed 00:29:59.867 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:29:59.867 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:29:59.867 Initializing NVMe Controllers 00:29:59.867 04:02:18 -- host/target_disconnect.sh@33 -- # trap - ERR 00:29:59.867 04:02:18 -- host/target_disconnect.sh@33 -- # print_backtrace 00:29:59.867 04:02:18 -- common/autotest_common.sh@1132 -- # [[ hxBET =~ e ]] 00:29:59.867 04:02:18 -- common/autotest_common.sh@1132 -- # return 0 00:29:59.867 
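The target_disconnect run reuses the two-port e810 setup prepared just above: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace and addressed as 10.0.0.2, cvl_0_1 stays in the root namespace as 10.0.0.1, and the two pings confirm the path in both directions before any NVMe/TCP traffic flows. Condensed from the nvmf/common.sh trace above (a recap of commands already shown, not new configuration; interface names are specific to this host):

  # Namespace plumbing as traced above
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                  # root ns -> target ns
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target ns -> root ns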
04:02:18 -- host/target_disconnect.sh@37 -- # '[' 1 '!=' 1 ']' 00:29:59.867 04:02:18 -- host/target_disconnect.sh@41 -- # set -e 00:29:59.867 00:29:59.867 real 0m0.099s 00:29:59.867 user 0m0.044s 00:29:59.867 sys 0m0.054s 00:29:59.867 04:02:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:59.867 04:02:18 -- common/autotest_common.sh@10 -- # set +x 00:29:59.867 ************************************ 00:29:59.867 END TEST nvmf_target_disconnect_tc1 00:29:59.867 ************************************ 00:29:59.867 04:02:18 -- host/target_disconnect.sh@79 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:29:59.867 04:02:18 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:29:59.867 04:02:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:59.867 04:02:18 -- common/autotest_common.sh@10 -- # set +x 00:29:59.867 ************************************ 00:29:59.867 START TEST nvmf_target_disconnect_tc2 00:29:59.867 ************************************ 00:29:59.867 04:02:18 -- common/autotest_common.sh@1104 -- # nvmf_target_disconnect_tc2 00:29:59.867 04:02:18 -- host/target_disconnect.sh@45 -- # disconnect_init 10.0.0.2 00:29:59.867 04:02:18 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:29:59.867 04:02:18 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:29:59.867 04:02:18 -- common/autotest_common.sh@712 -- # xtrace_disable 00:29:59.867 04:02:18 -- common/autotest_common.sh@10 -- # set +x 00:29:59.867 04:02:18 -- nvmf/common.sh@469 -- # nvmfpid=2510942 00:29:59.867 04:02:18 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:29:59.867 04:02:18 -- nvmf/common.sh@470 -- # waitforlisten 2510942 00:29:59.867 04:02:18 -- common/autotest_common.sh@819 -- # '[' -z 2510942 ']' 00:29:59.867 04:02:18 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:59.867 04:02:18 -- common/autotest_common.sh@824 -- # local max_retries=100 00:29:59.867 04:02:18 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:59.867 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:59.867 04:02:18 -- common/autotest_common.sh@828 -- # xtrace_disable 00:29:59.867 04:02:18 -- common/autotest_common.sh@10 -- # set +x 00:29:59.867 [2024-07-14 04:02:18.731463] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:29:59.867 [2024-07-14 04:02:18.731549] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:59.867 EAL: No free 2048 kB hugepages reported on node 1 00:29:59.867 [2024-07-14 04:02:18.800355] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:00.126 [2024-07-14 04:02:18.890811] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:30:00.126 [2024-07-14 04:02:18.890980] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:00.126 [2024-07-14 04:02:18.890998] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:00.126 [2024-07-14 04:02:18.891010] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
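tc1 deliberately points the reconnect example at 10.0.0.2:4420 before any nvmf target has been started, so spdk_nvme_probe() is expected to fail; errexit is switched off around the call and only the outcome is checked afterwards. The pattern, condensed (a sketch only; the binary is the build/examples/reconnect path printed above):

  # "Expected to fail" pattern used by nvmf_target_disconnect_tc1
  set +e
  ./build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
  rc=$?
  set -e
  [ "$rc" -ne 0 ] || { echo "probe unexpectedly succeeded"; exit 1; }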
00:30:00.126 [2024-07-14 04:02:18.891104] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:30:00.126 [2024-07-14 04:02:18.891231] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:30:00.126 [2024-07-14 04:02:18.891297] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:30:00.126 [2024-07-14 04:02:18.891300] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:30:01.060 04:02:19 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:30:01.060 04:02:19 -- common/autotest_common.sh@852 -- # return 0 00:30:01.060 04:02:19 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:30:01.060 04:02:19 -- common/autotest_common.sh@718 -- # xtrace_disable 00:30:01.060 04:02:19 -- common/autotest_common.sh@10 -- # set +x 00:30:01.060 04:02:19 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:01.060 04:02:19 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:01.060 04:02:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:01.060 04:02:19 -- common/autotest_common.sh@10 -- # set +x 00:30:01.060 Malloc0 00:30:01.060 04:02:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:01.060 04:02:19 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:30:01.060 04:02:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:01.060 04:02:19 -- common/autotest_common.sh@10 -- # set +x 00:30:01.060 [2024-07-14 04:02:19.724536] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:01.060 04:02:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:01.060 04:02:19 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:01.060 04:02:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:01.060 04:02:19 -- common/autotest_common.sh@10 -- # set +x 00:30:01.060 04:02:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:01.060 04:02:19 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:01.060 04:02:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:01.060 04:02:19 -- common/autotest_common.sh@10 -- # set +x 00:30:01.060 04:02:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:01.060 04:02:19 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:01.060 04:02:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:01.060 04:02:19 -- common/autotest_common.sh@10 -- # set +x 00:30:01.060 [2024-07-14 04:02:19.752761] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:01.060 04:02:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:01.060 04:02:19 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:01.060 04:02:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:01.060 04:02:19 -- common/autotest_common.sh@10 -- # set +x 00:30:01.060 04:02:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:01.060 04:02:19 -- host/target_disconnect.sh@50 -- # reconnectpid=2511100 00:30:01.060 04:02:19 -- host/target_disconnect.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:01.060 04:02:19 -- 
host/target_disconnect.sh@52 -- # sleep 2 00:30:01.060 EAL: No free 2048 kB hugepages reported on node 1 00:30:02.972 04:02:21 -- host/target_disconnect.sh@53 -- # kill -9 2510942 00:30:02.972 04:02:21 -- host/target_disconnect.sh@55 -- # sleep 2 00:30:02.972 Read completed with error (sct=0, sc=8) 00:30:02.972 starting I/O failed 00:30:02.972 Read completed with error (sct=0, sc=8) 00:30:02.972 starting I/O failed 00:30:02.972 Read completed with error (sct=0, sc=8) 00:30:02.972 starting I/O failed 00:30:02.972 Read completed with error (sct=0, sc=8) 00:30:02.972 starting I/O failed 00:30:02.972 Read completed with error (sct=0, sc=8) 00:30:02.972 starting I/O failed 00:30:02.972 Read completed with error (sct=0, sc=8) 00:30:02.972 starting I/O failed 00:30:02.972 Read completed with error (sct=0, sc=8) 00:30:02.972 starting I/O failed 00:30:02.972 Read completed with error (sct=0, sc=8) 00:30:02.972 starting I/O failed 00:30:02.972 Read completed with error (sct=0, sc=8) 00:30:02.972 starting I/O failed 00:30:02.972 Read completed with error (sct=0, sc=8) 00:30:02.972 starting I/O failed 00:30:02.972 Read completed with error (sct=0, sc=8) 00:30:02.972 starting I/O failed 00:30:02.972 Read completed with error (sct=0, sc=8) 00:30:02.972 starting I/O failed 00:30:02.972 Read completed with error (sct=0, sc=8) 00:30:02.972 starting I/O failed 00:30:02.972 Read completed with error (sct=0, sc=8) 00:30:02.972 starting I/O failed 00:30:02.972 Read completed with error (sct=0, sc=8) 00:30:02.972 starting I/O failed 00:30:02.972 Read completed with error (sct=0, sc=8) 00:30:02.972 starting I/O failed 00:30:02.972 Write completed with error (sct=0, sc=8) 00:30:02.972 starting I/O failed 00:30:02.972 Write completed with error (sct=0, sc=8) 00:30:02.972 starting I/O failed 00:30:02.972 Read completed with error (sct=0, sc=8) 00:30:02.972 starting I/O failed 00:30:02.972 Read completed with error (sct=0, sc=8) 00:30:02.972 starting I/O failed 00:30:02.972 Read completed with error (sct=0, sc=8) 00:30:02.972 starting I/O failed 00:30:02.972 Read completed with error (sct=0, sc=8) 00:30:02.972 starting I/O failed 00:30:02.972 Write completed with error (sct=0, sc=8) 00:30:02.972 starting I/O failed 00:30:02.972 Write completed with error (sct=0, sc=8) 00:30:02.972 starting I/O failed 00:30:02.972 Write completed with error (sct=0, sc=8) 00:30:02.972 starting I/O failed 00:30:02.972 Read completed with error (sct=0, sc=8) 00:30:02.972 starting I/O failed 00:30:02.972 Write completed with error (sct=0, sc=8) 00:30:02.972 starting I/O failed 00:30:02.972 Read completed with error (sct=0, sc=8) 00:30:02.972 starting I/O failed 00:30:02.972 Read completed with error (sct=0, sc=8) 00:30:02.972 starting I/O failed 00:30:02.972 Write completed with error (sct=0, sc=8) 00:30:02.972 starting I/O failed 00:30:02.972 Write completed with error (sct=0, sc=8) 00:30:02.972 starting I/O failed 00:30:02.972 Write completed with error (sct=0, sc=8) 00:30:02.972 starting I/O failed 00:30:02.972 Read completed with error (sct=0, sc=8) 00:30:02.972 starting I/O failed 00:30:02.972 Read completed with error (sct=0, sc=8) 00:30:02.972 starting I/O failed 00:30:02.972 [2024-07-14 04:02:21.777145] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:02.972 Read completed with error (sct=0, sc=8) 00:30:02.972 starting I/O failed 00:30:02.972 Read completed with error (sct=0, sc=8) 00:30:02.972 starting I/O failed 00:30:02.972 Read completed with 
error (sct=0, sc=8) 00:30:02.972 starting I/O failed 00:30:02.972 Read completed with error (sct=0, sc=8) 00:30:02.972 starting I/O failed 00:30:02.972 Read completed with error (sct=0, sc=8) 00:30:02.972 starting I/O failed 00:30:02.972 Read completed with error (sct=0, sc=8) 00:30:02.972 starting I/O failed 00:30:02.972 Read completed with error (sct=0, sc=8) 00:30:02.972 starting I/O failed 00:30:02.972 Write completed with error (sct=0, sc=8) 00:30:02.972 starting I/O failed 00:30:02.972 Write completed with error (sct=0, sc=8) 00:30:02.972 starting I/O failed 00:30:02.972 Write completed with error (sct=0, sc=8) 00:30:02.972 starting I/O failed 00:30:02.972 Read completed with error (sct=0, sc=8) 00:30:02.972 starting I/O failed 00:30:02.972 Read completed with error (sct=0, sc=8) 00:30:02.972 starting I/O failed 00:30:02.972 Write completed with error (sct=0, sc=8) 00:30:02.972 starting I/O failed 00:30:02.972 Read completed with error (sct=0, sc=8) 00:30:02.972 starting I/O failed 00:30:02.972 Write completed with error (sct=0, sc=8) 00:30:02.972 starting I/O failed 00:30:02.972 Write completed with error (sct=0, sc=8) 00:30:02.972 starting I/O failed 00:30:02.972 Read completed with error (sct=0, sc=8) 00:30:02.972 starting I/O failed 00:30:02.972 Read completed with error (sct=0, sc=8) 00:30:02.972 starting I/O failed 00:30:02.972 Read completed with error (sct=0, sc=8) 00:30:02.972 starting I/O failed 00:30:02.972 Read completed with error (sct=0, sc=8) 00:30:02.972 starting I/O failed 00:30:02.972 Read completed with error (sct=0, sc=8) 00:30:02.972 starting I/O failed 00:30:02.972 Read completed with error (sct=0, sc=8) 00:30:02.972 starting I/O failed 00:30:02.972 Read completed with error (sct=0, sc=8) 00:30:02.972 starting I/O failed 00:30:02.972 Write completed with error (sct=0, sc=8) 00:30:02.972 starting I/O failed 00:30:02.972 Read completed with error (sct=0, sc=8) 00:30:02.972 starting I/O failed 00:30:02.972 Read completed with error (sct=0, sc=8) 00:30:02.972 starting I/O failed 00:30:02.972 Read completed with error (sct=0, sc=8) 00:30:02.972 starting I/O failed 00:30:02.972 Write completed with error (sct=0, sc=8) 00:30:02.972 starting I/O failed 00:30:02.972 Read completed with error (sct=0, sc=8) 00:30:02.972 starting I/O failed 00:30:02.972 Write completed with error (sct=0, sc=8) 00:30:02.972 starting I/O failed 00:30:02.972 [2024-07-14 04:02:21.777476] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:02.972 Read completed with error (sct=0, sc=8) 00:30:02.972 starting I/O failed 00:30:02.972 Read completed with error (sct=0, sc=8) 00:30:02.972 starting I/O failed 00:30:02.972 Read completed with error (sct=0, sc=8) 00:30:02.972 starting I/O failed 00:30:02.972 Read completed with error (sct=0, sc=8) 00:30:02.972 starting I/O failed 00:30:02.972 Read completed with error (sct=0, sc=8) 00:30:02.972 starting I/O failed 00:30:02.972 Read completed with error (sct=0, sc=8) 00:30:02.972 starting I/O failed 00:30:02.972 Read completed with error (sct=0, sc=8) 00:30:02.972 starting I/O failed 00:30:02.972 Read completed with error (sct=0, sc=8) 00:30:02.972 starting I/O failed 00:30:02.972 Read completed with error (sct=0, sc=8) 00:30:02.972 starting I/O failed 00:30:02.972 Read completed with error (sct=0, sc=8) 00:30:02.972 starting I/O failed 00:30:02.972 Read completed with error (sct=0, sc=8) 00:30:02.972 starting I/O failed 00:30:02.972 Read completed with error (sct=0, sc=8) 
00:30:02.972 starting I/O failed 00:30:02.972 Write completed with error (sct=0, sc=8) 00:30:02.972 starting I/O failed 00:30:02.972 Write completed with error (sct=0, sc=8) 00:30:02.972 starting I/O failed 00:30:02.972 Read completed with error (sct=0, sc=8) 00:30:02.972 starting I/O failed 00:30:02.972 Read completed with error (sct=0, sc=8) 00:30:02.972 starting I/O failed 00:30:02.972 Write completed with error (sct=0, sc=8) 00:30:02.972 starting I/O failed 00:30:02.972 Read completed with error (sct=0, sc=8) 00:30:02.972 starting I/O failed 00:30:02.972 Write completed with error (sct=0, sc=8) 00:30:02.972 starting I/O failed 00:30:02.972 Read completed with error (sct=0, sc=8) 00:30:02.972 starting I/O failed 00:30:02.972 Write completed with error (sct=0, sc=8) 00:30:02.972 starting I/O failed 00:30:02.972 Write completed with error (sct=0, sc=8) 00:30:02.972 starting I/O failed 00:30:02.972 Write completed with error (sct=0, sc=8) 00:30:02.972 starting I/O failed 00:30:02.972 Read completed with error (sct=0, sc=8) 00:30:02.972 starting I/O failed 00:30:02.972 Write completed with error (sct=0, sc=8) 00:30:02.972 starting I/O failed 00:30:02.972 Write completed with error (sct=0, sc=8) 00:30:02.972 starting I/O failed 00:30:02.972 Write completed with error (sct=0, sc=8) 00:30:02.972 starting I/O failed 00:30:02.972 Read completed with error (sct=0, sc=8) 00:30:02.972 starting I/O failed 00:30:02.972 Read completed with error (sct=0, sc=8) 00:30:02.972 starting I/O failed 00:30:02.972 Read completed with error (sct=0, sc=8) 00:30:02.972 starting I/O failed 00:30:02.972 Read completed with error (sct=0, sc=8) 00:30:02.972 starting I/O failed 00:30:02.972 Read completed with error (sct=0, sc=8) 00:30:02.972 starting I/O failed 00:30:02.972 [2024-07-14 04:02:21.777798] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.972 Read completed with error (sct=0, sc=8) 00:30:02.972 starting I/O failed 00:30:02.973 Read completed with error (sct=0, sc=8) 00:30:02.973 starting I/O failed 00:30:02.973 Read completed with error (sct=0, sc=8) 00:30:02.973 starting I/O failed 00:30:02.973 Read completed with error (sct=0, sc=8) 00:30:02.973 starting I/O failed 00:30:02.973 Read completed with error (sct=0, sc=8) 00:30:02.973 starting I/O failed 00:30:02.973 Read completed with error (sct=0, sc=8) 00:30:02.973 starting I/O failed 00:30:02.973 Read completed with error (sct=0, sc=8) 00:30:02.973 starting I/O failed 00:30:02.973 Write completed with error (sct=0, sc=8) 00:30:02.973 starting I/O failed 00:30:02.973 Write completed with error (sct=0, sc=8) 00:30:02.973 starting I/O failed 00:30:02.973 Read completed with error (sct=0, sc=8) 00:30:02.973 starting I/O failed 00:30:02.973 Read completed with error (sct=0, sc=8) 00:30:02.973 starting I/O failed 00:30:02.973 Write completed with error (sct=0, sc=8) 00:30:02.973 starting I/O failed 00:30:02.973 Read completed with error (sct=0, sc=8) 00:30:02.973 starting I/O failed 00:30:02.973 Write completed with error (sct=0, sc=8) 00:30:02.973 starting I/O failed 00:30:02.973 Write completed with error (sct=0, sc=8) 00:30:02.973 starting I/O failed 00:30:02.973 Read completed with error (sct=0, sc=8) 00:30:02.973 starting I/O failed 00:30:02.973 Read completed with error (sct=0, sc=8) 00:30:02.973 starting I/O failed 00:30:02.973 Read completed with error (sct=0, sc=8) 00:30:02.973 starting I/O failed 00:30:02.973 Read completed with error (sct=0, sc=8) 00:30:02.973 
starting I/O failed 00:30:02.973 Read completed with error (sct=0, sc=8) 00:30:02.973 starting I/O failed 00:30:02.973 Write completed with error (sct=0, sc=8) 00:30:02.973 starting I/O failed 00:30:02.973 Write completed with error (sct=0, sc=8) 00:30:02.973 starting I/O failed 00:30:02.973 Write completed with error (sct=0, sc=8) 00:30:02.973 starting I/O failed 00:30:02.973 Write completed with error (sct=0, sc=8) 00:30:02.973 starting I/O failed 00:30:02.973 Write completed with error (sct=0, sc=8) 00:30:02.973 starting I/O failed 00:30:02.973 Write completed with error (sct=0, sc=8) 00:30:02.973 starting I/O failed 00:30:02.973 Read completed with error (sct=0, sc=8) 00:30:02.973 starting I/O failed 00:30:02.973 Read completed with error (sct=0, sc=8) 00:30:02.973 starting I/O failed 00:30:02.973 Read completed with error (sct=0, sc=8) 00:30:02.973 starting I/O failed 00:30:02.973 Write completed with error (sct=0, sc=8) 00:30:02.973 starting I/O failed 00:30:02.973 Read completed with error (sct=0, sc=8) 00:30:02.973 starting I/O failed 00:30:02.973 Write completed with error (sct=0, sc=8) 00:30:02.973 starting I/O failed 00:30:02.973 [2024-07-14 04:02:21.778125] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:02.973 [2024-07-14 04:02:21.778332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.973 [2024-07-14 04:02:21.778503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.973 [2024-07-14 04:02:21.778530] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.973 qpair failed and we were unable to recover it. 00:30:02.973 [2024-07-14 04:02:21.778700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.973 [2024-07-14 04:02:21.778894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.973 [2024-07-14 04:02:21.778922] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.973 qpair failed and we were unable to recover it. 00:30:02.973 [2024-07-14 04:02:21.779094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.973 [2024-07-14 04:02:21.779256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.973 [2024-07-14 04:02:21.779284] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.973 qpair failed and we were unable to recover it. 00:30:02.973 [2024-07-14 04:02:21.779480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.973 [2024-07-14 04:02:21.779633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.973 [2024-07-14 04:02:21.779675] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.973 qpair failed and we were unable to recover it. 
00:30:02.973 [2024-07-14 04:02:21.779874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.973 [2024-07-14 04:02:21.780059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.973 [2024-07-14 04:02:21.780085] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.973 qpair failed and we were unable to recover it. 00:30:02.973 [2024-07-14 04:02:21.780260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.973 [2024-07-14 04:02:21.780431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.973 [2024-07-14 04:02:21.780456] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.973 qpair failed and we were unable to recover it. 00:30:02.973 [2024-07-14 04:02:21.780612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.973 [2024-07-14 04:02:21.780824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.973 [2024-07-14 04:02:21.780853] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.973 qpair failed and we were unable to recover it. 00:30:02.973 [2024-07-14 04:02:21.781039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.973 [2024-07-14 04:02:21.781200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.973 [2024-07-14 04:02:21.781226] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.973 qpair failed and we were unable to recover it. 00:30:02.973 [2024-07-14 04:02:21.781410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.973 [2024-07-14 04:02:21.781606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.973 [2024-07-14 04:02:21.781635] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.973 qpair failed and we were unable to recover it. 00:30:02.973 [2024-07-14 04:02:21.781891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.973 [2024-07-14 04:02:21.782075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.973 [2024-07-14 04:02:21.782100] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.973 qpair failed and we were unable to recover it. 00:30:02.973 [2024-07-14 04:02:21.782302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.973 [2024-07-14 04:02:21.782554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.973 [2024-07-14 04:02:21.782582] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.973 qpair failed and we were unable to recover it. 
00:30:02.973 [2024-07-14 04:02:21.782767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.973 [2024-07-14 04:02:21.782974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.973 [2024-07-14 04:02:21.783000] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.973 qpair failed and we were unable to recover it. 00:30:02.973 [2024-07-14 04:02:21.783157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.973 [2024-07-14 04:02:21.783365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.973 [2024-07-14 04:02:21.783390] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.973 qpair failed and we were unable to recover it. 00:30:02.973 [2024-07-14 04:02:21.783596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.973 [2024-07-14 04:02:21.783802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.973 [2024-07-14 04:02:21.783827] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.973 qpair failed and we were unable to recover it. 00:30:02.973 [2024-07-14 04:02:21.783990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.973 [2024-07-14 04:02:21.784146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.973 [2024-07-14 04:02:21.784188] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.973 qpair failed and we were unable to recover it. 00:30:02.973 [2024-07-14 04:02:21.784372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.973 [2024-07-14 04:02:21.784608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.973 [2024-07-14 04:02:21.784637] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.973 qpair failed and we were unable to recover it. 00:30:02.973 [2024-07-14 04:02:21.784862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.973 [2024-07-14 04:02:21.785050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.973 [2024-07-14 04:02:21.785076] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.973 qpair failed and we were unable to recover it. 00:30:02.973 [2024-07-14 04:02:21.785294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.973 [2024-07-14 04:02:21.785470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.973 [2024-07-14 04:02:21.785512] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.973 qpair failed and we were unable to recover it. 
00:30:02.973 [2024-07-14 04:02:21.785717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.973 [2024-07-14 04:02:21.785919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.973 [2024-07-14 04:02:21.785946] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.973 qpair failed and we were unable to recover it. 00:30:02.973 [2024-07-14 04:02:21.786103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.973 [2024-07-14 04:02:21.786287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.973 [2024-07-14 04:02:21.786312] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.973 qpair failed and we were unable to recover it. 00:30:02.973 [2024-07-14 04:02:21.786488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.973 [2024-07-14 04:02:21.786676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.973 [2024-07-14 04:02:21.786702] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.973 qpair failed and we were unable to recover it. 00:30:02.973 [2024-07-14 04:02:21.786936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.973 [2024-07-14 04:02:21.787110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.974 [2024-07-14 04:02:21.787135] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.974 qpair failed and we were unable to recover it. 00:30:02.974 [2024-07-14 04:02:21.787308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.974 [2024-07-14 04:02:21.787531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.974 [2024-07-14 04:02:21.787559] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.974 qpair failed and we were unable to recover it. 00:30:02.974 [2024-07-14 04:02:21.787759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.974 [2024-07-14 04:02:21.787967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.974 [2024-07-14 04:02:21.787993] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.974 qpair failed and we were unable to recover it. 00:30:02.974 [2024-07-14 04:02:21.788175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.974 [2024-07-14 04:02:21.788326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.974 [2024-07-14 04:02:21.788352] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.974 qpair failed and we were unable to recover it. 
00:30:02.974 [2024-07-14 04:02:21.788587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.974 [2024-07-14 04:02:21.788791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.974 [2024-07-14 04:02:21.788816] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.974 qpair failed and we were unable to recover it. 00:30:02.974 [2024-07-14 04:02:21.788992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.974 [2024-07-14 04:02:21.789146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.974 [2024-07-14 04:02:21.789185] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.974 qpair failed and we were unable to recover it. 00:30:02.974 [2024-07-14 04:02:21.789414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.974 [2024-07-14 04:02:21.789644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.974 [2024-07-14 04:02:21.789685] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.974 qpair failed and we were unable to recover it. 00:30:02.974 [2024-07-14 04:02:21.789881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.974 [2024-07-14 04:02:21.790067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.974 [2024-07-14 04:02:21.790095] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.974 qpair failed and we were unable to recover it. 00:30:02.974 [2024-07-14 04:02:21.790276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.974 [2024-07-14 04:02:21.790422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.974 [2024-07-14 04:02:21.790449] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.974 qpair failed and we were unable to recover it. 00:30:02.974 [2024-07-14 04:02:21.790743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.974 [2024-07-14 04:02:21.790973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.974 [2024-07-14 04:02:21.791000] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.974 qpair failed and we were unable to recover it. 00:30:02.974 [2024-07-14 04:02:21.791175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.974 [2024-07-14 04:02:21.791352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.974 [2024-07-14 04:02:21.791394] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.974 qpair failed and we were unable to recover it. 
00:30:02.974 [2024-07-14 04:02:21.791594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.974 [2024-07-14 04:02:21.791822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.974 [2024-07-14 04:02:21.791851] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.974 qpair failed and we were unable to recover it. 00:30:02.974 [2024-07-14 04:02:21.792060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.974 [2024-07-14 04:02:21.792255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.974 [2024-07-14 04:02:21.792280] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.974 qpair failed and we were unable to recover it. 00:30:02.974 [2024-07-14 04:02:21.792433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.974 [2024-07-14 04:02:21.792590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.974 [2024-07-14 04:02:21.792616] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.974 qpair failed and we were unable to recover it. 00:30:02.974 [2024-07-14 04:02:21.792806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.974 [2024-07-14 04:02:21.793017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.974 [2024-07-14 04:02:21.793043] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.974 qpair failed and we were unable to recover it. 00:30:02.974 [2024-07-14 04:02:21.793200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.974 [2024-07-14 04:02:21.793353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.974 [2024-07-14 04:02:21.793378] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.974 qpair failed and we were unable to recover it. 00:30:02.974 [2024-07-14 04:02:21.794093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.974 [2024-07-14 04:02:21.794379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.974 [2024-07-14 04:02:21.794430] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.974 qpair failed and we were unable to recover it. 00:30:02.974 [2024-07-14 04:02:21.794664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.974 [2024-07-14 04:02:21.794845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.974 [2024-07-14 04:02:21.794891] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.974 qpair failed and we were unable to recover it. 
00:30:02.974 [2024-07-14 04:02:21.795055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.974 [2024-07-14 04:02:21.795285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.974 [2024-07-14 04:02:21.795313] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.974 qpair failed and we were unable to recover it. 00:30:02.974 [2024-07-14 04:02:21.795520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.974 [2024-07-14 04:02:21.795683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.974 [2024-07-14 04:02:21.795708] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.974 qpair failed and we were unable to recover it. 00:30:02.974 [2024-07-14 04:02:21.795893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.974 [2024-07-14 04:02:21.796124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.974 [2024-07-14 04:02:21.796159] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.974 qpair failed and we were unable to recover it. 00:30:02.974 [2024-07-14 04:02:21.796353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.974 [2024-07-14 04:02:21.796524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.974 [2024-07-14 04:02:21.796553] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.974 qpair failed and we were unable to recover it. 00:30:02.974 [2024-07-14 04:02:21.796781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.974 [2024-07-14 04:02:21.796973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.974 [2024-07-14 04:02:21.797000] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.974 qpair failed and we were unable to recover it. 00:30:02.974 [2024-07-14 04:02:21.797183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.974 [2024-07-14 04:02:21.797357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.974 [2024-07-14 04:02:21.797383] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.974 qpair failed and we were unable to recover it. 00:30:02.974 [2024-07-14 04:02:21.797615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.974 [2024-07-14 04:02:21.797833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.974 [2024-07-14 04:02:21.797858] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.974 qpair failed and we were unable to recover it. 
00:30:02.974 [2024-07-14 04:02:21.798050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.974 [2024-07-14 04:02:21.798195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.974 [2024-07-14 04:02:21.798220] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.974 qpair failed and we were unable to recover it. 00:30:02.974 [2024-07-14 04:02:21.798396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.974 [2024-07-14 04:02:21.798552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.974 [2024-07-14 04:02:21.798578] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.974 qpair failed and we were unable to recover it. 00:30:02.974 [2024-07-14 04:02:21.798761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.974 [2024-07-14 04:02:21.798944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.974 [2024-07-14 04:02:21.798970] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.974 qpair failed and we were unable to recover it. 00:30:02.974 [2024-07-14 04:02:21.799152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.974 [2024-07-14 04:02:21.799325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.974 [2024-07-14 04:02:21.799350] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.974 qpair failed and we were unable to recover it. 00:30:02.974 [2024-07-14 04:02:21.799533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.974 [2024-07-14 04:02:21.799679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.974 [2024-07-14 04:02:21.799708] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.974 qpair failed and we were unable to recover it. 00:30:02.974 [2024-07-14 04:02:21.799915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.974 [2024-07-14 04:02:21.800141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.974 [2024-07-14 04:02:21.800169] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.975 qpair failed and we were unable to recover it. 00:30:02.975 [2024-07-14 04:02:21.800364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.975 [2024-07-14 04:02:21.800547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.975 [2024-07-14 04:02:21.800573] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.975 qpair failed and we were unable to recover it. 
00:30:02.975 [2024-07-14 04:02:21.800777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.975 [2024-07-14 04:02:21.800960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.975 [2024-07-14 04:02:21.800986] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.975 qpair failed and we were unable to recover it. 00:30:02.975 [2024-07-14 04:02:21.801195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.975 [2024-07-14 04:02:21.801402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.975 [2024-07-14 04:02:21.801427] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.975 qpair failed and we were unable to recover it. 00:30:02.975 [2024-07-14 04:02:21.801578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.975 [2024-07-14 04:02:21.801758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.975 [2024-07-14 04:02:21.801783] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.975 qpair failed and we were unable to recover it. 00:30:02.975 [2024-07-14 04:02:21.801960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.975 [2024-07-14 04:02:21.802141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.975 [2024-07-14 04:02:21.802174] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.975 qpair failed and we were unable to recover it. 00:30:02.975 [2024-07-14 04:02:21.802378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.975 [2024-07-14 04:02:21.802560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.975 [2024-07-14 04:02:21.802585] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.975 qpair failed and we were unable to recover it. 00:30:02.975 [2024-07-14 04:02:21.802793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.975 [2024-07-14 04:02:21.802998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.975 [2024-07-14 04:02:21.803024] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.975 qpair failed and we were unable to recover it. 00:30:02.975 [2024-07-14 04:02:21.803175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.975 [2024-07-14 04:02:21.803353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.975 [2024-07-14 04:02:21.803378] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.975 qpair failed and we were unable to recover it. 
00:30:02.975 [2024-07-14 04:02:21.803590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.975 [2024-07-14 04:02:21.803771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.975 [2024-07-14 04:02:21.803800] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.975 qpair failed and we were unable to recover it. 00:30:02.975 [2024-07-14 04:02:21.804012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.975 [2024-07-14 04:02:21.804184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.975 [2024-07-14 04:02:21.804210] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.975 qpair failed and we were unable to recover it. 00:30:02.975 [2024-07-14 04:02:21.804362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.975 [2024-07-14 04:02:21.804537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.975 [2024-07-14 04:02:21.804562] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.975 qpair failed and we were unable to recover it. 00:30:02.975 [2024-07-14 04:02:21.804770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.975 [2024-07-14 04:02:21.804922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.975 [2024-07-14 04:02:21.804947] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.975 qpair failed and we were unable to recover it. 00:30:02.975 [2024-07-14 04:02:21.805125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.975 [2024-07-14 04:02:21.805332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.975 [2024-07-14 04:02:21.805357] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.975 qpair failed and we were unable to recover it. 00:30:02.975 [2024-07-14 04:02:21.805502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.975 [2024-07-14 04:02:21.805706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.975 [2024-07-14 04:02:21.805731] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.975 qpair failed and we were unable to recover it. 00:30:02.975 [2024-07-14 04:02:21.805969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.975 [2024-07-14 04:02:21.806143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.975 [2024-07-14 04:02:21.806168] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.975 qpair failed and we were unable to recover it. 
00:30:02.975 [2024-07-14 04:02:21.806344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.975 [2024-07-14 04:02:21.806526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.975 [2024-07-14 04:02:21.806556] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.975 qpair failed and we were unable to recover it. 00:30:02.975 [2024-07-14 04:02:21.806752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.975 [2024-07-14 04:02:21.806965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.975 [2024-07-14 04:02:21.806991] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.975 qpair failed and we were unable to recover it. 00:30:02.975 [2024-07-14 04:02:21.807165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.975 [2024-07-14 04:02:21.807373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.975 [2024-07-14 04:02:21.807398] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.975 qpair failed and we were unable to recover it. 00:30:02.975 [2024-07-14 04:02:21.807551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.975 [2024-07-14 04:02:21.807732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.975 [2024-07-14 04:02:21.807761] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.975 qpair failed and we were unable to recover it. 00:30:02.975 [2024-07-14 04:02:21.807915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.975 [2024-07-14 04:02:21.808098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.975 [2024-07-14 04:02:21.808123] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.975 qpair failed and we were unable to recover it. 00:30:02.975 [2024-07-14 04:02:21.808303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.975 [2024-07-14 04:02:21.808482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.975 [2024-07-14 04:02:21.808506] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.975 qpair failed and we were unable to recover it. 00:30:02.975 [2024-07-14 04:02:21.808656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.975 [2024-07-14 04:02:21.808849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.975 [2024-07-14 04:02:21.808894] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.975 qpair failed and we were unable to recover it. 
00:30:02.975 [2024-07-14 04:02:21.809096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.975 [2024-07-14 04:02:21.809257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.975 [2024-07-14 04:02:21.809283] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.975 qpair failed and we were unable to recover it. 00:30:02.975 [2024-07-14 04:02:21.809505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.975 [2024-07-14 04:02:21.809702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.975 [2024-07-14 04:02:21.809726] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.975 qpair failed and we were unable to recover it. 00:30:02.975 [2024-07-14 04:02:21.809907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.975 [2024-07-14 04:02:21.810113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.975 [2024-07-14 04:02:21.810139] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.975 qpair failed and we were unable to recover it. 00:30:02.975 [2024-07-14 04:02:21.810308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.975 [2024-07-14 04:02:21.810457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.975 [2024-07-14 04:02:21.810483] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.975 qpair failed and we were unable to recover it. 00:30:02.975 [2024-07-14 04:02:21.810661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.975 [2024-07-14 04:02:21.810837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.975 [2024-07-14 04:02:21.810879] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.975 qpair failed and we were unable to recover it. 00:30:02.975 [2024-07-14 04:02:21.811057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.975 [2024-07-14 04:02:21.811254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.975 [2024-07-14 04:02:21.811290] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.975 qpair failed and we were unable to recover it. 00:30:02.975 [2024-07-14 04:02:21.811485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.975 [2024-07-14 04:02:21.811667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.975 [2024-07-14 04:02:21.811696] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.975 qpair failed and we were unable to recover it. 
00:30:02.975 [2024-07-14 04:02:21.811914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.975 [2024-07-14 04:02:21.812115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.975 [2024-07-14 04:02:21.812141] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.975 qpair failed and we were unable to recover it. 00:30:02.976 [2024-07-14 04:02:21.812286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.976 [2024-07-14 04:02:21.812465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.976 [2024-07-14 04:02:21.812507] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.976 qpair failed and we were unable to recover it. 00:30:02.976 [2024-07-14 04:02:21.812681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.976 [2024-07-14 04:02:21.812903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.976 [2024-07-14 04:02:21.812931] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.976 qpair failed and we were unable to recover it. 00:30:02.976 [2024-07-14 04:02:21.813126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.976 [2024-07-14 04:02:21.813353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.976 [2024-07-14 04:02:21.813378] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.976 qpair failed and we were unable to recover it. 00:30:02.976 [2024-07-14 04:02:21.813536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.976 [2024-07-14 04:02:21.813711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.976 [2024-07-14 04:02:21.813737] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.976 qpair failed and we were unable to recover it. 00:30:02.976 [2024-07-14 04:02:21.813881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.976 [2024-07-14 04:02:21.814065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.976 [2024-07-14 04:02:21.814091] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.976 qpair failed and we were unable to recover it. 00:30:02.976 [2024-07-14 04:02:21.814262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.976 [2024-07-14 04:02:21.814466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.976 [2024-07-14 04:02:21.814496] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.976 qpair failed and we were unable to recover it. 
00:30:02.976 [2024-07-14 04:02:21.814671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.976 [2024-07-14 04:02:21.814849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.976 [2024-07-14 04:02:21.814887] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.976 qpair failed and we were unable to recover it. 00:30:02.976 [2024-07-14 04:02:21.815468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.976 [2024-07-14 04:02:21.815681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.976 [2024-07-14 04:02:21.815708] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.976 qpair failed and we were unable to recover it. 00:30:02.976 [2024-07-14 04:02:21.815918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.976 [2024-07-14 04:02:21.816162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.976 [2024-07-14 04:02:21.816190] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.976 qpair failed and we were unable to recover it. 00:30:02.976 [2024-07-14 04:02:21.816362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.976 [2024-07-14 04:02:21.816577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.976 [2024-07-14 04:02:21.816629] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.976 qpair failed and we were unable to recover it. 00:30:02.976 [2024-07-14 04:02:21.816864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.976 [2024-07-14 04:02:21.817061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.976 [2024-07-14 04:02:21.817086] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.976 qpair failed and we were unable to recover it. 00:30:02.976 [2024-07-14 04:02:21.817295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.976 [2024-07-14 04:02:21.817530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.976 [2024-07-14 04:02:21.817558] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.976 qpair failed and we were unable to recover it. 00:30:02.976 [2024-07-14 04:02:21.817736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.976 [2024-07-14 04:02:21.817888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.976 [2024-07-14 04:02:21.817914] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.976 qpair failed and we were unable to recover it. 
00:30:02.976 [2024-07-14 04:02:21.818137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.976 [2024-07-14 04:02:21.818313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.976 [2024-07-14 04:02:21.818338] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.976 qpair failed and we were unable to recover it. 00:30:02.976 [2024-07-14 04:02:21.818512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.976 [2024-07-14 04:02:21.818733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.976 [2024-07-14 04:02:21.818760] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.976 qpair failed and we were unable to recover it. 00:30:02.976 [2024-07-14 04:02:21.818928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.976 [2024-07-14 04:02:21.819121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.976 [2024-07-14 04:02:21.819157] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.976 qpair failed and we were unable to recover it. 00:30:02.976 [2024-07-14 04:02:21.819386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.976 [2024-07-14 04:02:21.819570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.976 [2024-07-14 04:02:21.819595] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.976 qpair failed and we were unable to recover it. 00:30:02.976 [2024-07-14 04:02:21.819796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.976 [2024-07-14 04:02:21.819983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.976 [2024-07-14 04:02:21.820012] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.976 qpair failed and we were unable to recover it. 00:30:02.976 [2024-07-14 04:02:21.820203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.976 [2024-07-14 04:02:21.820425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.976 [2024-07-14 04:02:21.820453] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.976 qpair failed and we were unable to recover it. 00:30:02.976 [2024-07-14 04:02:21.820640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.976 [2024-07-14 04:02:21.820840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.976 [2024-07-14 04:02:21.820873] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.976 qpair failed and we were unable to recover it. 
00:30:02.976 [2024-07-14 04:02:21.821076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.976 [2024-07-14 04:02:21.821261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.976 [2024-07-14 04:02:21.821289] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.976 qpair failed and we were unable to recover it. 00:30:02.976 [2024-07-14 04:02:21.821457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.976 [2024-07-14 04:02:21.821654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.976 [2024-07-14 04:02:21.821681] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.976 qpair failed and we were unable to recover it. 00:30:02.976 [2024-07-14 04:02:21.821860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.976 [2024-07-14 04:02:21.822014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.976 [2024-07-14 04:02:21.822039] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.976 qpair failed and we were unable to recover it. 00:30:02.976 [2024-07-14 04:02:21.822217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.976 [2024-07-14 04:02:21.822390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.976 [2024-07-14 04:02:21.822415] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.976 qpair failed and we were unable to recover it. 00:30:02.976 [2024-07-14 04:02:21.822595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.977 [2024-07-14 04:02:21.822781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.977 [2024-07-14 04:02:21.822806] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.977 qpair failed and we were unable to recover it. 00:30:02.977 [2024-07-14 04:02:21.822969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.977 [2024-07-14 04:02:21.823157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.977 [2024-07-14 04:02:21.823184] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.977 qpair failed and we were unable to recover it. 00:30:02.977 [2024-07-14 04:02:21.823386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.977 [2024-07-14 04:02:21.823538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.977 [2024-07-14 04:02:21.823579] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.977 qpair failed and we were unable to recover it. 
00:30:02.977 [2024-07-14 04:02:21.823805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.977 [2024-07-14 04:02:21.824004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.977 [2024-07-14 04:02:21.824032] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.977 qpair failed and we were unable to recover it. 00:30:02.977 [2024-07-14 04:02:21.824207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.977 [2024-07-14 04:02:21.824400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.977 [2024-07-14 04:02:21.824429] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.977 qpair failed and we were unable to recover it. 00:30:02.977 [2024-07-14 04:02:21.824666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.977 [2024-07-14 04:02:21.824858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.977 [2024-07-14 04:02:21.824904] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.977 qpair failed and we were unable to recover it. 00:30:02.977 [2024-07-14 04:02:21.825071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.977 [2024-07-14 04:02:21.825264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.977 [2024-07-14 04:02:21.825312] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.977 qpair failed and we were unable to recover it. 00:30:02.977 [2024-07-14 04:02:21.825481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.977 [2024-07-14 04:02:21.825659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.977 [2024-07-14 04:02:21.825684] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.977 qpair failed and we were unable to recover it. 00:30:02.977 [2024-07-14 04:02:21.825892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.977 [2024-07-14 04:02:21.826067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.977 [2024-07-14 04:02:21.826097] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.977 qpair failed and we were unable to recover it. 00:30:02.977 [2024-07-14 04:02:21.826297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.977 [2024-07-14 04:02:21.826526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.977 [2024-07-14 04:02:21.826554] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.977 qpair failed and we were unable to recover it. 
00:30:02.977 [2024-07-14 04:02:21.826750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.977 [2024-07-14 04:02:21.826957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.977 [2024-07-14 04:02:21.826985] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.977 qpair failed and we were unable to recover it. 00:30:02.977 [2024-07-14 04:02:21.827183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.977 [2024-07-14 04:02:21.827369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.977 [2024-07-14 04:02:21.827397] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.977 qpair failed and we were unable to recover it. 00:30:02.977 [2024-07-14 04:02:21.827598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.977 [2024-07-14 04:02:21.827793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.977 [2024-07-14 04:02:21.827821] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.977 qpair failed and we were unable to recover it. 00:30:02.977 [2024-07-14 04:02:21.828004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.977 [2024-07-14 04:02:21.828202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.977 [2024-07-14 04:02:21.828230] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.977 qpair failed and we were unable to recover it. 00:30:02.977 [2024-07-14 04:02:21.828397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.977 [2024-07-14 04:02:21.828591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.977 [2024-07-14 04:02:21.828619] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.977 qpair failed and we were unable to recover it. 00:30:02.977 [2024-07-14 04:02:21.828794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.977 [2024-07-14 04:02:21.828979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.977 [2024-07-14 04:02:21.829005] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.977 qpair failed and we were unable to recover it. 00:30:02.977 [2024-07-14 04:02:21.829159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.977 [2024-07-14 04:02:21.829358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.977 [2024-07-14 04:02:21.829386] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.977 qpair failed and we were unable to recover it. 
00:30:02.977 [2024-07-14 04:02:21.829581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.977 [2024-07-14 04:02:21.829809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.977 [2024-07-14 04:02:21.829837] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.977 qpair failed and we were unable to recover it. 00:30:02.977 [2024-07-14 04:02:21.830013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.977 [2024-07-14 04:02:21.830212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.977 [2024-07-14 04:02:21.830240] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.977 qpair failed and we were unable to recover it. 00:30:02.977 [2024-07-14 04:02:21.830414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.977 [2024-07-14 04:02:21.830623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.977 [2024-07-14 04:02:21.830648] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.977 qpair failed and we were unable to recover it. 00:30:02.977 [2024-07-14 04:02:21.830850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.977 [2024-07-14 04:02:21.831029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.977 [2024-07-14 04:02:21.831057] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.977 qpair failed and we were unable to recover it. 00:30:02.977 [2024-07-14 04:02:21.831258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.977 [2024-07-14 04:02:21.831496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.977 [2024-07-14 04:02:21.831521] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.977 qpair failed and we were unable to recover it. 00:30:02.977 [2024-07-14 04:02:21.831706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.977 [2024-07-14 04:02:21.831892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.977 [2024-07-14 04:02:21.831917] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.977 qpair failed and we were unable to recover it. 00:30:02.977 [2024-07-14 04:02:21.832123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.977 [2024-07-14 04:02:21.832321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.977 [2024-07-14 04:02:21.832346] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.977 qpair failed and we were unable to recover it. 
00:30:02.977 [2024-07-14 04:02:21.832502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.977 [2024-07-14 04:02:21.832680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.977 [2024-07-14 04:02:21.832704] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.977 qpair failed and we were unable to recover it. 00:30:02.977 [2024-07-14 04:02:21.832977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.977 [2024-07-14 04:02:21.833125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.977 [2024-07-14 04:02:21.833166] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.977 qpair failed and we were unable to recover it. 00:30:02.977 [2024-07-14 04:02:21.833370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.977 [2024-07-14 04:02:21.833591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.977 [2024-07-14 04:02:21.833619] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.977 qpair failed and we were unable to recover it. 00:30:02.977 [2024-07-14 04:02:21.833817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.977 [2024-07-14 04:02:21.833999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.977 [2024-07-14 04:02:21.834030] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.977 qpair failed and we were unable to recover it. 00:30:02.977 [2024-07-14 04:02:21.834210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.977 [2024-07-14 04:02:21.834359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.977 [2024-07-14 04:02:21.834384] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.977 qpair failed and we were unable to recover it. 00:30:02.977 [2024-07-14 04:02:21.834591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.977 [2024-07-14 04:02:21.834780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.977 [2024-07-14 04:02:21.834807] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.977 qpair failed and we were unable to recover it. 00:30:02.977 [2024-07-14 04:02:21.835019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.977 [2024-07-14 04:02:21.835192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.978 [2024-07-14 04:02:21.835220] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.978 qpair failed and we were unable to recover it. 
00:30:02.978 [2024-07-14 04:02:21.835419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.978 [2024-07-14 04:02:21.835616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.978 [2024-07-14 04:02:21.835644] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.978 qpair failed and we were unable to recover it. 00:30:02.978 [2024-07-14 04:02:21.835843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.978 [2024-07-14 04:02:21.836062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.978 [2024-07-14 04:02:21.836090] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.978 qpair failed and we were unable to recover it. 00:30:02.978 [2024-07-14 04:02:21.836310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.978 [2024-07-14 04:02:21.836503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.978 [2024-07-14 04:02:21.836528] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.978 qpair failed and we were unable to recover it. 00:30:02.978 [2024-07-14 04:02:21.836705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.978 [2024-07-14 04:02:21.836904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.978 [2024-07-14 04:02:21.836934] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.978 qpair failed and we were unable to recover it. 00:30:02.978 [2024-07-14 04:02:21.837167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.978 [2024-07-14 04:02:21.837346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.978 [2024-07-14 04:02:21.837375] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.978 qpair failed and we were unable to recover it. 00:30:02.978 [2024-07-14 04:02:21.837545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.978 [2024-07-14 04:02:21.837741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.978 [2024-07-14 04:02:21.837768] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.978 qpair failed and we were unable to recover it. 00:30:02.978 [2024-07-14 04:02:21.837982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.978 [2024-07-14 04:02:21.838133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.978 [2024-07-14 04:02:21.838158] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.978 qpair failed and we were unable to recover it. 
00:30:02.978 [2024-07-14 04:02:21.838335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.978 [2024-07-14 04:02:21.838532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.978 [2024-07-14 04:02:21.838559] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.978 qpair failed and we were unable to recover it. 00:30:02.978 [2024-07-14 04:02:21.838781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.978 [2024-07-14 04:02:21.838981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.978 [2024-07-14 04:02:21.839009] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.978 qpair failed and we were unable to recover it. 00:30:02.978 [2024-07-14 04:02:21.839215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.978 [2024-07-14 04:02:21.839406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.978 [2024-07-14 04:02:21.839433] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.978 qpair failed and we were unable to recover it. 00:30:02.978 [2024-07-14 04:02:21.839657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.978 [2024-07-14 04:02:21.839854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.978 [2024-07-14 04:02:21.839890] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.978 qpair failed and we were unable to recover it. 00:30:02.978 [2024-07-14 04:02:21.840069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.978 [2024-07-14 04:02:21.840232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.978 [2024-07-14 04:02:21.840257] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.978 qpair failed and we were unable to recover it. 00:30:02.978 [2024-07-14 04:02:21.840435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.978 [2024-07-14 04:02:21.840603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.978 [2024-07-14 04:02:21.840628] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.978 qpair failed and we were unable to recover it. 00:30:02.978 [2024-07-14 04:02:21.840797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.978 [2024-07-14 04:02:21.840980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.978 [2024-07-14 04:02:21.841009] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.978 qpair failed and we were unable to recover it. 
00:30:02.978 [2024-07-14 04:02:21.841188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.978 [2024-07-14 04:02:21.841353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.978 [2024-07-14 04:02:21.841381] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.978 qpair failed and we were unable to recover it. 00:30:02.978 [2024-07-14 04:02:21.841574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.978 [2024-07-14 04:02:21.841729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.978 [2024-07-14 04:02:21.841755] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.978 qpair failed and we were unable to recover it. 00:30:02.978 [2024-07-14 04:02:21.841905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.978 [2024-07-14 04:02:21.842101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.978 [2024-07-14 04:02:21.842129] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.978 qpair failed and we were unable to recover it. 00:30:02.978 [2024-07-14 04:02:21.842304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.978 [2024-07-14 04:02:21.842474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.978 [2024-07-14 04:02:21.842515] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.978 qpair failed and we were unable to recover it. 00:30:02.978 [2024-07-14 04:02:21.842700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.978 [2024-07-14 04:02:21.842942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.978 [2024-07-14 04:02:21.842968] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.978 qpair failed and we were unable to recover it. 00:30:02.978 [2024-07-14 04:02:21.843119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.978 [2024-07-14 04:02:21.843372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.978 [2024-07-14 04:02:21.843400] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.978 qpair failed and we were unable to recover it. 00:30:02.978 [2024-07-14 04:02:21.843625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.978 [2024-07-14 04:02:21.843861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.978 [2024-07-14 04:02:21.843894] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.978 qpair failed and we were unable to recover it. 
00:30:02.978 [2024-07-14 04:02:21.844079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.978 [2024-07-14 04:02:21.844233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.978 [2024-07-14 04:02:21.844274] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.978 qpair failed and we were unable to recover it. 00:30:02.978 [2024-07-14 04:02:21.844471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.978 [2024-07-14 04:02:21.844663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.978 [2024-07-14 04:02:21.844691] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.978 qpair failed and we were unable to recover it. 00:30:02.978 [2024-07-14 04:02:21.844891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.978 [2024-07-14 04:02:21.845043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.978 [2024-07-14 04:02:21.845068] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.978 qpair failed and we were unable to recover it. 00:30:02.978 [2024-07-14 04:02:21.845255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.978 [2024-07-14 04:02:21.845453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.978 [2024-07-14 04:02:21.845481] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.978 qpair failed and we were unable to recover it. 00:30:02.978 [2024-07-14 04:02:21.845647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.978 [2024-07-14 04:02:21.845874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.978 [2024-07-14 04:02:21.845900] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.978 qpair failed and we were unable to recover it. 00:30:02.978 [2024-07-14 04:02:21.846101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.978 [2024-07-14 04:02:21.846375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.978 [2024-07-14 04:02:21.846422] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.978 qpair failed and we were unable to recover it. 00:30:02.978 [2024-07-14 04:02:21.846598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.978 [2024-07-14 04:02:21.846782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.978 [2024-07-14 04:02:21.846810] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.978 qpair failed and we were unable to recover it. 
00:30:02.978 [2024-07-14 04:02:21.846994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.978 [2024-07-14 04:02:21.847162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.978 [2024-07-14 04:02:21.847191] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.978 qpair failed and we were unable to recover it. 00:30:02.978 [2024-07-14 04:02:21.847422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.978 [2024-07-14 04:02:21.847628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.978 [2024-07-14 04:02:21.847653] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.978 qpair failed and we were unable to recover it. 00:30:02.979 [2024-07-14 04:02:21.847832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.979 [2024-07-14 04:02:21.848066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.979 [2024-07-14 04:02:21.848095] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.979 qpair failed and we were unable to recover it. 00:30:02.979 [2024-07-14 04:02:21.848327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.979 [2024-07-14 04:02:21.848552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.979 [2024-07-14 04:02:21.848580] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.979 qpair failed and we were unable to recover it. 00:30:02.979 [2024-07-14 04:02:21.848747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.979 [2024-07-14 04:02:21.848992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.979 [2024-07-14 04:02:21.849018] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.979 qpair failed and we were unable to recover it. 00:30:02.979 [2024-07-14 04:02:21.849230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.979 [2024-07-14 04:02:21.849541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.979 [2024-07-14 04:02:21.849595] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.979 qpair failed and we were unable to recover it. 00:30:02.979 [2024-07-14 04:02:21.849794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.979 [2024-07-14 04:02:21.849993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.979 [2024-07-14 04:02:21.850020] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.979 qpair failed and we were unable to recover it. 
00:30:02.979 [2024-07-14 04:02:21.850229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.979 [2024-07-14 04:02:21.850441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.979 [2024-07-14 04:02:21.850466] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.979 qpair failed and we were unable to recover it. 00:30:02.979 [2024-07-14 04:02:21.850671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.979 [2024-07-14 04:02:21.850828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.979 [2024-07-14 04:02:21.850854] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.979 qpair failed and we were unable to recover it. 00:30:02.979 [2024-07-14 04:02:21.851032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.979 [2024-07-14 04:02:21.851228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.979 [2024-07-14 04:02:21.851258] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.979 qpair failed and we were unable to recover it. 00:30:02.979 [2024-07-14 04:02:21.851473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.979 [2024-07-14 04:02:21.851670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.979 [2024-07-14 04:02:21.851698] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.979 qpair failed and we were unable to recover it. 00:30:02.979 [2024-07-14 04:02:21.851942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.979 [2024-07-14 04:02:21.852129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.979 [2024-07-14 04:02:21.852175] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.979 qpair failed and we were unable to recover it. 00:30:02.979 [2024-07-14 04:02:21.852347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.979 [2024-07-14 04:02:21.852515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.979 [2024-07-14 04:02:21.852544] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.979 qpair failed and we were unable to recover it. 00:30:02.979 [2024-07-14 04:02:21.852755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.979 [2024-07-14 04:02:21.852926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.979 [2024-07-14 04:02:21.852953] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.979 qpair failed and we were unable to recover it. 
00:30:02.979 [2024-07-14 04:02:21.853135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.979 [2024-07-14 04:02:21.853362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.979 [2024-07-14 04:02:21.853390] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.979 qpair failed and we were unable to recover it. 00:30:02.979 [2024-07-14 04:02:21.853623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.979 [2024-07-14 04:02:21.853806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.979 [2024-07-14 04:02:21.853832] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.979 qpair failed and we were unable to recover it. 00:30:02.979 [2024-07-14 04:02:21.854025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.979 [2024-07-14 04:02:21.854251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.979 [2024-07-14 04:02:21.854277] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.979 qpair failed and we were unable to recover it. 00:30:02.979 [2024-07-14 04:02:21.854460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.979 [2024-07-14 04:02:21.854614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.979 [2024-07-14 04:02:21.854640] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.979 qpair failed and we were unable to recover it. 00:30:02.979 [2024-07-14 04:02:21.854871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.979 [2024-07-14 04:02:21.855072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.979 [2024-07-14 04:02:21.855098] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.979 qpair failed and we were unable to recover it. 00:30:02.979 [2024-07-14 04:02:21.855264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.979 [2024-07-14 04:02:21.855435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.979 [2024-07-14 04:02:21.855461] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.979 qpair failed and we were unable to recover it. 00:30:02.979 [2024-07-14 04:02:21.855646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.979 [2024-07-14 04:02:21.855814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.979 [2024-07-14 04:02:21.855842] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.979 qpair failed and we were unable to recover it. 
00:30:02.979 [2024-07-14 04:02:21.856055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.979 [2024-07-14 04:02:21.856207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.979 [2024-07-14 04:02:21.856232] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.979 qpair failed and we were unable to recover it. 00:30:02.979 [2024-07-14 04:02:21.856426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.979 [2024-07-14 04:02:21.856600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.979 [2024-07-14 04:02:21.856642] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.979 qpair failed and we were unable to recover it. 00:30:02.979 [2024-07-14 04:02:21.856835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.979 [2024-07-14 04:02:21.857042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.979 [2024-07-14 04:02:21.857072] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.979 qpair failed and we were unable to recover it. 00:30:02.979 [2024-07-14 04:02:21.857272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.979 [2024-07-14 04:02:21.857459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.979 [2024-07-14 04:02:21.857487] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.979 qpair failed and we were unable to recover it. 00:30:02.979 [2024-07-14 04:02:21.857681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.979 [2024-07-14 04:02:21.857843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.979 [2024-07-14 04:02:21.857888] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.979 qpair failed and we were unable to recover it. 00:30:02.979 [2024-07-14 04:02:21.858068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.979 [2024-07-14 04:02:21.858275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.979 [2024-07-14 04:02:21.858305] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.979 qpair failed and we were unable to recover it. 00:30:02.979 [2024-07-14 04:02:21.858531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.979 [2024-07-14 04:02:21.858699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.979 [2024-07-14 04:02:21.858729] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.979 qpair failed and we were unable to recover it. 
00:30:02.979 [2024-07-14 04:02:21.858924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.979 [2024-07-14 04:02:21.859095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.979 [2024-07-14 04:02:21.859124] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.979 qpair failed and we were unable to recover it. 00:30:02.979 [2024-07-14 04:02:21.859345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.979 [2024-07-14 04:02:21.859570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.979 [2024-07-14 04:02:21.859599] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.979 qpair failed and we were unable to recover it. 00:30:02.979 [2024-07-14 04:02:21.859802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.979 [2024-07-14 04:02:21.859967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.979 [2024-07-14 04:02:21.860013] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.979 qpair failed and we were unable to recover it. 00:30:02.979 [2024-07-14 04:02:21.860204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.979 [2024-07-14 04:02:21.860426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.979 [2024-07-14 04:02:21.860454] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.979 qpair failed and we were unable to recover it. 00:30:02.979 [2024-07-14 04:02:21.860628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.980 [2024-07-14 04:02:21.860781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.980 [2024-07-14 04:02:21.860806] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.980 qpair failed and we were unable to recover it. 00:30:02.980 [2024-07-14 04:02:21.861006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.980 [2024-07-14 04:02:21.861192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.980 [2024-07-14 04:02:21.861217] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.980 qpair failed and we were unable to recover it. 00:30:02.980 [2024-07-14 04:02:21.861404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.980 [2024-07-14 04:02:21.861607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.980 [2024-07-14 04:02:21.861638] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.980 qpair failed and we were unable to recover it. 
00:30:02.980 [2024-07-14 04:02:21.861839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.980 [2024-07-14 04:02:21.862049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.980 [2024-07-14 04:02:21.862078] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.980 qpair failed and we were unable to recover it. 00:30:02.980 [2024-07-14 04:02:21.862293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.980 [2024-07-14 04:02:21.862444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.980 [2024-07-14 04:02:21.862475] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.980 qpair failed and we were unable to recover it. 00:30:02.980 [2024-07-14 04:02:21.862682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.980 [2024-07-14 04:02:21.862855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.980 [2024-07-14 04:02:21.862890] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.980 qpair failed and we were unable to recover it. 00:30:02.980 [2024-07-14 04:02:21.863070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.980 [2024-07-14 04:02:21.863224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.980 [2024-07-14 04:02:21.863267] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.980 qpair failed and we were unable to recover it. 00:30:02.980 [2024-07-14 04:02:21.863462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.980 [2024-07-14 04:02:21.863660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.980 [2024-07-14 04:02:21.863687] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.980 qpair failed and we were unable to recover it. 00:30:02.980 [2024-07-14 04:02:21.863864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.980 [2024-07-14 04:02:21.864014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.980 [2024-07-14 04:02:21.864040] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.980 qpair failed and we were unable to recover it. 00:30:02.980 [2024-07-14 04:02:21.864219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.980 [2024-07-14 04:02:21.864372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.980 [2024-07-14 04:02:21.864417] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.980 qpair failed and we were unable to recover it. 
00:30:02.980 [2024-07-14 04:02:21.864582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.980 [2024-07-14 04:02:21.864787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.980 [2024-07-14 04:02:21.864812] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.980 qpair failed and we were unable to recover it. 00:30:02.980 [2024-07-14 04:02:21.864979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.980 [2024-07-14 04:02:21.865163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.980 [2024-07-14 04:02:21.865190] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.980 qpair failed and we were unable to recover it. 00:30:02.980 [2024-07-14 04:02:21.865371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.980 [2024-07-14 04:02:21.865569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.980 [2024-07-14 04:02:21.865597] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.980 qpair failed and we were unable to recover it. 00:30:02.980 [2024-07-14 04:02:21.865760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.980 [2024-07-14 04:02:21.865956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.980 [2024-07-14 04:02:21.865986] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.980 qpair failed and we were unable to recover it. 00:30:02.980 [2024-07-14 04:02:21.866154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.980 [2024-07-14 04:02:21.866325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.980 [2024-07-14 04:02:21.866361] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.980 qpair failed and we were unable to recover it. 00:30:02.980 [2024-07-14 04:02:21.866566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.980 [2024-07-14 04:02:21.866792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.980 [2024-07-14 04:02:21.866821] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.980 qpair failed and we were unable to recover it. 00:30:02.980 [2024-07-14 04:02:21.866995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.980 [2024-07-14 04:02:21.867223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.980 [2024-07-14 04:02:21.867248] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.980 qpair failed and we were unable to recover it. 
00:30:02.980 [2024-07-14 04:02:21.867424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.980 [2024-07-14 04:02:21.867625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.980 [2024-07-14 04:02:21.867653] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.980 qpair failed and we were unable to recover it. 00:30:02.980 [2024-07-14 04:02:21.867883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.980 [2024-07-14 04:02:21.868127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.980 [2024-07-14 04:02:21.868153] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.980 qpair failed and we were unable to recover it. 00:30:02.980 [2024-07-14 04:02:21.868313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.980 [2024-07-14 04:02:21.868520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.980 [2024-07-14 04:02:21.868549] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.980 qpair failed and we were unable to recover it. 00:30:02.980 [2024-07-14 04:02:21.868748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.980 [2024-07-14 04:02:21.868914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.980 [2024-07-14 04:02:21.868942] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.980 qpair failed and we were unable to recover it. 00:30:02.980 [2024-07-14 04:02:21.869135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.980 [2024-07-14 04:02:21.869340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.980 [2024-07-14 04:02:21.869368] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.980 qpair failed and we were unable to recover it. 00:30:02.980 [2024-07-14 04:02:21.869569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.980 [2024-07-14 04:02:21.869766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.980 [2024-07-14 04:02:21.869795] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.980 qpair failed and we were unable to recover it. 00:30:02.980 [2024-07-14 04:02:21.869972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.980 [2024-07-14 04:02:21.870170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.980 [2024-07-14 04:02:21.870199] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.980 qpair failed and we were unable to recover it. 
00:30:02.980 [2024-07-14 04:02:21.870431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.980 [2024-07-14 04:02:21.870635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.980 [2024-07-14 04:02:21.870667] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.980 qpair failed and we were unable to recover it. 00:30:02.980 [2024-07-14 04:02:21.870870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.980 [2024-07-14 04:02:21.871045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.980 [2024-07-14 04:02:21.871070] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.980 qpair failed and we were unable to recover it. 00:30:02.980 [2024-07-14 04:02:21.871275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.980 [2024-07-14 04:02:21.871470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.980 [2024-07-14 04:02:21.871498] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.980 qpair failed and we were unable to recover it. 00:30:02.980 [2024-07-14 04:02:21.871703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.980 [2024-07-14 04:02:21.871882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.981 [2024-07-14 04:02:21.871908] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.981 qpair failed and we were unable to recover it. 00:30:02.981 [2024-07-14 04:02:21.872077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.981 [2024-07-14 04:02:21.872246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.981 [2024-07-14 04:02:21.872274] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.981 qpair failed and we were unable to recover it. 00:30:02.981 [2024-07-14 04:02:21.872472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.981 [2024-07-14 04:02:21.872665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.981 [2024-07-14 04:02:21.872692] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.981 qpair failed and we were unable to recover it. 00:30:02.981 [2024-07-14 04:02:21.872863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.981 [2024-07-14 04:02:21.873019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.981 [2024-07-14 04:02:21.873061] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.981 qpair failed and we were unable to recover it. 
00:30:02.981 [2024-07-14 04:02:21.873230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.981 [2024-07-14 04:02:21.873420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.981 [2024-07-14 04:02:21.873448] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.981 qpair failed and we were unable to recover it. 00:30:02.981 [2024-07-14 04:02:21.873617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.981 [2024-07-14 04:02:21.873822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.981 [2024-07-14 04:02:21.873850] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.981 qpair failed and we were unable to recover it. 00:30:02.981 [2024-07-14 04:02:21.874046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.981 [2024-07-14 04:02:21.874259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.981 [2024-07-14 04:02:21.874307] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.981 qpair failed and we were unable to recover it. 00:30:02.981 [2024-07-14 04:02:21.874510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.981 [2024-07-14 04:02:21.874707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.981 [2024-07-14 04:02:21.874739] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.981 qpair failed and we were unable to recover it. 00:30:02.981 [2024-07-14 04:02:21.874930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.981 [2024-07-14 04:02:21.875086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.981 [2024-07-14 04:02:21.875128] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.981 qpair failed and we were unable to recover it. 00:30:02.981 [2024-07-14 04:02:21.875321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.981 [2024-07-14 04:02:21.875486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.981 [2024-07-14 04:02:21.875527] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.981 qpair failed and we were unable to recover it. 00:30:02.981 [2024-07-14 04:02:21.875726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.981 [2024-07-14 04:02:21.875900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.981 [2024-07-14 04:02:21.875944] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.981 qpair failed and we were unable to recover it. 
00:30:02.981 [2024-07-14 04:02:21.876137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.981 [2024-07-14 04:02:21.876335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.981 [2024-07-14 04:02:21.876363] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.981 qpair failed and we were unable to recover it. 00:30:02.981 [2024-07-14 04:02:21.876569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.981 [2024-07-14 04:02:21.876730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.981 [2024-07-14 04:02:21.876757] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.981 qpair failed and we were unable to recover it. 00:30:02.981 [2024-07-14 04:02:21.876995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.981 [2024-07-14 04:02:21.877172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.981 [2024-07-14 04:02:21.877212] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.981 qpair failed and we were unable to recover it. 00:30:02.981 [2024-07-14 04:02:21.877395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.981 [2024-07-14 04:02:21.877578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.981 [2024-07-14 04:02:21.877608] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.981 qpair failed and we were unable to recover it. 00:30:02.981 [2024-07-14 04:02:21.877833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.981 [2024-07-14 04:02:21.878036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.982 [2024-07-14 04:02:21.878065] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.982 qpair failed and we were unable to recover it. 00:30:02.982 [2024-07-14 04:02:21.878271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.982 [2024-07-14 04:02:21.878482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.982 [2024-07-14 04:02:21.878506] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.982 qpair failed and we were unable to recover it. 00:30:02.982 [2024-07-14 04:02:21.878649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.982 [2024-07-14 04:02:21.878809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.982 [2024-07-14 04:02:21.878850] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.982 qpair failed and we were unable to recover it. 
00:30:02.982 [2024-07-14 04:02:21.879055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.982 [2024-07-14 04:02:21.879282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.982 [2024-07-14 04:02:21.879311] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.982 qpair failed and we were unable to recover it. 00:30:02.982 [2024-07-14 04:02:21.879509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.982 [2024-07-14 04:02:21.879666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.982 [2024-07-14 04:02:21.879694] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.982 qpair failed and we were unable to recover it. 00:30:02.982 [2024-07-14 04:02:21.879904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.982 [2024-07-14 04:02:21.880109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.982 [2024-07-14 04:02:21.880134] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.982 qpair failed and we were unable to recover it. 00:30:02.982 [2024-07-14 04:02:21.880344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.982 [2024-07-14 04:02:21.880498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.982 [2024-07-14 04:02:21.880523] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.982 qpair failed and we were unable to recover it. 00:30:02.982 [2024-07-14 04:02:21.880703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.982 [2024-07-14 04:02:21.880854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.982 [2024-07-14 04:02:21.880885] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.982 qpair failed and we were unable to recover it. 00:30:02.982 [2024-07-14 04:02:21.881055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.982 [2024-07-14 04:02:21.881225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.982 [2024-07-14 04:02:21.881253] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.982 qpair failed and we were unable to recover it. 00:30:02.982 [2024-07-14 04:02:21.881472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.982 [2024-07-14 04:02:21.881656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.982 [2024-07-14 04:02:21.881681] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.982 qpair failed and we were unable to recover it. 
00:30:02.982 [2024-07-14 04:02:21.881981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.982 [2024-07-14 04:02:21.882175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.982 [2024-07-14 04:02:21.882205] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.982 qpair failed and we were unable to recover it. 00:30:02.982 [2024-07-14 04:02:21.882406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.982 [2024-07-14 04:02:21.882661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.982 [2024-07-14 04:02:21.882701] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.982 qpair failed and we were unable to recover it. 00:30:02.982 [2024-07-14 04:02:21.882908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.982 [2024-07-14 04:02:21.883127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.982 [2024-07-14 04:02:21.883155] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.982 qpair failed and we were unable to recover it. 00:30:02.982 [2024-07-14 04:02:21.883388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.982 [2024-07-14 04:02:21.883605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.982 [2024-07-14 04:02:21.883633] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.982 qpair failed and we were unable to recover it. 00:30:02.982 [2024-07-14 04:02:21.883817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.982 [2024-07-14 04:02:21.884044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.982 [2024-07-14 04:02:21.884071] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.982 qpair failed and we were unable to recover it. 00:30:02.982 [2024-07-14 04:02:21.884285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.982 [2024-07-14 04:02:21.884542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.982 [2024-07-14 04:02:21.884590] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.982 qpair failed and we were unable to recover it. 00:30:02.982 [2024-07-14 04:02:21.884826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.982 [2024-07-14 04:02:21.884985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.982 [2024-07-14 04:02:21.885010] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.982 qpair failed and we were unable to recover it. 
00:30:02.982 [2024-07-14 04:02:21.885217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.982 [2024-07-14 04:02:21.885410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.982 [2024-07-14 04:02:21.885440] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.982 qpair failed and we were unable to recover it. 00:30:02.982 [2024-07-14 04:02:21.885661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.982 [2024-07-14 04:02:21.885904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.982 [2024-07-14 04:02:21.885948] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.982 qpair failed and we were unable to recover it. 00:30:02.982 [2024-07-14 04:02:21.886133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.982 [2024-07-14 04:02:21.886350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.982 [2024-07-14 04:02:21.886376] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.982 qpair failed and we were unable to recover it. 00:30:02.982 [2024-07-14 04:02:21.886620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.982 [2024-07-14 04:02:21.886809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.982 [2024-07-14 04:02:21.886836] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.982 qpair failed and we were unable to recover it. 00:30:02.982 [2024-07-14 04:02:21.887041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.982 [2024-07-14 04:02:21.887255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.982 [2024-07-14 04:02:21.887301] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.982 qpair failed and we were unable to recover it. 00:30:02.982 [2024-07-14 04:02:21.887543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.982 [2024-07-14 04:02:21.887726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.982 [2024-07-14 04:02:21.887754] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.982 qpair failed and we were unable to recover it. 00:30:02.982 [2024-07-14 04:02:21.887980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.982 [2024-07-14 04:02:21.888209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.982 [2024-07-14 04:02:21.888236] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.982 qpair failed and we were unable to recover it. 
00:30:02.982 [2024-07-14 04:02:21.888438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.982 [2024-07-14 04:02:21.888656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.982 [2024-07-14 04:02:21.888704] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.982 qpair failed and we were unable to recover it. 00:30:02.982 [2024-07-14 04:02:21.888930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.982 [2024-07-14 04:02:21.889123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.982 [2024-07-14 04:02:21.889151] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.982 qpair failed and we were unable to recover it. 00:30:02.982 [2024-07-14 04:02:21.889342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.982 [2024-07-14 04:02:21.889538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.982 [2024-07-14 04:02:21.889565] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.982 qpair failed and we were unable to recover it. 00:30:02.982 [2024-07-14 04:02:21.889738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.982 [2024-07-14 04:02:21.889922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.982 [2024-07-14 04:02:21.889949] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.982 qpair failed and we were unable to recover it. 00:30:02.982 [2024-07-14 04:02:21.890128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.982 [2024-07-14 04:02:21.890385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.982 [2024-07-14 04:02:21.890414] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.982 qpair failed and we were unable to recover it. 00:30:02.982 [2024-07-14 04:02:21.890619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.982 [2024-07-14 04:02:21.890811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.982 [2024-07-14 04:02:21.890841] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.982 qpair failed and we were unable to recover it. 00:30:02.982 [2024-07-14 04:02:21.891044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.982 [2024-07-14 04:02:21.891307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.983 [2024-07-14 04:02:21.891356] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.983 qpair failed and we were unable to recover it. 
00:30:02.983 [2024-07-14 04:02:21.891583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.983 [2024-07-14 04:02:21.891747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.983 [2024-07-14 04:02:21.891775] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.983 qpair failed and we were unable to recover it. 00:30:02.983 [2024-07-14 04:02:21.891997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.983 [2024-07-14 04:02:21.892178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.983 [2024-07-14 04:02:21.892203] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.983 qpair failed and we were unable to recover it. 00:30:02.983 [2024-07-14 04:02:21.892457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.983 [2024-07-14 04:02:21.892706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.983 [2024-07-14 04:02:21.892755] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.983 qpair failed and we were unable to recover it. 00:30:02.983 [2024-07-14 04:02:21.892956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.983 [2024-07-14 04:02:21.893176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.983 [2024-07-14 04:02:21.893204] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.983 qpair failed and we were unable to recover it. 00:30:02.983 [2024-07-14 04:02:21.893437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.983 [2024-07-14 04:02:21.893722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.983 [2024-07-14 04:02:21.893773] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.983 qpair failed and we were unable to recover it. 00:30:02.983 [2024-07-14 04:02:21.894004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.983 [2024-07-14 04:02:21.894152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.983 [2024-07-14 04:02:21.894178] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.983 qpair failed and we were unable to recover it. 00:30:02.983 [2024-07-14 04:02:21.894330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.983 [2024-07-14 04:02:21.894534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.983 [2024-07-14 04:02:21.894562] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.983 qpair failed and we were unable to recover it. 
00:30:02.983 [2024-07-14 04:02:21.894753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.983 [2024-07-14 04:02:21.894939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.983 [2024-07-14 04:02:21.894968] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.983 qpair failed and we were unable to recover it. 00:30:02.983 [2024-07-14 04:02:21.895193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.983 [2024-07-14 04:02:21.895390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.983 [2024-07-14 04:02:21.895417] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.983 qpair failed and we were unable to recover it. 00:30:02.983 [2024-07-14 04:02:21.895641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.983 [2024-07-14 04:02:21.895803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.983 [2024-07-14 04:02:21.895833] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.983 qpair failed and we were unable to recover it. 00:30:02.983 [2024-07-14 04:02:21.896011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.983 [2024-07-14 04:02:21.896207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.983 [2024-07-14 04:02:21.896235] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.983 qpair failed and we were unable to recover it. 00:30:02.983 [2024-07-14 04:02:21.896444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.983 [2024-07-14 04:02:21.896644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.983 [2024-07-14 04:02:21.896672] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.983 qpair failed and we were unable to recover it. 00:30:02.983 [2024-07-14 04:02:21.896876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.983 [2024-07-14 04:02:21.897074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.983 [2024-07-14 04:02:21.897102] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.983 qpair failed and we were unable to recover it. 00:30:02.983 [2024-07-14 04:02:21.897270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.983 [2024-07-14 04:02:21.897429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.983 [2024-07-14 04:02:21.897459] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.983 qpair failed and we were unable to recover it. 
00:30:02.983 [2024-07-14 04:02:21.897650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.983 [2024-07-14 04:02:21.897845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.983 [2024-07-14 04:02:21.897888] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.983 qpair failed and we were unable to recover it. 00:30:02.983 [2024-07-14 04:02:21.898091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.983 [2024-07-14 04:02:21.898290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.983 [2024-07-14 04:02:21.898319] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.983 qpair failed and we were unable to recover it. 00:30:02.983 [2024-07-14 04:02:21.898513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.983 [2024-07-14 04:02:21.898742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.983 [2024-07-14 04:02:21.898768] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.983 qpair failed and we were unable to recover it. 00:30:02.983 [2024-07-14 04:02:21.898946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.983 [2024-07-14 04:02:21.899115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.983 [2024-07-14 04:02:21.899144] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.983 qpair failed and we were unable to recover it. 00:30:02.983 [2024-07-14 04:02:21.899348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.983 [2024-07-14 04:02:21.899545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.983 [2024-07-14 04:02:21.899575] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.983 qpair failed and we were unable to recover it. 00:30:02.983 [2024-07-14 04:02:21.899795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.983 [2024-07-14 04:02:21.899986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.983 [2024-07-14 04:02:21.900016] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.983 qpair failed and we were unable to recover it. 00:30:02.983 [2024-07-14 04:02:21.900220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.983 [2024-07-14 04:02:21.900391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.983 [2024-07-14 04:02:21.900419] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.983 qpair failed and we were unable to recover it. 
00:30:02.983 [2024-07-14 04:02:21.900586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.983 [2024-07-14 04:02:21.900778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.983 [2024-07-14 04:02:21.900807] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.983 qpair failed and we were unable to recover it. 00:30:02.983 [2024-07-14 04:02:21.901013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.983 [2024-07-14 04:02:21.901188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.983 [2024-07-14 04:02:21.901216] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.983 qpair failed and we were unable to recover it. 00:30:02.983 [2024-07-14 04:02:21.901434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.983 [2024-07-14 04:02:21.901599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.983 [2024-07-14 04:02:21.901626] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.983 qpair failed and we were unable to recover it. 00:30:02.983 [2024-07-14 04:02:21.901855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.983 [2024-07-14 04:02:21.902059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.983 [2024-07-14 04:02:21.902086] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.983 qpair failed and we were unable to recover it. 00:30:02.983 [2024-07-14 04:02:21.902303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.983 [2024-07-14 04:02:21.902529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.983 [2024-07-14 04:02:21.902556] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.983 qpair failed and we were unable to recover it. 00:30:02.983 [2024-07-14 04:02:21.902783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.983 [2024-07-14 04:02:21.902982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.983 [2024-07-14 04:02:21.903013] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.983 qpair failed and we were unable to recover it. 00:30:02.983 [2024-07-14 04:02:21.903220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.983 [2024-07-14 04:02:21.903381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.983 [2024-07-14 04:02:21.903409] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.983 qpair failed and we were unable to recover it. 
00:30:02.983 [2024-07-14 04:02:21.903579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.983 [2024-07-14 04:02:21.903820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.983 [2024-07-14 04:02:21.903846] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.983 qpair failed and we were unable to recover it. 00:30:02.983 [2024-07-14 04:02:21.904057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.983 [2024-07-14 04:02:21.904261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.983 [2024-07-14 04:02:21.904289] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.984 qpair failed and we were unable to recover it. 00:30:02.984 [2024-07-14 04:02:21.904515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.984 [2024-07-14 04:02:21.904721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.984 [2024-07-14 04:02:21.904746] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.984 qpair failed and we were unable to recover it. 00:30:02.984 [2024-07-14 04:02:21.904967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.984 [2024-07-14 04:02:21.905135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.984 [2024-07-14 04:02:21.905165] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:02.984 qpair failed and we were unable to recover it. 00:30:03.253 [2024-07-14 04:02:21.905342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.253 [2024-07-14 04:02:21.905613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.253 [2024-07-14 04:02:21.905665] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.253 qpair failed and we were unable to recover it. 00:30:03.253 [2024-07-14 04:02:21.905892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.253 [2024-07-14 04:02:21.906063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.253 [2024-07-14 04:02:21.906093] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.253 qpair failed and we were unable to recover it. 00:30:03.253 [2024-07-14 04:02:21.906328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.253 [2024-07-14 04:02:21.906524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.253 [2024-07-14 04:02:21.906552] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.253 qpair failed and we were unable to recover it. 
00:30:03.253 [2024-07-14 04:02:21.906755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.253 [2024-07-14 04:02:21.906958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.253 [2024-07-14 04:02:21.906987] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.253 qpair failed and we were unable to recover it. 00:30:03.253 [2024-07-14 04:02:21.907190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.253 [2024-07-14 04:02:21.907391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.253 [2024-07-14 04:02:21.907419] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.253 qpair failed and we were unable to recover it. 00:30:03.253 [2024-07-14 04:02:21.907586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.253 [2024-07-14 04:02:21.907806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.253 [2024-07-14 04:02:21.907834] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.253 qpair failed and we were unable to recover it. 00:30:03.253 [2024-07-14 04:02:21.908030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.253 [2024-07-14 04:02:21.908250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.253 [2024-07-14 04:02:21.908277] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.253 qpair failed and we were unable to recover it. 00:30:03.253 [2024-07-14 04:02:21.908485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.253 [2024-07-14 04:02:21.908641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.253 [2024-07-14 04:02:21.908666] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.253 qpair failed and we were unable to recover it. 00:30:03.253 [2024-07-14 04:02:21.908820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.253 [2024-07-14 04:02:21.908996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.253 [2024-07-14 04:02:21.909024] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.253 qpair failed and we were unable to recover it. 00:30:03.253 [2024-07-14 04:02:21.909255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.253 [2024-07-14 04:02:21.909434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.253 [2024-07-14 04:02:21.909461] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.253 qpair failed and we were unable to recover it. 
00:30:03.253 [2024-07-14 04:02:21.909660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.253 [2024-07-14 04:02:21.909852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.254 [2024-07-14 04:02:21.909886] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.254 qpair failed and we were unable to recover it. 00:30:03.254 [2024-07-14 04:02:21.910087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.254 [2024-07-14 04:02:21.910258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.254 [2024-07-14 04:02:21.910286] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.254 qpair failed and we were unable to recover it. 00:30:03.254 [2024-07-14 04:02:21.910480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.254 [2024-07-14 04:02:21.910671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.254 [2024-07-14 04:02:21.910699] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.254 qpair failed and we were unable to recover it. 00:30:03.254 [2024-07-14 04:02:21.910906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.254 [2024-07-14 04:02:21.911098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.254 [2024-07-14 04:02:21.911126] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.254 qpair failed and we were unable to recover it. 00:30:03.254 [2024-07-14 04:02:21.911331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.254 [2024-07-14 04:02:21.911503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.254 [2024-07-14 04:02:21.911532] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.254 qpair failed and we were unable to recover it. 00:30:03.254 [2024-07-14 04:02:21.911728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.254 [2024-07-14 04:02:21.911951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.254 [2024-07-14 04:02:21.911980] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.254 qpair failed and we were unable to recover it. 00:30:03.254 [2024-07-14 04:02:21.912184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.254 [2024-07-14 04:02:21.912405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.254 [2024-07-14 04:02:21.912434] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.254 qpair failed and we were unable to recover it. 
00:30:03.254 [2024-07-14 04:02:21.912746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.254 [2024-07-14 04:02:21.913097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.254 [2024-07-14 04:02:21.913165] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.254 qpair failed and we were unable to recover it. 00:30:03.254 [2024-07-14 04:02:21.913391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.254 [2024-07-14 04:02:21.913623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.254 [2024-07-14 04:02:21.913649] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.254 qpair failed and we were unable to recover it. 00:30:03.254 [2024-07-14 04:02:21.913807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.254 [2024-07-14 04:02:21.914013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.254 [2024-07-14 04:02:21.914041] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.254 qpair failed and we were unable to recover it. 00:30:03.254 [2024-07-14 04:02:21.914240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.254 [2024-07-14 04:02:21.914421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.254 [2024-07-14 04:02:21.914446] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.254 qpair failed and we were unable to recover it. 00:30:03.254 [2024-07-14 04:02:21.914623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.254 [2024-07-14 04:02:21.914798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.254 [2024-07-14 04:02:21.914826] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.254 qpair failed and we were unable to recover it. 00:30:03.254 [2024-07-14 04:02:21.915025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.254 [2024-07-14 04:02:21.915208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.254 [2024-07-14 04:02:21.915233] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.254 qpair failed and we were unable to recover it. 00:30:03.254 [2024-07-14 04:02:21.915410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.254 [2024-07-14 04:02:21.915557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.254 [2024-07-14 04:02:21.915582] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.254 qpair failed and we were unable to recover it. 
00:30:03.254 [2024-07-14 04:02:21.915748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.254 [2024-07-14 04:02:21.915954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.254 [2024-07-14 04:02:21.915980] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.254 qpair failed and we were unable to recover it. 00:30:03.254 [2024-07-14 04:02:21.916141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.254 [2024-07-14 04:02:21.916291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.254 [2024-07-14 04:02:21.916317] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.254 qpair failed and we were unable to recover it. 00:30:03.254 [2024-07-14 04:02:21.916467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.254 [2024-07-14 04:02:21.916617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.254 [2024-07-14 04:02:21.916643] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.254 qpair failed and we were unable to recover it. 00:30:03.254 [2024-07-14 04:02:21.916819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.254 [2024-07-14 04:02:21.916987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.254 [2024-07-14 04:02:21.917015] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.254 qpair failed and we were unable to recover it. 00:30:03.254 [2024-07-14 04:02:21.917214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.254 [2024-07-14 04:02:21.917436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.254 [2024-07-14 04:02:21.917464] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.254 qpair failed and we were unable to recover it. 00:30:03.254 [2024-07-14 04:02:21.917626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.254 [2024-07-14 04:02:21.917879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.254 [2024-07-14 04:02:21.917908] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.254 qpair failed and we were unable to recover it. 00:30:03.254 [2024-07-14 04:02:21.918087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.254 [2024-07-14 04:02:21.918346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.254 [2024-07-14 04:02:21.918395] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.254 qpair failed and we were unable to recover it. 
00:30:03.254 [2024-07-14 04:02:21.918573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.254 [2024-07-14 04:02:21.918796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.254 [2024-07-14 04:02:21.918824] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.254 qpair failed and we were unable to recover it. 00:30:03.254 [2024-07-14 04:02:21.919022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.254 [2024-07-14 04:02:21.919254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.254 [2024-07-14 04:02:21.919279] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.254 qpair failed and we were unable to recover it. 00:30:03.254 [2024-07-14 04:02:21.919435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.254 [2024-07-14 04:02:21.919798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.254 [2024-07-14 04:02:21.919864] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.254 qpair failed and we were unable to recover it. 00:30:03.254 [2024-07-14 04:02:21.920067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.254 [2024-07-14 04:02:21.920295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.254 [2024-07-14 04:02:21.920343] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.254 qpair failed and we were unable to recover it. 00:30:03.254 [2024-07-14 04:02:21.920554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.254 [2024-07-14 04:02:21.920751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.254 [2024-07-14 04:02:21.920778] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.254 qpair failed and we were unable to recover it. 00:30:03.254 [2024-07-14 04:02:21.921007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.254 [2024-07-14 04:02:21.921202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.254 [2024-07-14 04:02:21.921230] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.254 qpair failed and we were unable to recover it. 00:30:03.254 [2024-07-14 04:02:21.921422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.254 [2024-07-14 04:02:21.921617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.254 [2024-07-14 04:02:21.921645] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.254 qpair failed and we were unable to recover it. 
00:30:03.255 [2024-07-14 04:02:21.921826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.255 [2024-07-14 04:02:21.922067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.255 [2024-07-14 04:02:21.922095] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.255 qpair failed and we were unable to recover it. 00:30:03.255 [2024-07-14 04:02:21.922299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.255 [2024-07-14 04:02:21.922476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.255 [2024-07-14 04:02:21.922518] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.255 qpair failed and we were unable to recover it. 00:30:03.255 [2024-07-14 04:02:21.922736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.255 [2024-07-14 04:02:21.922969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.255 [2024-07-14 04:02:21.922995] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.255 qpair failed and we were unable to recover it. 00:30:03.255 [2024-07-14 04:02:21.923150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.255 [2024-07-14 04:02:21.923408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.255 [2024-07-14 04:02:21.923454] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.255 qpair failed and we were unable to recover it. 00:30:03.255 [2024-07-14 04:02:21.923619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.255 [2024-07-14 04:02:21.923842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.255 [2024-07-14 04:02:21.923875] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.255 qpair failed and we were unable to recover it. 00:30:03.255 [2024-07-14 04:02:21.924080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.255 [2024-07-14 04:02:21.924270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.255 [2024-07-14 04:02:21.924295] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.255 qpair failed and we were unable to recover it. 00:30:03.255 [2024-07-14 04:02:21.924474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.255 [2024-07-14 04:02:21.924651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.255 [2024-07-14 04:02:21.924676] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.255 qpair failed and we were unable to recover it. 
00:30:03.255 [2024-07-14 04:02:21.924872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.255 [2024-07-14 04:02:21.925044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.255 [2024-07-14 04:02:21.925073] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.255 qpair failed and we were unable to recover it. 00:30:03.255 [2024-07-14 04:02:21.925271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.255 [2024-07-14 04:02:21.925490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.255 [2024-07-14 04:02:21.925518] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.255 qpair failed and we were unable to recover it. 00:30:03.255 [2024-07-14 04:02:21.925728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.255 [2024-07-14 04:02:21.925889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.255 [2024-07-14 04:02:21.925916] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.255 qpair failed and we were unable to recover it. 00:30:03.255 [2024-07-14 04:02:21.926073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.255 [2024-07-14 04:02:21.926309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.255 [2024-07-14 04:02:21.926338] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.255 qpair failed and we were unable to recover it. 00:30:03.255 [2024-07-14 04:02:21.926669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.255 [2024-07-14 04:02:21.926893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.255 [2024-07-14 04:02:21.926921] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.255 qpair failed and we were unable to recover it. 00:30:03.255 [2024-07-14 04:02:21.927126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.255 [2024-07-14 04:02:21.927359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.255 [2024-07-14 04:02:21.927408] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.255 qpair failed and we were unable to recover it. 00:30:03.255 [2024-07-14 04:02:21.927587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.255 [2024-07-14 04:02:21.927788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.255 [2024-07-14 04:02:21.927816] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.255 qpair failed and we were unable to recover it. 
00:30:03.255 [2024-07-14 04:02:21.927987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.255 [2024-07-14 04:02:21.928188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.255 [2024-07-14 04:02:21.928213] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.255 qpair failed and we were unable to recover it. 00:30:03.255 [2024-07-14 04:02:21.928418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.255 [2024-07-14 04:02:21.928605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.255 [2024-07-14 04:02:21.928635] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.255 qpair failed and we were unable to recover it. 00:30:03.255 [2024-07-14 04:02:21.928826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.255 [2024-07-14 04:02:21.929061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.255 [2024-07-14 04:02:21.929090] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.255 qpair failed and we were unable to recover it. 00:30:03.255 [2024-07-14 04:02:21.929275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.255 [2024-07-14 04:02:21.929419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.255 [2024-07-14 04:02:21.929444] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.255 qpair failed and we were unable to recover it. 00:30:03.255 [2024-07-14 04:02:21.929624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.255 [2024-07-14 04:02:21.929804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.255 [2024-07-14 04:02:21.929832] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.255 qpair failed and we were unable to recover it. 00:30:03.255 [2024-07-14 04:02:21.930026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.255 [2024-07-14 04:02:21.930205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.255 [2024-07-14 04:02:21.930247] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.255 qpair failed and we were unable to recover it. 00:30:03.255 [2024-07-14 04:02:21.930439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.255 [2024-07-14 04:02:21.930637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.255 [2024-07-14 04:02:21.930662] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.255 qpair failed and we were unable to recover it. 
00:30:03.255 [2024-07-14 04:02:21.930846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.255 [2024-07-14 04:02:21.931007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.255 [2024-07-14 04:02:21.931033] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.255 qpair failed and we were unable to recover it. 00:30:03.255 [2024-07-14 04:02:21.931190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.255 [2024-07-14 04:02:21.931393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.255 [2024-07-14 04:02:21.931425] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.255 qpair failed and we were unable to recover it. 00:30:03.255 [2024-07-14 04:02:21.931600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.255 [2024-07-14 04:02:21.931792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.255 [2024-07-14 04:02:21.931820] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.255 qpair failed and we were unable to recover it. 00:30:03.255 [2024-07-14 04:02:21.932019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.255 [2024-07-14 04:02:21.932214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.255 [2024-07-14 04:02:21.932244] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.255 qpair failed and we were unable to recover it. 00:30:03.255 [2024-07-14 04:02:21.932445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.255 [2024-07-14 04:02:21.932673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.255 [2024-07-14 04:02:21.932701] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.255 qpair failed and we were unable to recover it. 00:30:03.255 [2024-07-14 04:02:21.932904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.255 [2024-07-14 04:02:21.933064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.255 [2024-07-14 04:02:21.933107] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.255 qpair failed and we were unable to recover it. 00:30:03.255 [2024-07-14 04:02:21.933332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.256 [2024-07-14 04:02:21.933521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.256 [2024-07-14 04:02:21.933548] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.256 qpair failed and we were unable to recover it. 
00:30:03.256 [2024-07-14 04:02:21.933761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.256 [2024-07-14 04:02:21.933994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.256 [2024-07-14 04:02:21.934023] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.256 qpair failed and we were unable to recover it. 00:30:03.256 [2024-07-14 04:02:21.934197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.256 [2024-07-14 04:02:21.934480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.256 [2024-07-14 04:02:21.934542] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.256 qpair failed and we were unable to recover it. 00:30:03.256 [2024-07-14 04:02:21.934785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.256 [2024-07-14 04:02:21.934942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.256 [2024-07-14 04:02:21.934968] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.256 qpair failed and we were unable to recover it. 00:30:03.256 [2024-07-14 04:02:21.935177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.256 [2024-07-14 04:02:21.935365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.256 [2024-07-14 04:02:21.935393] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.256 qpair failed and we were unable to recover it. 00:30:03.256 [2024-07-14 04:02:21.935621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.256 [2024-07-14 04:02:21.935823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.256 [2024-07-14 04:02:21.935855] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.256 qpair failed and we were unable to recover it. 00:30:03.256 [2024-07-14 04:02:21.936063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.256 [2024-07-14 04:02:21.936247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.256 [2024-07-14 04:02:21.936272] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.256 qpair failed and we were unable to recover it. 00:30:03.256 [2024-07-14 04:02:21.936442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.256 [2024-07-14 04:02:21.936675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.256 [2024-07-14 04:02:21.936703] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.256 qpair failed and we were unable to recover it. 
00:30:03.256 [2024-07-14 04:02:21.936899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.256 [2024-07-14 04:02:21.937067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.256 [2024-07-14 04:02:21.937095] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.256 qpair failed and we were unable to recover it. 00:30:03.256 [2024-07-14 04:02:21.937299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.256 [2024-07-14 04:02:21.937579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.256 [2024-07-14 04:02:21.937630] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.256 qpair failed and we were unable to recover it. 00:30:03.256 [2024-07-14 04:02:21.937838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.256 [2024-07-14 04:02:21.938022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.256 [2024-07-14 04:02:21.938052] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.256 qpair failed and we were unable to recover it. 00:30:03.256 [2024-07-14 04:02:21.938282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.256 [2024-07-14 04:02:21.938502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.256 [2024-07-14 04:02:21.938530] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.256 qpair failed and we were unable to recover it. 00:30:03.256 [2024-07-14 04:02:21.938755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.256 [2024-07-14 04:02:21.938937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.256 [2024-07-14 04:02:21.938965] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.256 qpair failed and we were unable to recover it. 00:30:03.256 [2024-07-14 04:02:21.939186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.256 [2024-07-14 04:02:21.939415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.256 [2024-07-14 04:02:21.939463] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.256 qpair failed and we were unable to recover it. 00:30:03.256 [2024-07-14 04:02:21.939684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.256 [2024-07-14 04:02:21.939918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.256 [2024-07-14 04:02:21.939947] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.256 qpair failed and we were unable to recover it. 
00:30:03.256 [2024-07-14 04:02:21.940118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.256 [2024-07-14 04:02:21.940338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.256 [2024-07-14 04:02:21.940370] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.256 qpair failed and we were unable to recover it. 00:30:03.256 [2024-07-14 04:02:21.940686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.256 [2024-07-14 04:02:21.940984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.256 [2024-07-14 04:02:21.941013] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.256 qpair failed and we were unable to recover it. 00:30:03.256 [2024-07-14 04:02:21.941211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.256 [2024-07-14 04:02:21.941406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.256 [2024-07-14 04:02:21.941435] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.256 qpair failed and we were unable to recover it. 00:30:03.256 [2024-07-14 04:02:21.941613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.256 [2024-07-14 04:02:21.941830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.256 [2024-07-14 04:02:21.941858] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.256 qpair failed and we were unable to recover it. 00:30:03.256 [2024-07-14 04:02:21.942070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.256 [2024-07-14 04:02:21.942267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.256 [2024-07-14 04:02:21.942294] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.256 qpair failed and we were unable to recover it. 00:30:03.256 [2024-07-14 04:02:21.942488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.256 [2024-07-14 04:02:21.942713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.256 [2024-07-14 04:02:21.942741] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.256 qpair failed and we were unable to recover it. 00:30:03.256 [2024-07-14 04:02:21.942915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.256 [2024-07-14 04:02:21.943125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.256 [2024-07-14 04:02:21.943149] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.256 qpair failed and we were unable to recover it. 
00:30:03.256 [2024-07-14 04:02:21.943359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.256 [2024-07-14 04:02:21.943540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.256 [2024-07-14 04:02:21.943585] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.256 qpair failed and we were unable to recover it. 00:30:03.256 [2024-07-14 04:02:21.943779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.256 [2024-07-14 04:02:21.943965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.256 [2024-07-14 04:02:21.943990] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.256 qpair failed and we were unable to recover it. 00:30:03.256 [2024-07-14 04:02:21.944187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.256 [2024-07-14 04:02:21.944409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.256 [2024-07-14 04:02:21.944436] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.256 qpair failed and we were unable to recover it. 00:30:03.256 [2024-07-14 04:02:21.944629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.256 [2024-07-14 04:02:21.944824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.256 [2024-07-14 04:02:21.944856] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.256 qpair failed and we were unable to recover it. 00:30:03.256 [2024-07-14 04:02:21.945047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.256 [2024-07-14 04:02:21.945258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.256 [2024-07-14 04:02:21.945314] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.256 qpair failed and we were unable to recover it. 00:30:03.256 [2024-07-14 04:02:21.945511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.256 [2024-07-14 04:02:21.945703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.256 [2024-07-14 04:02:21.945730] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.256 qpair failed and we were unable to recover it. 00:30:03.256 [2024-07-14 04:02:21.945926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.256 [2024-07-14 04:02:21.946096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.256 [2024-07-14 04:02:21.946124] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.256 qpair failed and we were unable to recover it. 
00:30:03.256 [2024-07-14 04:02:21.946317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.256 [2024-07-14 04:02:21.946486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.256 [2024-07-14 04:02:21.946514] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.256 qpair failed and we were unable to recover it. 00:30:03.256 [2024-07-14 04:02:21.946712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.256 [2024-07-14 04:02:21.946860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.257 [2024-07-14 04:02:21.946894] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.257 qpair failed and we were unable to recover it. 00:30:03.257 [2024-07-14 04:02:21.947046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.257 [2024-07-14 04:02:21.947222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.257 [2024-07-14 04:02:21.947266] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.257 qpair failed and we were unable to recover it. 00:30:03.257 [2024-07-14 04:02:21.947497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.257 [2024-07-14 04:02:21.947684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.257 [2024-07-14 04:02:21.947711] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.257 qpair failed and we were unable to recover it. 00:30:03.257 [2024-07-14 04:02:21.947913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.257 [2024-07-14 04:02:21.948105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.257 [2024-07-14 04:02:21.948133] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.257 qpair failed and we were unable to recover it. 00:30:03.257 [2024-07-14 04:02:21.948322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.257 [2024-07-14 04:02:21.948515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.257 [2024-07-14 04:02:21.948542] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.257 qpair failed and we were unable to recover it. 00:30:03.257 [2024-07-14 04:02:21.948739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.257 [2024-07-14 04:02:21.948909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.257 [2024-07-14 04:02:21.948939] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.257 qpair failed and we were unable to recover it. 
00:30:03.257 [2024-07-14 04:02:21.949139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.257 [2024-07-14 04:02:21.949332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.257 [2024-07-14 04:02:21.949359] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.257 qpair failed and we were unable to recover it. 00:30:03.257 [2024-07-14 04:02:21.949528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.257 [2024-07-14 04:02:21.949720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.257 [2024-07-14 04:02:21.949748] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.257 qpair failed and we were unable to recover it. 00:30:03.257 [2024-07-14 04:02:21.949948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.257 [2024-07-14 04:02:21.950153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.257 [2024-07-14 04:02:21.950181] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.257 qpair failed and we were unable to recover it. 00:30:03.257 [2024-07-14 04:02:21.950407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.257 [2024-07-14 04:02:21.950600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.257 [2024-07-14 04:02:21.950628] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.257 qpair failed and we were unable to recover it. 00:30:03.257 [2024-07-14 04:02:21.950825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.257 [2024-07-14 04:02:21.951026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.257 [2024-07-14 04:02:21.951054] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.257 qpair failed and we were unable to recover it. 00:30:03.257 [2024-07-14 04:02:21.951246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.257 [2024-07-14 04:02:21.951438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.257 [2024-07-14 04:02:21.951465] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.257 qpair failed and we were unable to recover it. 00:30:03.257 [2024-07-14 04:02:21.951640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.257 [2024-07-14 04:02:21.951786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.257 [2024-07-14 04:02:21.951811] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.257 qpair failed and we were unable to recover it. 
00:30:03.257 [2024-07-14 04:02:21.952027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.257 [2024-07-14 04:02:21.952216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.257 [2024-07-14 04:02:21.952241] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.257 qpair failed and we were unable to recover it. 00:30:03.257 [2024-07-14 04:02:21.952439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.257 [2024-07-14 04:02:21.952638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.257 [2024-07-14 04:02:21.952666] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.257 qpair failed and we were unable to recover it. 00:30:03.257 [2024-07-14 04:02:21.952873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.257 [2024-07-14 04:02:21.953077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.257 [2024-07-14 04:02:21.953105] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.257 qpair failed and we were unable to recover it. 00:30:03.257 [2024-07-14 04:02:21.953342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.257 [2024-07-14 04:02:21.953541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.257 [2024-07-14 04:02:21.953569] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.257 qpair failed and we were unable to recover it. 00:30:03.257 [2024-07-14 04:02:21.953791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.257 [2024-07-14 04:02:21.953982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.257 [2024-07-14 04:02:21.954011] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.257 qpair failed and we were unable to recover it. 00:30:03.257 [2024-07-14 04:02:21.954206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.257 [2024-07-14 04:02:21.954402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.257 [2024-07-14 04:02:21.954430] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.257 qpair failed and we were unable to recover it. 00:30:03.257 [2024-07-14 04:02:21.954601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.257 [2024-07-14 04:02:21.954774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.257 [2024-07-14 04:02:21.954801] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.257 qpair failed and we were unable to recover it. 
00:30:03.257 [2024-07-14 04:02:21.955012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.257 [2024-07-14 04:02:21.955171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.257 [2024-07-14 04:02:21.955213] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.257 qpair failed and we were unable to recover it. 00:30:03.257 [2024-07-14 04:02:21.955380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.257 [2024-07-14 04:02:21.955577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.257 [2024-07-14 04:02:21.955604] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.257 qpair failed and we were unable to recover it. 00:30:03.257 [2024-07-14 04:02:21.955784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.257 [2024-07-14 04:02:21.955974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.257 [2024-07-14 04:02:21.956009] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.257 qpair failed and we were unable to recover it. 00:30:03.257 [2024-07-14 04:02:21.956209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.257 [2024-07-14 04:02:21.956381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.257 [2024-07-14 04:02:21.956409] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.257 qpair failed and we were unable to recover it. 00:30:03.257 [2024-07-14 04:02:21.956627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.257 [2024-07-14 04:02:21.956802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.257 [2024-07-14 04:02:21.956832] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.257 qpair failed and we were unable to recover it. 00:30:03.257 [2024-07-14 04:02:21.957038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.257 [2024-07-14 04:02:21.957216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.257 [2024-07-14 04:02:21.957242] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.257 qpair failed and we were unable to recover it. 00:30:03.257 [2024-07-14 04:02:21.957416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.257 [2024-07-14 04:02:21.957610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.257 [2024-07-14 04:02:21.957638] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.257 qpair failed and we were unable to recover it. 
00:30:03.257 [2024-07-14 04:02:21.957840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.257 [2024-07-14 04:02:21.958056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.257 [2024-07-14 04:02:21.958086] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.257 qpair failed and we were unable to recover it. 00:30:03.257 [2024-07-14 04:02:21.958282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.257 [2024-07-14 04:02:21.958475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.257 [2024-07-14 04:02:21.958503] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.257 qpair failed and we were unable to recover it. 00:30:03.257 [2024-07-14 04:02:21.958695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.257 [2024-07-14 04:02:21.958891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.257 [2024-07-14 04:02:21.958935] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.258 qpair failed and we were unable to recover it. 00:30:03.258 [2024-07-14 04:02:21.959097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.258 [2024-07-14 04:02:21.959331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.258 [2024-07-14 04:02:21.959359] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.258 qpair failed and we were unable to recover it. 00:30:03.258 [2024-07-14 04:02:21.959589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.258 [2024-07-14 04:02:21.959738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.258 [2024-07-14 04:02:21.959763] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.258 qpair failed and we were unable to recover it. 00:30:03.258 [2024-07-14 04:02:21.959975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.258 [2024-07-14 04:02:21.960126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.258 [2024-07-14 04:02:21.960152] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.258 qpair failed and we were unable to recover it. 00:30:03.258 [2024-07-14 04:02:21.960331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.258 [2024-07-14 04:02:21.960503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.258 [2024-07-14 04:02:21.960528] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.258 qpair failed and we were unable to recover it. 
00:30:03.258 [2024-07-14 04:02:21.960710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.258 [2024-07-14 04:02:21.960910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.258 [2024-07-14 04:02:21.960940] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.258 qpair failed and we were unable to recover it. 00:30:03.258 [2024-07-14 04:02:21.961164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.258 [2024-07-14 04:02:21.961337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.258 [2024-07-14 04:02:21.961364] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.258 qpair failed and we were unable to recover it. 00:30:03.258 [2024-07-14 04:02:21.961545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.258 [2024-07-14 04:02:21.961691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.258 [2024-07-14 04:02:21.961732] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.258 qpair failed and we were unable to recover it. 00:30:03.258 [2024-07-14 04:02:21.961958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.258 [2024-07-14 04:02:21.962160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.258 [2024-07-14 04:02:21.962188] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.258 qpair failed and we were unable to recover it. 00:30:03.258 [2024-07-14 04:02:21.962396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.258 [2024-07-14 04:02:21.962573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.258 [2024-07-14 04:02:21.962599] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.258 qpair failed and we were unable to recover it. 00:30:03.258 [2024-07-14 04:02:21.962797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.258 [2024-07-14 04:02:21.963022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.258 [2024-07-14 04:02:21.963052] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.258 qpair failed and we were unable to recover it. 00:30:03.258 [2024-07-14 04:02:21.963249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.258 [2024-07-14 04:02:21.963416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.258 [2024-07-14 04:02:21.963445] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.258 qpair failed and we were unable to recover it. 
00:30:03.258 [2024-07-14 04:02:21.963647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.258 [2024-07-14 04:02:21.963824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.258 [2024-07-14 04:02:21.963851] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.258 qpair failed and we were unable to recover it. 00:30:03.258 [2024-07-14 04:02:21.964087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.258 [2024-07-14 04:02:21.964270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.258 [2024-07-14 04:02:21.964295] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.258 qpair failed and we were unable to recover it. 00:30:03.258 [2024-07-14 04:02:21.964493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.258 [2024-07-14 04:02:21.964710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.258 [2024-07-14 04:02:21.964738] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.258 qpair failed and we were unable to recover it. 00:30:03.258 [2024-07-14 04:02:21.964940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.258 [2024-07-14 04:02:21.965189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.258 [2024-07-14 04:02:21.965254] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.258 qpair failed and we were unable to recover it. 00:30:03.258 [2024-07-14 04:02:21.965450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.258 [2024-07-14 04:02:21.965680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.258 [2024-07-14 04:02:21.965705] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.258 qpair failed and we were unable to recover it. 00:30:03.258 [2024-07-14 04:02:21.965919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.258 [2024-07-14 04:02:21.966100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.258 [2024-07-14 04:02:21.966127] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.258 qpair failed and we were unable to recover it. 00:30:03.258 [2024-07-14 04:02:21.966327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.258 [2024-07-14 04:02:21.966538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.258 [2024-07-14 04:02:21.966592] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.258 qpair failed and we were unable to recover it. 
00:30:03.258 [2024-07-14 04:02:21.966792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.258 [2024-07-14 04:02:21.966946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.258 [2024-07-14 04:02:21.966989] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.258 qpair failed and we were unable to recover it. 00:30:03.258 [2024-07-14 04:02:21.967189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.258 [2024-07-14 04:02:21.967348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.258 [2024-07-14 04:02:21.967375] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.258 qpair failed and we were unable to recover it. 00:30:03.258 [2024-07-14 04:02:21.967533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.258 [2024-07-14 04:02:21.967752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.258 [2024-07-14 04:02:21.967779] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.258 qpair failed and we were unable to recover it. 00:30:03.258 [2024-07-14 04:02:21.967978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.258 [2024-07-14 04:02:21.968174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.259 [2024-07-14 04:02:21.968203] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.259 qpair failed and we were unable to recover it. 00:30:03.259 [2024-07-14 04:02:21.968429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.259 [2024-07-14 04:02:21.968595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.259 [2024-07-14 04:02:21.968625] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.259 qpair failed and we were unable to recover it. 00:30:03.259 [2024-07-14 04:02:21.968791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.259 [2024-07-14 04:02:21.968987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.259 [2024-07-14 04:02:21.969016] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.259 qpair failed and we were unable to recover it. 00:30:03.259 [2024-07-14 04:02:21.969189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.259 [2024-07-14 04:02:21.969386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.259 [2024-07-14 04:02:21.969414] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.259 qpair failed and we were unable to recover it. 
00:30:03.259 [2024-07-14 04:02:21.969576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.259 [2024-07-14 04:02:21.969772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.259 [2024-07-14 04:02:21.969802] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.259 qpair failed and we were unable to recover it. 00:30:03.259 [2024-07-14 04:02:21.970019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.259 [2024-07-14 04:02:21.970248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.259 [2024-07-14 04:02:21.970276] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.259 qpair failed and we were unable to recover it. 00:30:03.259 [2024-07-14 04:02:21.970502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.259 [2024-07-14 04:02:21.970695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.259 [2024-07-14 04:02:21.970722] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.259 qpair failed and we were unable to recover it. 00:30:03.259 [2024-07-14 04:02:21.970919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.259 [2024-07-14 04:02:21.971093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.259 [2024-07-14 04:02:21.971120] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.259 qpair failed and we were unable to recover it. 00:30:03.259 [2024-07-14 04:02:21.971314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.259 [2024-07-14 04:02:21.971545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.259 [2024-07-14 04:02:21.971573] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.259 qpair failed and we were unable to recover it. 00:30:03.259 [2024-07-14 04:02:21.971785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.259 [2024-07-14 04:02:21.971986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.259 [2024-07-14 04:02:21.972014] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.259 qpair failed and we were unable to recover it. 00:30:03.259 [2024-07-14 04:02:21.972183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.259 [2024-07-14 04:02:21.972405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.259 [2024-07-14 04:02:21.972434] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.259 qpair failed and we were unable to recover it. 
00:30:03.259 [2024-07-14 04:02:21.972829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.259 [2024-07-14 04:02:21.973069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.259 [2024-07-14 04:02:21.973095] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.259 qpair failed and we were unable to recover it. 00:30:03.259 [2024-07-14 04:02:21.973252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.259 [2024-07-14 04:02:21.973440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.259 [2024-07-14 04:02:21.973469] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.259 qpair failed and we were unable to recover it. 00:30:03.259 [2024-07-14 04:02:21.973666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.259 [2024-07-14 04:02:21.973906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.259 [2024-07-14 04:02:21.973935] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.259 qpair failed and we were unable to recover it. 00:30:03.259 [2024-07-14 04:02:21.974136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.259 [2024-07-14 04:02:21.974333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.259 [2024-07-14 04:02:21.974363] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.259 qpair failed and we were unable to recover it. 00:30:03.259 [2024-07-14 04:02:21.974572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.259 [2024-07-14 04:02:21.974796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.259 [2024-07-14 04:02:21.974824] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.259 qpair failed and we were unable to recover it. 00:30:03.259 [2024-07-14 04:02:21.975030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.259 [2024-07-14 04:02:21.975289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.259 [2024-07-14 04:02:21.975340] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.259 qpair failed and we were unable to recover it. 00:30:03.259 [2024-07-14 04:02:21.975574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.259 [2024-07-14 04:02:21.975772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.259 [2024-07-14 04:02:21.975800] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.259 qpair failed and we were unable to recover it. 
00:30:03.259 [2024-07-14 04:02:21.976034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.259 [2024-07-14 04:02:21.976207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.259 [2024-07-14 04:02:21.976235] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.259 qpair failed and we were unable to recover it. 00:30:03.259 [2024-07-14 04:02:21.976431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.259 [2024-07-14 04:02:21.976598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.259 [2024-07-14 04:02:21.976625] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.259 qpair failed and we were unable to recover it. 00:30:03.259 [2024-07-14 04:02:21.976820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.259 [2024-07-14 04:02:21.977057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.259 [2024-07-14 04:02:21.977085] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.259 qpair failed and we were unable to recover it. 00:30:03.259 [2024-07-14 04:02:21.977295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.259 [2024-07-14 04:02:21.977452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.259 [2024-07-14 04:02:21.977476] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.259 qpair failed and we were unable to recover it. 00:30:03.259 [2024-07-14 04:02:21.977655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.259 [2024-07-14 04:02:21.977807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.259 [2024-07-14 04:02:21.977834] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.259 qpair failed and we were unable to recover it. 00:30:03.259 [2024-07-14 04:02:21.978043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.259 [2024-07-14 04:02:21.978265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.259 [2024-07-14 04:02:21.978293] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.259 qpair failed and we were unable to recover it. 00:30:03.259 [2024-07-14 04:02:21.978491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.259 [2024-07-14 04:02:21.978724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.259 [2024-07-14 04:02:21.978782] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.259 qpair failed and we were unable to recover it. 
00:30:03.259 [2024-07-14 04:02:21.978983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.259 [2024-07-14 04:02:21.979158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.259 [2024-07-14 04:02:21.979200] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.259 qpair failed and we were unable to recover it. 00:30:03.259 [2024-07-14 04:02:21.979368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.259 [2024-07-14 04:02:21.979567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.259 [2024-07-14 04:02:21.979592] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.259 qpair failed and we were unable to recover it. 00:30:03.259 [2024-07-14 04:02:21.979743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.259 [2024-07-14 04:02:21.979898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.259 [2024-07-14 04:02:21.979941] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.259 qpair failed and we were unable to recover it. 00:30:03.259 [2024-07-14 04:02:21.980157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.259 [2024-07-14 04:02:21.980308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.259 [2024-07-14 04:02:21.980349] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.259 qpair failed and we were unable to recover it. 00:30:03.259 [2024-07-14 04:02:21.980555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.259 [2024-07-14 04:02:21.980733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.259 [2024-07-14 04:02:21.980758] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.259 qpair failed and we were unable to recover it. 00:30:03.259 [2024-07-14 04:02:21.980967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.259 [2024-07-14 04:02:21.981160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.260 [2024-07-14 04:02:21.981188] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.260 qpair failed and we were unable to recover it. 00:30:03.260 [2024-07-14 04:02:21.981398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.260 [2024-07-14 04:02:21.981573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.260 [2024-07-14 04:02:21.981615] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.260 qpair failed and we were unable to recover it. 
00:30:03.260 [2024-07-14 04:02:21.981837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.260 [2024-07-14 04:02:21.982043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.260 [2024-07-14 04:02:21.982070] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.260 qpair failed and we were unable to recover it. 00:30:03.260 [2024-07-14 04:02:21.982280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.260 [2024-07-14 04:02:21.982451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.260 [2024-07-14 04:02:21.982480] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.260 qpair failed and we were unable to recover it. 00:30:03.260 [2024-07-14 04:02:21.982673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.260 [2024-07-14 04:02:21.982876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.260 [2024-07-14 04:02:21.982905] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.260 qpair failed and we were unable to recover it. 00:30:03.260 [2024-07-14 04:02:21.983070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.260 [2024-07-14 04:02:21.983272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.260 [2024-07-14 04:02:21.983300] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.260 qpair failed and we were unable to recover it. 00:30:03.260 [2024-07-14 04:02:21.983499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.260 [2024-07-14 04:02:21.983693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.260 [2024-07-14 04:02:21.983722] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.260 qpair failed and we were unable to recover it. 00:30:03.260 [2024-07-14 04:02:21.983921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.260 [2024-07-14 04:02:21.984120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.260 [2024-07-14 04:02:21.984148] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.260 qpair failed and we were unable to recover it. 00:30:03.260 [2024-07-14 04:02:21.984319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.260 [2024-07-14 04:02:21.984515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.260 [2024-07-14 04:02:21.984540] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.260 qpair failed and we were unable to recover it. 
00:30:03.260 [2024-07-14 04:02:21.984733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.260 [2024-07-14 04:02:21.984902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.260 [2024-07-14 04:02:21.984932] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.260 qpair failed and we were unable to recover it. 00:30:03.260 [2024-07-14 04:02:21.985132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.260 [2024-07-14 04:02:21.985331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.260 [2024-07-14 04:02:21.985358] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.260 qpair failed and we were unable to recover it. 00:30:03.260 [2024-07-14 04:02:21.985538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.260 [2024-07-14 04:02:21.985687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.260 [2024-07-14 04:02:21.985713] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.260 qpair failed and we were unable to recover it. 00:30:03.260 [2024-07-14 04:02:21.985929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.260 [2024-07-14 04:02:21.986122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.260 [2024-07-14 04:02:21.986150] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.260 qpair failed and we were unable to recover it. 00:30:03.260 [2024-07-14 04:02:21.986317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.260 [2024-07-14 04:02:21.986486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.260 [2024-07-14 04:02:21.986514] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.260 qpair failed and we were unable to recover it. 00:30:03.260 [2024-07-14 04:02:21.986687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.260 [2024-07-14 04:02:21.986886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.260 [2024-07-14 04:02:21.986926] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.260 qpair failed and we were unable to recover it. 00:30:03.260 [2024-07-14 04:02:21.987093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.260 [2024-07-14 04:02:21.987264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.260 [2024-07-14 04:02:21.987291] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.260 qpair failed and we were unable to recover it. 
00:30:03.260 [2024-07-14 04:02:21.987516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.260 [2024-07-14 04:02:21.987711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.260 [2024-07-14 04:02:21.987739] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.260 qpair failed and we were unable to recover it. 00:30:03.260 [2024-07-14 04:02:21.987957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.260 [2024-07-14 04:02:21.988158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.260 [2024-07-14 04:02:21.988185] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.260 qpair failed and we were unable to recover it. 00:30:03.260 [2024-07-14 04:02:21.988356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.260 [2024-07-14 04:02:21.988554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.260 [2024-07-14 04:02:21.988580] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.260 qpair failed and we were unable to recover it. 00:30:03.260 [2024-07-14 04:02:21.988787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.260 [2024-07-14 04:02:21.988940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.260 [2024-07-14 04:02:21.988966] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.260 qpair failed and we were unable to recover it. 00:30:03.260 [2024-07-14 04:02:21.989133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.260 [2024-07-14 04:02:21.989337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.260 [2024-07-14 04:02:21.989364] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.260 qpair failed and we were unable to recover it. 00:30:03.260 [2024-07-14 04:02:21.989534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.260 [2024-07-14 04:02:21.989751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.260 [2024-07-14 04:02:21.989778] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.260 qpair failed and we were unable to recover it. 00:30:03.260 [2024-07-14 04:02:21.989972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.260 [2024-07-14 04:02:21.990149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.260 [2024-07-14 04:02:21.990174] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.260 qpair failed and we were unable to recover it. 
00:30:03.260 [2024-07-14 04:02:21.990350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.260 [2024-07-14 04:02:21.990528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.260 [2024-07-14 04:02:21.990553] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.260 qpair failed and we were unable to recover it. 00:30:03.260 [2024-07-14 04:02:21.990731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.260 [2024-07-14 04:02:21.990935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.260 [2024-07-14 04:02:21.990965] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.260 qpair failed and we were unable to recover it. 00:30:03.260 [2024-07-14 04:02:21.991188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.260 [2024-07-14 04:02:21.991383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.260 [2024-07-14 04:02:21.991411] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.260 qpair failed and we were unable to recover it. 00:30:03.260 [2024-07-14 04:02:21.991635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.260 [2024-07-14 04:02:21.991827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.260 [2024-07-14 04:02:21.991855] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.260 qpair failed and we were unable to recover it. 00:30:03.260 [2024-07-14 04:02:21.992075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.260 [2024-07-14 04:02:21.992265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.260 [2024-07-14 04:02:21.992293] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.260 qpair failed and we were unable to recover it. 00:30:03.260 [2024-07-14 04:02:21.992489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.260 [2024-07-14 04:02:21.992684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.260 [2024-07-14 04:02:21.992712] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.261 qpair failed and we were unable to recover it. 00:30:03.261 [2024-07-14 04:02:21.992939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.261 [2024-07-14 04:02:21.993172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.261 [2024-07-14 04:02:21.993200] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.261 qpair failed and we were unable to recover it. 
00:30:03.261 [2024-07-14 04:02:21.993374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.261 [2024-07-14 04:02:21.993640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.261 [2024-07-14 04:02:21.993693] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.261 qpair failed and we were unable to recover it. 00:30:03.261 [2024-07-14 04:02:21.993879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.261 [2024-07-14 04:02:21.994083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.261 [2024-07-14 04:02:21.994111] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.261 qpair failed and we were unable to recover it. 00:30:03.261 [2024-07-14 04:02:21.994319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.261 [2024-07-14 04:02:21.994547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.261 [2024-07-14 04:02:21.994575] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.261 qpair failed and we were unable to recover it. 00:30:03.261 [2024-07-14 04:02:21.994764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.261 [2024-07-14 04:02:21.995094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.261 [2024-07-14 04:02:21.995158] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.261 qpair failed and we were unable to recover it. 00:30:03.261 [2024-07-14 04:02:21.995368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.261 [2024-07-14 04:02:21.995611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.261 [2024-07-14 04:02:21.995665] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.261 qpair failed and we were unable to recover it. 00:30:03.261 [2024-07-14 04:02:21.995876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.261 [2024-07-14 04:02:21.996062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.261 [2024-07-14 04:02:21.996090] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.261 qpair failed and we were unable to recover it. 00:30:03.261 [2024-07-14 04:02:21.996311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.261 [2024-07-14 04:02:21.996474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.261 [2024-07-14 04:02:21.996502] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.261 qpair failed and we were unable to recover it. 
00:30:03.261 [2024-07-14 04:02:21.996723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.261 [2024-07-14 04:02:21.996942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.261 [2024-07-14 04:02:21.996971] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.261 qpair failed and we were unable to recover it. 00:30:03.261 [2024-07-14 04:02:21.997196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.261 [2024-07-14 04:02:21.997387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.261 [2024-07-14 04:02:21.997433] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.261 qpair failed and we were unable to recover it. 00:30:03.261 [2024-07-14 04:02:21.997627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.261 [2024-07-14 04:02:21.997824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.261 [2024-07-14 04:02:21.997852] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.261 qpair failed and we were unable to recover it. 00:30:03.261 [2024-07-14 04:02:21.998023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.261 [2024-07-14 04:02:21.998254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.261 [2024-07-14 04:02:21.998281] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.261 qpair failed and we were unable to recover it. 00:30:03.261 [2024-07-14 04:02:21.998641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.261 [2024-07-14 04:02:21.998858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.261 [2024-07-14 04:02:21.998892] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.261 qpair failed and we were unable to recover it. 00:30:03.261 [2024-07-14 04:02:21.999114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.261 [2024-07-14 04:02:21.999441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.261 [2024-07-14 04:02:21.999500] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.261 qpair failed and we were unable to recover it. 00:30:03.261 [2024-07-14 04:02:21.999683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.261 [2024-07-14 04:02:21.999907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.261 [2024-07-14 04:02:21.999936] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.261 qpair failed and we were unable to recover it. 
00:30:03.261 [2024-07-14 04:02:22.000131 through 04:02:22.062457] the same failure pattern repeats for every remaining connection attempt in this window: posix_sock_create: *ERROR*: connect() failed, errno = 111 (logged twice per attempt), nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420, and "qpair failed and we were unable to recover it." for each attempt.
00:30:03.266 [2024-07-14 04:02:22.062630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.266 [2024-07-14 04:02:22.062814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.266 [2024-07-14 04:02:22.062839] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.266 qpair failed and we were unable to recover it. 00:30:03.266 [2024-07-14 04:02:22.063006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.266 [2024-07-14 04:02:22.063162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.266 [2024-07-14 04:02:22.063187] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.266 qpair failed and we were unable to recover it. 00:30:03.266 [2024-07-14 04:02:22.063345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.266 [2024-07-14 04:02:22.063502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.266 [2024-07-14 04:02:22.063527] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.266 qpair failed and we were unable to recover it. 00:30:03.266 [2024-07-14 04:02:22.063736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.266 [2024-07-14 04:02:22.063920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.266 [2024-07-14 04:02:22.063951] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.266 qpair failed and we were unable to recover it. 00:30:03.266 [2024-07-14 04:02:22.064174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.266 [2024-07-14 04:02:22.064324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.266 [2024-07-14 04:02:22.064349] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.266 qpair failed and we were unable to recover it. 00:30:03.266 [2024-07-14 04:02:22.064533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.266 [2024-07-14 04:02:22.064767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.266 [2024-07-14 04:02:22.064800] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.266 qpair failed and we were unable to recover it. 00:30:03.266 [2024-07-14 04:02:22.065001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.266 [2024-07-14 04:02:22.065278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.266 [2024-07-14 04:02:22.065329] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.266 qpair failed and we were unable to recover it. 
00:30:03.266 [2024-07-14 04:02:22.065559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.266 [2024-07-14 04:02:22.065737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.266 [2024-07-14 04:02:22.065765] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.266 qpair failed and we were unable to recover it. 00:30:03.266 [2024-07-14 04:02:22.065939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.266 [2024-07-14 04:02:22.066103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.266 [2024-07-14 04:02:22.066145] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.266 qpair failed and we were unable to recover it. 00:30:03.266 [2024-07-14 04:02:22.066348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.266 [2024-07-14 04:02:22.066512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.266 [2024-07-14 04:02:22.066540] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.266 qpair failed and we were unable to recover it. 00:30:03.266 [2024-07-14 04:02:22.066707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.266 [2024-07-14 04:02:22.066900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.266 [2024-07-14 04:02:22.066930] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.266 qpair failed and we were unable to recover it. 00:30:03.266 [2024-07-14 04:02:22.067098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.266 [2024-07-14 04:02:22.067295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.266 [2024-07-14 04:02:22.067323] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.266 qpair failed and we were unable to recover it. 00:30:03.266 [2024-07-14 04:02:22.067516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.266 [2024-07-14 04:02:22.067696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.266 [2024-07-14 04:02:22.067723] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.266 qpair failed and we were unable to recover it. 00:30:03.266 [2024-07-14 04:02:22.067899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.266 [2024-07-14 04:02:22.068093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.266 [2024-07-14 04:02:22.068121] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.266 qpair failed and we were unable to recover it. 
00:30:03.267 [2024-07-14 04:02:22.068293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.267 [2024-07-14 04:02:22.068533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.267 [2024-07-14 04:02:22.068559] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.267 qpair failed and we were unable to recover it. 00:30:03.267 [2024-07-14 04:02:22.068716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.267 [2024-07-14 04:02:22.068916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.267 [2024-07-14 04:02:22.068950] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.267 qpair failed and we were unable to recover it. 00:30:03.267 [2024-07-14 04:02:22.069157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.267 [2024-07-14 04:02:22.069362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.267 [2024-07-14 04:02:22.069388] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.267 qpair failed and we were unable to recover it. 00:30:03.267 [2024-07-14 04:02:22.069543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.267 [2024-07-14 04:02:22.069717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.267 [2024-07-14 04:02:22.069744] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.267 qpair failed and we were unable to recover it. 00:30:03.267 [2024-07-14 04:02:22.069938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.267 [2024-07-14 04:02:22.070115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.267 [2024-07-14 04:02:22.070143] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.267 qpair failed and we were unable to recover it. 00:30:03.267 [2024-07-14 04:02:22.070371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.267 [2024-07-14 04:02:22.070519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.267 [2024-07-14 04:02:22.070544] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.267 qpair failed and we were unable to recover it. 00:30:03.267 [2024-07-14 04:02:22.070702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.267 [2024-07-14 04:02:22.070886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.267 [2024-07-14 04:02:22.070912] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.267 qpair failed and we were unable to recover it. 
00:30:03.267 [2024-07-14 04:02:22.071071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.267 [2024-07-14 04:02:22.071307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.267 [2024-07-14 04:02:22.071333] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.267 qpair failed and we were unable to recover it. 00:30:03.267 [2024-07-14 04:02:22.071536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.267 [2024-07-14 04:02:22.071764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.267 [2024-07-14 04:02:22.071792] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.267 qpair failed and we were unable to recover it. 00:30:03.267 [2024-07-14 04:02:22.071964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.267 [2024-07-14 04:02:22.072199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.267 [2024-07-14 04:02:22.072228] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.267 qpair failed and we were unable to recover it. 00:30:03.267 [2024-07-14 04:02:22.072406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.267 [2024-07-14 04:02:22.072552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.267 [2024-07-14 04:02:22.072578] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.267 qpair failed and we were unable to recover it. 00:30:03.267 [2024-07-14 04:02:22.072752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.267 [2024-07-14 04:02:22.072990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.267 [2024-07-14 04:02:22.073021] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.267 qpair failed and we were unable to recover it. 00:30:03.267 [2024-07-14 04:02:22.073180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.267 [2024-07-14 04:02:22.073358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.267 [2024-07-14 04:02:22.073384] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.267 qpair failed and we were unable to recover it. 00:30:03.267 [2024-07-14 04:02:22.073602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.267 [2024-07-14 04:02:22.073800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.267 [2024-07-14 04:02:22.073825] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.267 qpair failed and we were unable to recover it. 
00:30:03.267 [2024-07-14 04:02:22.074014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.267 [2024-07-14 04:02:22.074222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.267 [2024-07-14 04:02:22.074247] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.267 qpair failed and we were unable to recover it. 00:30:03.267 [2024-07-14 04:02:22.074431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.267 [2024-07-14 04:02:22.074584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.267 [2024-07-14 04:02:22.074609] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.267 qpair failed and we were unable to recover it. 00:30:03.267 [2024-07-14 04:02:22.074768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.267 [2024-07-14 04:02:22.074943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.267 [2024-07-14 04:02:22.074987] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.267 qpair failed and we were unable to recover it. 00:30:03.267 [2024-07-14 04:02:22.075187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.267 [2024-07-14 04:02:22.075421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.267 [2024-07-14 04:02:22.075448] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.267 qpair failed and we were unable to recover it. 00:30:03.267 [2024-07-14 04:02:22.075683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.267 [2024-07-14 04:02:22.075852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.267 [2024-07-14 04:02:22.075891] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.267 qpair failed and we were unable to recover it. 00:30:03.267 [2024-07-14 04:02:22.076095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.267 [2024-07-14 04:02:22.076319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.267 [2024-07-14 04:02:22.076348] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.267 qpair failed and we were unable to recover it. 00:30:03.267 [2024-07-14 04:02:22.076660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.267 [2024-07-14 04:02:22.076884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.267 [2024-07-14 04:02:22.076914] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.267 qpair failed and we were unable to recover it. 
00:30:03.267 [2024-07-14 04:02:22.077112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.267 [2024-07-14 04:02:22.077445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.267 [2024-07-14 04:02:22.077496] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.267 qpair failed and we were unable to recover it. 00:30:03.267 [2024-07-14 04:02:22.077720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.267 [2024-07-14 04:02:22.077989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.267 [2024-07-14 04:02:22.078018] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.267 qpair failed and we were unable to recover it. 00:30:03.267 [2024-07-14 04:02:22.078213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.267 [2024-07-14 04:02:22.078435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.268 [2024-07-14 04:02:22.078464] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.268 qpair failed and we were unable to recover it. 00:30:03.268 [2024-07-14 04:02:22.078784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.268 [2024-07-14 04:02:22.079042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.268 [2024-07-14 04:02:22.079071] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.268 qpair failed and we were unable to recover it. 00:30:03.268 [2024-07-14 04:02:22.079289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.268 [2024-07-14 04:02:22.079557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.268 [2024-07-14 04:02:22.079606] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.268 qpair failed and we were unable to recover it. 00:30:03.268 [2024-07-14 04:02:22.079829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.268 [2024-07-14 04:02:22.080019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.268 [2024-07-14 04:02:22.080061] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.268 qpair failed and we were unable to recover it. 00:30:03.268 [2024-07-14 04:02:22.080236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.268 [2024-07-14 04:02:22.080442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.268 [2024-07-14 04:02:22.080467] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.268 qpair failed and we were unable to recover it. 
00:30:03.268 [2024-07-14 04:02:22.080622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.268 [2024-07-14 04:02:22.080829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.268 [2024-07-14 04:02:22.080855] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.268 qpair failed and we were unable to recover it. 00:30:03.268 [2024-07-14 04:02:22.081102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.268 [2024-07-14 04:02:22.081345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.268 [2024-07-14 04:02:22.081399] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.268 qpair failed and we were unable to recover it. 00:30:03.268 [2024-07-14 04:02:22.081642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.268 [2024-07-14 04:02:22.081787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.268 [2024-07-14 04:02:22.081812] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.268 qpair failed and we were unable to recover it. 00:30:03.268 [2024-07-14 04:02:22.081992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.268 [2024-07-14 04:02:22.082181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.268 [2024-07-14 04:02:22.082209] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.268 qpair failed and we were unable to recover it. 00:30:03.268 [2024-07-14 04:02:22.082392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.268 [2024-07-14 04:02:22.082630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.268 [2024-07-14 04:02:22.082658] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.268 qpair failed and we were unable to recover it. 00:30:03.268 [2024-07-14 04:02:22.082818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.268 [2024-07-14 04:02:22.083050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.268 [2024-07-14 04:02:22.083079] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.268 qpair failed and we were unable to recover it. 00:30:03.268 [2024-07-14 04:02:22.083247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.268 [2024-07-14 04:02:22.083541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.268 [2024-07-14 04:02:22.083594] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.268 qpair failed and we were unable to recover it. 
00:30:03.268 [2024-07-14 04:02:22.083829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.268 [2024-07-14 04:02:22.084022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.268 [2024-07-14 04:02:22.084051] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.268 qpair failed and we were unable to recover it. 00:30:03.268 [2024-07-14 04:02:22.084247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.268 [2024-07-14 04:02:22.084458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.268 [2024-07-14 04:02:22.084483] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.268 qpair failed and we were unable to recover it. 00:30:03.268 [2024-07-14 04:02:22.084684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.268 [2024-07-14 04:02:22.084904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.268 [2024-07-14 04:02:22.084933] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.268 qpair failed and we were unable to recover it. 00:30:03.268 [2024-07-14 04:02:22.085132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.268 [2024-07-14 04:02:22.085342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.268 [2024-07-14 04:02:22.085382] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.268 qpair failed and we were unable to recover it. 00:30:03.268 [2024-07-14 04:02:22.085607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.268 [2024-07-14 04:02:22.085807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.268 [2024-07-14 04:02:22.085832] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.268 qpair failed and we were unable to recover it. 00:30:03.268 [2024-07-14 04:02:22.086017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.268 [2024-07-14 04:02:22.086216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.268 [2024-07-14 04:02:22.086244] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.268 qpair failed and we were unable to recover it. 00:30:03.268 [2024-07-14 04:02:22.086446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.268 [2024-07-14 04:02:22.086636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.268 [2024-07-14 04:02:22.086664] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.268 qpair failed and we were unable to recover it. 
00:30:03.268 [2024-07-14 04:02:22.086863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.268 [2024-07-14 04:02:22.087063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.268 [2024-07-14 04:02:22.087103] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.268 qpair failed and we were unable to recover it. 00:30:03.268 [2024-07-14 04:02:22.087306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.268 [2024-07-14 04:02:22.087478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.268 [2024-07-14 04:02:22.087503] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.268 qpair failed and we were unable to recover it. 00:30:03.268 [2024-07-14 04:02:22.087707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.268 [2024-07-14 04:02:22.087908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.268 [2024-07-14 04:02:22.087938] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.268 qpair failed and we were unable to recover it. 00:30:03.268 [2024-07-14 04:02:22.088128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.268 [2024-07-14 04:02:22.088333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.268 [2024-07-14 04:02:22.088358] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.268 qpair failed and we were unable to recover it. 00:30:03.268 [2024-07-14 04:02:22.088508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.268 [2024-07-14 04:02:22.088664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.268 [2024-07-14 04:02:22.088689] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.268 qpair failed and we were unable to recover it. 00:30:03.268 [2024-07-14 04:02:22.088870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.268 [2024-07-14 04:02:22.089053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.268 [2024-07-14 04:02:22.089079] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.268 qpair failed and we were unable to recover it. 00:30:03.268 [2024-07-14 04:02:22.089265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.268 [2024-07-14 04:02:22.089417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.268 [2024-07-14 04:02:22.089442] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.268 qpair failed and we were unable to recover it. 
00:30:03.268 [2024-07-14 04:02:22.089598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.268 [2024-07-14 04:02:22.089820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.268 [2024-07-14 04:02:22.089848] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.268 qpair failed and we were unable to recover it. 00:30:03.268 [2024-07-14 04:02:22.090081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.268 [2024-07-14 04:02:22.090252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.268 [2024-07-14 04:02:22.090279] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.268 qpair failed and we were unable to recover it. 00:30:03.268 [2024-07-14 04:02:22.090450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.268 [2024-07-14 04:02:22.090644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.268 [2024-07-14 04:02:22.090672] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.268 qpair failed and we were unable to recover it. 00:30:03.268 [2024-07-14 04:02:22.090890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.268 [2024-07-14 04:02:22.091059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.268 [2024-07-14 04:02:22.091089] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.268 qpair failed and we were unable to recover it. 00:30:03.268 [2024-07-14 04:02:22.091284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.269 [2024-07-14 04:02:22.091489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.269 [2024-07-14 04:02:22.091515] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.269 qpair failed and we were unable to recover it. 00:30:03.269 [2024-07-14 04:02:22.091668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.269 [2024-07-14 04:02:22.091839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.269 [2024-07-14 04:02:22.091864] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.269 qpair failed and we were unable to recover it. 00:30:03.269 [2024-07-14 04:02:22.092078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.269 [2024-07-14 04:02:22.092271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.269 [2024-07-14 04:02:22.092298] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.269 qpair failed and we were unable to recover it. 
00:30:03.269 [2024-07-14 04:02:22.092492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.269 [2024-07-14 04:02:22.092712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.269 [2024-07-14 04:02:22.092737] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.269 qpair failed and we were unable to recover it. 00:30:03.269 [2024-07-14 04:02:22.092941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.269 [2024-07-14 04:02:22.093123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.269 [2024-07-14 04:02:22.093149] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.269 qpair failed and we were unable to recover it. 00:30:03.269 [2024-07-14 04:02:22.093297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.269 [2024-07-14 04:02:22.093488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.269 [2024-07-14 04:02:22.093517] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.269 qpair failed and we were unable to recover it. 00:30:03.269 [2024-07-14 04:02:22.093715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.269 [2024-07-14 04:02:22.093915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.269 [2024-07-14 04:02:22.093944] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.269 qpair failed and we were unable to recover it. 00:30:03.269 [2024-07-14 04:02:22.094140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.269 [2024-07-14 04:02:22.094366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.269 [2024-07-14 04:02:22.094394] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.269 qpair failed and we were unable to recover it. 00:30:03.269 [2024-07-14 04:02:22.094595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.269 [2024-07-14 04:02:22.094788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.269 [2024-07-14 04:02:22.094815] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.269 qpair failed and we were unable to recover it. 00:30:03.269 [2024-07-14 04:02:22.095050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.269 [2024-07-14 04:02:22.095226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.269 [2024-07-14 04:02:22.095250] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.269 qpair failed and we were unable to recover it. 
00:30:03.269 [2024-07-14 04:02:22.095400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.269 [2024-07-14 04:02:22.095625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.269 [2024-07-14 04:02:22.095653] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.269 qpair failed and we were unable to recover it. 00:30:03.269 [2024-07-14 04:02:22.095821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.269 [2024-07-14 04:02:22.096003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.269 [2024-07-14 04:02:22.096032] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.269 qpair failed and we were unable to recover it. 00:30:03.269 [2024-07-14 04:02:22.096255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.269 [2024-07-14 04:02:22.096549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.269 [2024-07-14 04:02:22.096601] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.269 qpair failed and we were unable to recover it. 00:30:03.269 [2024-07-14 04:02:22.096831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.269 [2024-07-14 04:02:22.097009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.269 [2024-07-14 04:02:22.097038] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.269 qpair failed and we were unable to recover it. 00:30:03.269 [2024-07-14 04:02:22.097228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.269 [2024-07-14 04:02:22.097428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.269 [2024-07-14 04:02:22.097453] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.269 qpair failed and we were unable to recover it. 00:30:03.269 [2024-07-14 04:02:22.097637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.269 [2024-07-14 04:02:22.097815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.269 [2024-07-14 04:02:22.097841] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.269 qpair failed and we were unable to recover it. 00:30:03.269 [2024-07-14 04:02:22.098048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.269 [2024-07-14 04:02:22.098220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.269 [2024-07-14 04:02:22.098247] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.269 qpair failed and we were unable to recover it. 
00:30:03.269 [2024-07-14 04:02:22.098446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.269 [2024-07-14 04:02:22.098600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.269 [2024-07-14 04:02:22.098626] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.269 qpair failed and we were unable to recover it. 00:30:03.269 [2024-07-14 04:02:22.098779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.269 [2024-07-14 04:02:22.099009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.269 [2024-07-14 04:02:22.099039] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.269 qpair failed and we were unable to recover it. 00:30:03.269 [2024-07-14 04:02:22.099280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.269 [2024-07-14 04:02:22.099639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.269 [2024-07-14 04:02:22.099702] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.269 qpair failed and we were unable to recover it. 00:30:03.269 [2024-07-14 04:02:22.099904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.269 [2024-07-14 04:02:22.100096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.269 [2024-07-14 04:02:22.100124] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.269 qpair failed and we were unable to recover it. 00:30:03.269 [2024-07-14 04:02:22.100330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.269 [2024-07-14 04:02:22.100534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.269 [2024-07-14 04:02:22.100559] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.269 qpair failed and we were unable to recover it. 00:30:03.269 [2024-07-14 04:02:22.100763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.269 [2024-07-14 04:02:22.100942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.269 [2024-07-14 04:02:22.100969] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.269 qpair failed and we were unable to recover it. 00:30:03.269 [2024-07-14 04:02:22.101154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.269 [2024-07-14 04:02:22.101305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.269 [2024-07-14 04:02:22.101331] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.269 qpair failed and we were unable to recover it. 
00:30:03.269 [2024-07-14 04:02:22.101540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.269 [2024-07-14 04:02:22.101738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.269 [2024-07-14 04:02:22.101766] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.269 qpair failed and we were unable to recover it. 00:30:03.269 [2024-07-14 04:02:22.101994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.269 [2024-07-14 04:02:22.102175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.269 [2024-07-14 04:02:22.102200] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.269 qpair failed and we were unable to recover it. 00:30:03.269 [2024-07-14 04:02:22.102377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.269 [2024-07-14 04:02:22.102578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.269 [2024-07-14 04:02:22.102621] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.269 qpair failed and we were unable to recover it. 00:30:03.269 [2024-07-14 04:02:22.102798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.269 [2024-07-14 04:02:22.102964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.269 [2024-07-14 04:02:22.102993] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.269 qpair failed and we were unable to recover it. 00:30:03.269 [2024-07-14 04:02:22.103196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.269 [2024-07-14 04:02:22.103365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.269 [2024-07-14 04:02:22.103394] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.269 qpair failed and we were unable to recover it. 00:30:03.269 [2024-07-14 04:02:22.103601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.270 [2024-07-14 04:02:22.103802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.270 [2024-07-14 04:02:22.103830] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.270 qpair failed and we were unable to recover it. 00:30:03.270 [2024-07-14 04:02:22.104049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.270 [2024-07-14 04:02:22.104271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.270 [2024-07-14 04:02:22.104299] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.270 qpair failed and we were unable to recover it. 
00:30:03.270 [2024-07-14 04:02:22.104485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.270 [2024-07-14 04:02:22.104691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.270 [2024-07-14 04:02:22.104731] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.270 qpair failed and we were unable to recover it. 00:30:03.270 [2024-07-14 04:02:22.104964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.270 [2024-07-14 04:02:22.105118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.270 [2024-07-14 04:02:22.105144] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.270 qpair failed and we were unable to recover it. 00:30:03.270 [2024-07-14 04:02:22.105350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.270 [2024-07-14 04:02:22.105554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.270 [2024-07-14 04:02:22.105579] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.270 qpair failed and we were unable to recover it. 00:30:03.270 [2024-07-14 04:02:22.105759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.270 [2024-07-14 04:02:22.105942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.270 [2024-07-14 04:02:22.105968] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.270 qpair failed and we were unable to recover it. 00:30:03.270 [2024-07-14 04:02:22.106117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.270 [2024-07-14 04:02:22.106322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.270 [2024-07-14 04:02:22.106352] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.270 qpair failed and we were unable to recover it. 00:30:03.270 [2024-07-14 04:02:22.106552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.270 [2024-07-14 04:02:22.106753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.270 [2024-07-14 04:02:22.106780] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.270 qpair failed and we were unable to recover it. 00:30:03.270 [2024-07-14 04:02:22.106960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.270 [2024-07-14 04:02:22.107142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.270 [2024-07-14 04:02:22.107167] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.270 qpair failed and we were unable to recover it. 
00:30:03.270 [2024-07-14 04:02:22.107343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.270 [2024-07-14 04:02:22.107544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.270 [2024-07-14 04:02:22.107569] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.270 qpair failed and we were unable to recover it. 00:30:03.270 [2024-07-14 04:02:22.107752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.270 [2024-07-14 04:02:22.107952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.270 [2024-07-14 04:02:22.107983] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.270 qpair failed and we were unable to recover it. 00:30:03.270 [2024-07-14 04:02:22.108193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.270 [2024-07-14 04:02:22.108340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.270 [2024-07-14 04:02:22.108365] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.270 qpair failed and we were unable to recover it. 00:30:03.270 [2024-07-14 04:02:22.108558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.270 [2024-07-14 04:02:22.108736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.270 [2024-07-14 04:02:22.108762] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.270 qpair failed and we were unable to recover it. 00:30:03.270 [2024-07-14 04:02:22.108945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.270 [2024-07-14 04:02:22.109096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.270 [2024-07-14 04:02:22.109121] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.270 qpair failed and we were unable to recover it. 00:30:03.270 [2024-07-14 04:02:22.109335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.270 [2024-07-14 04:02:22.109483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.270 [2024-07-14 04:02:22.109509] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.270 qpair failed and we were unable to recover it. 00:30:03.270 [2024-07-14 04:02:22.109686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.270 [2024-07-14 04:02:22.109858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.270 [2024-07-14 04:02:22.109890] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.270 qpair failed and we were unable to recover it. 
00:30:03.270 [2024-07-14 04:02:22.110046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.270 [2024-07-14 04:02:22.110226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.270 [2024-07-14 04:02:22.110252] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.270 qpair failed and we were unable to recover it. 00:30:03.270 [2024-07-14 04:02:22.110403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.270 [2024-07-14 04:02:22.110578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.270 [2024-07-14 04:02:22.110603] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.270 qpair failed and we were unable to recover it. 00:30:03.270 [2024-07-14 04:02:22.110760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.270 [2024-07-14 04:02:22.110971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.270 [2024-07-14 04:02:22.111000] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.270 qpair failed and we were unable to recover it. 00:30:03.270 [2024-07-14 04:02:22.111195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.270 [2024-07-14 04:02:22.111396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.270 [2024-07-14 04:02:22.111421] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.270 qpair failed and we were unable to recover it. 00:30:03.270 [2024-07-14 04:02:22.111603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.270 [2024-07-14 04:02:22.111781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.270 [2024-07-14 04:02:22.111806] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.270 qpair failed and we were unable to recover it. 00:30:03.270 [2024-07-14 04:02:22.112013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.270 [2024-07-14 04:02:22.112206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.270 [2024-07-14 04:02:22.112234] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.270 qpair failed and we were unable to recover it. 00:30:03.270 [2024-07-14 04:02:22.112434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.270 [2024-07-14 04:02:22.112670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.270 [2024-07-14 04:02:22.112695] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.270 qpair failed and we were unable to recover it. 
00:30:03.270 [2024-07-14 04:02:22.112879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.270 [2024-07-14 04:02:22.113035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.270 [2024-07-14 04:02:22.113060] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.270 qpair failed and we were unable to recover it. 00:30:03.270 [2024-07-14 04:02:22.113212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.270 [2024-07-14 04:02:22.113384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.270 [2024-07-14 04:02:22.113409] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.270 qpair failed and we were unable to recover it. 00:30:03.270 [2024-07-14 04:02:22.113586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.270 [2024-07-14 04:02:22.113812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.270 [2024-07-14 04:02:22.113840] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.270 qpair failed and we were unable to recover it. 00:30:03.270 [2024-07-14 04:02:22.114016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.270 [2024-07-14 04:02:22.114208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.270 [2024-07-14 04:02:22.114237] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.270 qpair failed and we were unable to recover it. 00:30:03.270 [2024-07-14 04:02:22.114434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.270 [2024-07-14 04:02:22.114630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.270 [2024-07-14 04:02:22.114658] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.270 qpair failed and we were unable to recover it. 00:30:03.270 [2024-07-14 04:02:22.114844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.270 [2024-07-14 04:02:22.115050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.271 [2024-07-14 04:02:22.115076] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.271 qpair failed and we were unable to recover it. 00:30:03.271 [2024-07-14 04:02:22.115233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.271 [2024-07-14 04:02:22.115416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.271 [2024-07-14 04:02:22.115443] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.271 qpair failed and we were unable to recover it. 
00:30:03.271 [2024-07-14 04:02:22.115657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.271 [2024-07-14 04:02:22.115885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.271 [2024-07-14 04:02:22.115915] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.271 qpair failed and we were unable to recover it. 00:30:03.271 [2024-07-14 04:02:22.116140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.271 [2024-07-14 04:02:22.116370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.271 [2024-07-14 04:02:22.116398] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.271 qpair failed and we were unable to recover it. 00:30:03.271 [2024-07-14 04:02:22.116600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.271 [2024-07-14 04:02:22.116776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.271 [2024-07-14 04:02:22.116801] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.271 qpair failed and we were unable to recover it. 00:30:03.271 [2024-07-14 04:02:22.116969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.271 [2024-07-14 04:02:22.117166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.271 [2024-07-14 04:02:22.117194] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.271 qpair failed and we were unable to recover it. 00:30:03.271 [2024-07-14 04:02:22.117361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.271 [2024-07-14 04:02:22.117539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.271 [2024-07-14 04:02:22.117567] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.271 qpair failed and we were unable to recover it. 00:30:03.271 [2024-07-14 04:02:22.117760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.271 [2024-07-14 04:02:22.117931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.271 [2024-07-14 04:02:22.117960] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.271 qpair failed and we were unable to recover it. 00:30:03.271 [2024-07-14 04:02:22.118158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.271 [2024-07-14 04:02:22.118335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.271 [2024-07-14 04:02:22.118360] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.271 qpair failed and we were unable to recover it. 
00:30:03.271 [2024-07-14 04:02:22.118535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.271 [2024-07-14 04:02:22.118705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.271 [2024-07-14 04:02:22.118733] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.271 qpair failed and we were unable to recover it. 00:30:03.271 [2024-07-14 04:02:22.118972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.271 [2024-07-14 04:02:22.119165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.271 [2024-07-14 04:02:22.119193] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.271 qpair failed and we were unable to recover it. 00:30:03.271 [2024-07-14 04:02:22.119416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.271 [2024-07-14 04:02:22.119592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.271 [2024-07-14 04:02:22.119620] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.271 qpair failed and we were unable to recover it. 00:30:03.271 [2024-07-14 04:02:22.119792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.271 [2024-07-14 04:02:22.120029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.271 [2024-07-14 04:02:22.120058] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.271 qpair failed and we were unable to recover it. 00:30:03.271 [2024-07-14 04:02:22.120293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.271 [2024-07-14 04:02:22.120604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.271 [2024-07-14 04:02:22.120654] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.271 qpair failed and we were unable to recover it. 00:30:03.271 [2024-07-14 04:02:22.120849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.271 [2024-07-14 04:02:22.121059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.271 [2024-07-14 04:02:22.121084] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.271 qpair failed and we were unable to recover it. 00:30:03.271 [2024-07-14 04:02:22.121246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.271 [2024-07-14 04:02:22.121422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.271 [2024-07-14 04:02:22.121447] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.271 qpair failed and we were unable to recover it. 
00:30:03.271 [2024-07-14 04:02:22.121626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.271 [2024-07-14 04:02:22.121802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.271 [2024-07-14 04:02:22.121832] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.271 qpair failed and we were unable to recover it. 00:30:03.271 [2024-07-14 04:02:22.122034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.271 [2024-07-14 04:02:22.122256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.271 [2024-07-14 04:02:22.122281] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.271 qpair failed and we were unable to recover it. 00:30:03.271 [2024-07-14 04:02:22.122454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.271 [2024-07-14 04:02:22.122597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.271 [2024-07-14 04:02:22.122622] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.271 qpair failed and we were unable to recover it. 00:30:03.271 [2024-07-14 04:02:22.122774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.271 [2024-07-14 04:02:22.122981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.271 [2024-07-14 04:02:22.123008] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.271 qpair failed and we were unable to recover it. 00:30:03.271 [2024-07-14 04:02:22.123153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.271 [2024-07-14 04:02:22.123358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.271 [2024-07-14 04:02:22.123383] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.271 qpair failed and we were unable to recover it. 00:30:03.271 [2024-07-14 04:02:22.123585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.271 [2024-07-14 04:02:22.123794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.271 [2024-07-14 04:02:22.123821] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.271 qpair failed and we were unable to recover it. 00:30:03.271 [2024-07-14 04:02:22.124029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.271 [2024-07-14 04:02:22.124232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.271 [2024-07-14 04:02:22.124260] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.271 qpair failed and we were unable to recover it. 
00:30:03.271 [2024-07-14 04:02:22.124451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.271 [2024-07-14 04:02:22.124649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.271 [2024-07-14 04:02:22.124676] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.271 qpair failed and we were unable to recover it. 00:30:03.271 [2024-07-14 04:02:22.124883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.271 [2024-07-14 04:02:22.125065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.272 [2024-07-14 04:02:22.125090] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.272 qpair failed and we were unable to recover it. 00:30:03.272 [2024-07-14 04:02:22.125333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.272 [2024-07-14 04:02:22.125670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.272 [2024-07-14 04:02:22.125721] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.272 qpair failed and we were unable to recover it. 00:30:03.272 [2024-07-14 04:02:22.125955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.272 [2024-07-14 04:02:22.126180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.272 [2024-07-14 04:02:22.126206] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.272 qpair failed and we were unable to recover it. 00:30:03.272 [2024-07-14 04:02:22.126404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.272 [2024-07-14 04:02:22.126582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.272 [2024-07-14 04:02:22.126606] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.272 qpair failed and we were unable to recover it. 00:30:03.272 [2024-07-14 04:02:22.126776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.272 [2024-07-14 04:02:22.126996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.272 [2024-07-14 04:02:22.127025] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.272 qpair failed and we were unable to recover it. 00:30:03.272 [2024-07-14 04:02:22.127258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.272 [2024-07-14 04:02:22.127438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.272 [2024-07-14 04:02:22.127463] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.272 qpair failed and we were unable to recover it. 
00:30:03.272 [2024-07-14 04:02:22.127643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.272 [2024-07-14 04:02:22.127819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.272 [2024-07-14 04:02:22.127844] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.272 qpair failed and we were unable to recover it. 00:30:03.272 [2024-07-14 04:02:22.128017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.272 [2024-07-14 04:02:22.128199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.272 [2024-07-14 04:02:22.128226] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.272 qpair failed and we were unable to recover it. 00:30:03.272 [2024-07-14 04:02:22.128430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.272 [2024-07-14 04:02:22.128611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.272 [2024-07-14 04:02:22.128636] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.272 qpair failed and we were unable to recover it. 00:30:03.272 [2024-07-14 04:02:22.128835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.272 [2024-07-14 04:02:22.129043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.272 [2024-07-14 04:02:22.129069] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.272 qpair failed and we were unable to recover it. 00:30:03.272 [2024-07-14 04:02:22.129244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.272 [2024-07-14 04:02:22.129436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.272 [2024-07-14 04:02:22.129466] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.272 qpair failed and we were unable to recover it. 00:30:03.272 [2024-07-14 04:02:22.129694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.272 [2024-07-14 04:02:22.129878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.272 [2024-07-14 04:02:22.129904] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.272 qpair failed and we were unable to recover it. 00:30:03.272 [2024-07-14 04:02:22.130107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.272 [2024-07-14 04:02:22.130316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.272 [2024-07-14 04:02:22.130341] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.272 qpair failed and we were unable to recover it. 
00:30:03.272 [2024-07-14 04:02:22.130552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.272 [2024-07-14 04:02:22.130742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.272 [2024-07-14 04:02:22.130769] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.272 qpair failed and we were unable to recover it. 00:30:03.272 [2024-07-14 04:02:22.130970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.272 [2024-07-14 04:02:22.131140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.272 [2024-07-14 04:02:22.131168] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.272 qpair failed and we were unable to recover it. 00:30:03.272 [2024-07-14 04:02:22.131337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.272 [2024-07-14 04:02:22.131568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.272 [2024-07-14 04:02:22.131594] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.272 qpair failed and we were unable to recover it. 00:30:03.272 [2024-07-14 04:02:22.131768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.272 [2024-07-14 04:02:22.131990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.272 [2024-07-14 04:02:22.132019] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.272 qpair failed and we were unable to recover it. 00:30:03.272 [2024-07-14 04:02:22.132221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.272 [2024-07-14 04:02:22.132416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.272 [2024-07-14 04:02:22.132443] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.272 qpair failed and we were unable to recover it. 00:30:03.272 [2024-07-14 04:02:22.132614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.272 [2024-07-14 04:02:22.132841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.272 [2024-07-14 04:02:22.132881] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.272 qpair failed and we were unable to recover it. 00:30:03.272 [2024-07-14 04:02:22.133092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.272 [2024-07-14 04:02:22.133268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.272 [2024-07-14 04:02:22.133293] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.272 qpair failed and we were unable to recover it. 
00:30:03.272 [2024-07-14 04:02:22.133470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.272 [2024-07-14 04:02:22.133627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.272 [2024-07-14 04:02:22.133651] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.272 qpair failed and we were unable to recover it. 00:30:03.272 [2024-07-14 04:02:22.133804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.272 [2024-07-14 04:02:22.133974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.272 [2024-07-14 04:02:22.134000] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.272 qpair failed and we were unable to recover it. 00:30:03.272 [2024-07-14 04:02:22.134194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.272 [2024-07-14 04:02:22.134392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.272 [2024-07-14 04:02:22.134420] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.272 qpair failed and we were unable to recover it. 00:30:03.272 [2024-07-14 04:02:22.134606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.272 [2024-07-14 04:02:22.134783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.272 [2024-07-14 04:02:22.134807] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.272 qpair failed and we were unable to recover it. 00:30:03.272 [2024-07-14 04:02:22.134954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.272 [2024-07-14 04:02:22.135103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.272 [2024-07-14 04:02:22.135128] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.272 qpair failed and we were unable to recover it. 00:30:03.272 [2024-07-14 04:02:22.135313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.272 [2024-07-14 04:02:22.135492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.272 [2024-07-14 04:02:22.135518] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.272 qpair failed and we were unable to recover it. 00:30:03.272 [2024-07-14 04:02:22.135669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.272 [2024-07-14 04:02:22.135876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.272 [2024-07-14 04:02:22.135902] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.272 qpair failed and we were unable to recover it. 
00:30:03.273 [2024-07-14 04:02:22.136057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.273 [2024-07-14 04:02:22.136260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.273 [2024-07-14 04:02:22.136289] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.273 qpair failed and we were unable to recover it. 00:30:03.273 [2024-07-14 04:02:22.136479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.273 [2024-07-14 04:02:22.136637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.273 [2024-07-14 04:02:22.136667] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.273 qpair failed and we were unable to recover it. 00:30:03.273 [2024-07-14 04:02:22.136816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.273 [2024-07-14 04:02:22.136993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.273 [2024-07-14 04:02:22.137019] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.273 qpair failed and we were unable to recover it. 00:30:03.273 [2024-07-14 04:02:22.137206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.273 [2024-07-14 04:02:22.137388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.273 [2024-07-14 04:02:22.137413] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.273 qpair failed and we were unable to recover it. 00:30:03.273 [2024-07-14 04:02:22.137559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.273 [2024-07-14 04:02:22.137737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.273 [2024-07-14 04:02:22.137762] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.273 qpair failed and we were unable to recover it. 00:30:03.273 [2024-07-14 04:02:22.137940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.273 [2024-07-14 04:02:22.138088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.273 [2024-07-14 04:02:22.138113] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.273 qpair failed and we were unable to recover it. 00:30:03.273 [2024-07-14 04:02:22.138332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.273 [2024-07-14 04:02:22.138476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.273 [2024-07-14 04:02:22.138501] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.273 qpair failed and we were unable to recover it. 
00:30:03.273 [2024-07-14 04:02:22.138711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.273 [2024-07-14 04:02:22.138908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.273 [2024-07-14 04:02:22.138934] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.273 qpair failed and we were unable to recover it. 00:30:03.273 [2024-07-14 04:02:22.139168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.273 [2024-07-14 04:02:22.139353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.273 [2024-07-14 04:02:22.139378] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.273 qpair failed and we were unable to recover it. 00:30:03.273 [2024-07-14 04:02:22.139531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.273 [2024-07-14 04:02:22.139684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.273 [2024-07-14 04:02:22.139709] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.273 qpair failed and we were unable to recover it. 00:30:03.273 [2024-07-14 04:02:22.139893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.273 [2024-07-14 04:02:22.140097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.273 [2024-07-14 04:02:22.140123] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.273 qpair failed and we were unable to recover it. 00:30:03.273 [2024-07-14 04:02:22.140330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.273 [2024-07-14 04:02:22.140481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.273 [2024-07-14 04:02:22.140511] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.273 qpair failed and we were unable to recover it. 00:30:03.273 [2024-07-14 04:02:22.140718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.273 [2024-07-14 04:02:22.140899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.273 [2024-07-14 04:02:22.140925] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.273 qpair failed and we were unable to recover it. 00:30:03.273 [2024-07-14 04:02:22.141109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.273 [2024-07-14 04:02:22.141312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.273 [2024-07-14 04:02:22.141340] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.273 qpair failed and we were unable to recover it. 
00:30:03.273 [2024-07-14 04:02:22.141505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.273 [2024-07-14 04:02:22.141668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.273 [2024-07-14 04:02:22.141697] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.273 qpair failed and we were unable to recover it. 00:30:03.273 [2024-07-14 04:02:22.141895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.273 [2024-07-14 04:02:22.142102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.273 [2024-07-14 04:02:22.142131] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.273 qpair failed and we were unable to recover it. 00:30:03.273 [2024-07-14 04:02:22.142330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.273 [2024-07-14 04:02:22.142525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.273 [2024-07-14 04:02:22.142553] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.273 qpair failed and we were unable to recover it. 00:30:03.273 [2024-07-14 04:02:22.142754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.273 [2024-07-14 04:02:22.142938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.273 [2024-07-14 04:02:22.142964] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.273 qpair failed and we were unable to recover it. 00:30:03.273 [2024-07-14 04:02:22.143175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.273 [2024-07-14 04:02:22.143383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.273 [2024-07-14 04:02:22.143410] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.273 qpair failed and we were unable to recover it. 00:30:03.273 [2024-07-14 04:02:22.143586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.273 [2024-07-14 04:02:22.143785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.273 [2024-07-14 04:02:22.143810] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.273 qpair failed and we were unable to recover it. 00:30:03.273 [2024-07-14 04:02:22.144014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.273 [2024-07-14 04:02:22.144212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.273 [2024-07-14 04:02:22.144239] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.273 qpair failed and we were unable to recover it. 
00:30:03.273 [2024-07-14 04:02:22.144436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.273 [2024-07-14 04:02:22.144580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.273 [2024-07-14 04:02:22.144609] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.273 qpair failed and we were unable to recover it. 00:30:03.273 [2024-07-14 04:02:22.144818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.273 [2024-07-14 04:02:22.145053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.273 [2024-07-14 04:02:22.145082] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.273 qpair failed and we were unable to recover it. 00:30:03.273 [2024-07-14 04:02:22.145355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.273 [2024-07-14 04:02:22.145666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.273 [2024-07-14 04:02:22.145718] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.273 qpair failed and we were unable to recover it. 00:30:03.273 [2024-07-14 04:02:22.145943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.273 [2024-07-14 04:02:22.146135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.273 [2024-07-14 04:02:22.146164] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.273 qpair failed and we were unable to recover it. 00:30:03.273 [2024-07-14 04:02:22.146368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.273 [2024-07-14 04:02:22.146551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.273 [2024-07-14 04:02:22.146576] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.273 qpair failed and we were unable to recover it. 00:30:03.273 [2024-07-14 04:02:22.146731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.273 [2024-07-14 04:02:22.146930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.273 [2024-07-14 04:02:22.146956] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.273 qpair failed and we were unable to recover it. 00:30:03.273 [2024-07-14 04:02:22.147133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.273 [2024-07-14 04:02:22.147312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.273 [2024-07-14 04:02:22.147337] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.273 qpair failed and we were unable to recover it. 
00:30:03.273 [2024-07-14 04:02:22.147515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.273 [2024-07-14 04:02:22.147742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.273 [2024-07-14 04:02:22.147770] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.273 qpair failed and we were unable to recover it. 00:30:03.274 [2024-07-14 04:02:22.147965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.274 [2024-07-14 04:02:22.148169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.274 [2024-07-14 04:02:22.148194] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.274 qpair failed and we were unable to recover it. 00:30:03.274 [2024-07-14 04:02:22.148368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.274 [2024-07-14 04:02:22.148593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.274 [2024-07-14 04:02:22.148620] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.274 qpair failed and we were unable to recover it. 00:30:03.274 [2024-07-14 04:02:22.148824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.274 [2024-07-14 04:02:22.149050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.274 [2024-07-14 04:02:22.149076] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.274 qpair failed and we were unable to recover it. 00:30:03.274 [2024-07-14 04:02:22.149260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.274 [2024-07-14 04:02:22.149432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.274 [2024-07-14 04:02:22.149456] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.274 qpair failed and we were unable to recover it. 00:30:03.274 [2024-07-14 04:02:22.149661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.274 [2024-07-14 04:02:22.149838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.274 [2024-07-14 04:02:22.149864] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.274 qpair failed and we were unable to recover it. 00:30:03.274 [2024-07-14 04:02:22.150020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.274 [2024-07-14 04:02:22.150225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.274 [2024-07-14 04:02:22.150251] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.274 qpair failed and we were unable to recover it. 
00:30:03.274 [2024-07-14 04:02:22.150430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.274 [2024-07-14 04:02:22.150608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.274 [2024-07-14 04:02:22.150633] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.274 qpair failed and we were unable to recover it. 00:30:03.274 [2024-07-14 04:02:22.150830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.274 [2024-07-14 04:02:22.151049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.274 [2024-07-14 04:02:22.151078] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.274 qpair failed and we were unable to recover it. 00:30:03.274 [2024-07-14 04:02:22.151248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.274 [2024-07-14 04:02:22.151427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.274 [2024-07-14 04:02:22.151452] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.274 qpair failed and we were unable to recover it. 00:30:03.274 [2024-07-14 04:02:22.151657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.274 [2024-07-14 04:02:22.151863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.274 [2024-07-14 04:02:22.151897] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.274 qpair failed and we were unable to recover it. 00:30:03.274 [2024-07-14 04:02:22.152050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.274 [2024-07-14 04:02:22.152208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.274 [2024-07-14 04:02:22.152252] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.274 qpair failed and we were unable to recover it. 00:30:03.274 [2024-07-14 04:02:22.152450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.274 [2024-07-14 04:02:22.152641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.274 [2024-07-14 04:02:22.152669] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.274 qpair failed and we were unable to recover it. 00:30:03.274 [2024-07-14 04:02:22.152877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.274 [2024-07-14 04:02:22.153056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.274 [2024-07-14 04:02:22.153083] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.274 qpair failed and we were unable to recover it. 
00:30:03.274 [2024-07-14 04:02:22.153317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.274 [2024-07-14 04:02:22.153540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.274 [2024-07-14 04:02:22.153566] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.274 qpair failed and we were unable to recover it. 00:30:03.274 [2024-07-14 04:02:22.153766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.274 [2024-07-14 04:02:22.153955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.274 [2024-07-14 04:02:22.153984] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.274 qpair failed and we were unable to recover it. 00:30:03.274 [2024-07-14 04:02:22.154162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.274 [2024-07-14 04:02:22.154315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.274 [2024-07-14 04:02:22.154340] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.274 qpair failed and we were unable to recover it. 00:30:03.274 [2024-07-14 04:02:22.154517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.274 [2024-07-14 04:02:22.154699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.274 [2024-07-14 04:02:22.154739] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.274 qpair failed and we were unable to recover it. 00:30:03.274 [2024-07-14 04:02:22.154976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.274 [2024-07-14 04:02:22.155170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.274 [2024-07-14 04:02:22.155198] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.274 qpair failed and we were unable to recover it. 00:30:03.274 [2024-07-14 04:02:22.155401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.274 [2024-07-14 04:02:22.155575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.274 [2024-07-14 04:02:22.155600] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.274 qpair failed and we were unable to recover it. 00:30:03.274 [2024-07-14 04:02:22.155754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.274 [2024-07-14 04:02:22.155976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.274 [2024-07-14 04:02:22.156004] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.274 qpair failed and we were unable to recover it. 
00:30:03.274 [2024-07-14 04:02:22.156210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.274 [2024-07-14 04:02:22.156427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.274 [2024-07-14 04:02:22.156451] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.274 qpair failed and we were unable to recover it. 00:30:03.274 [2024-07-14 04:02:22.156636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.274 [2024-07-14 04:02:22.156819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.274 [2024-07-14 04:02:22.156845] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.274 qpair failed and we were unable to recover it. 00:30:03.274 [2024-07-14 04:02:22.157007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.274 [2024-07-14 04:02:22.157180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.274 [2024-07-14 04:02:22.157208] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.274 qpair failed and we were unable to recover it. 00:30:03.274 [2024-07-14 04:02:22.157436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.274 [2024-07-14 04:02:22.157604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.274 [2024-07-14 04:02:22.157632] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.274 qpair failed and we were unable to recover it. 00:30:03.274 [2024-07-14 04:02:22.157807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.274 [2024-07-14 04:02:22.157993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.274 [2024-07-14 04:02:22.158020] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.274 qpair failed and we were unable to recover it. 00:30:03.274 [2024-07-14 04:02:22.158250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.274 [2024-07-14 04:02:22.158481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.274 [2024-07-14 04:02:22.158509] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.274 qpair failed and we were unable to recover it. 00:30:03.274 [2024-07-14 04:02:22.158675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.274 [2024-07-14 04:02:22.158895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.274 [2024-07-14 04:02:22.158924] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.274 qpair failed and we were unable to recover it. 
00:30:03.274 [2024-07-14 04:02:22.159118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.275 [2024-07-14 04:02:22.159342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.275 [2024-07-14 04:02:22.159371] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.275 qpair failed and we were unable to recover it. 00:30:03.275 [2024-07-14 04:02:22.159579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.275 [2024-07-14 04:02:22.159803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.275 [2024-07-14 04:02:22.159830] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.275 qpair failed and we were unable to recover it. 00:30:03.275 [2024-07-14 04:02:22.160015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.275 [2024-07-14 04:02:22.160235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.275 [2024-07-14 04:02:22.160263] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.275 qpair failed and we were unable to recover it. 00:30:03.275 [2024-07-14 04:02:22.160616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.275 [2024-07-14 04:02:22.160848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.275 [2024-07-14 04:02:22.160880] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.275 qpair failed and we were unable to recover it. 00:30:03.275 [2024-07-14 04:02:22.161070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.275 [2024-07-14 04:02:22.161218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.275 [2024-07-14 04:02:22.161262] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.275 qpair failed and we were unable to recover it. 00:30:03.275 [2024-07-14 04:02:22.161468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.275 [2024-07-14 04:02:22.161652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.275 [2024-07-14 04:02:22.161678] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.275 qpair failed and we were unable to recover it. 00:30:03.275 [2024-07-14 04:02:22.161860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.275 [2024-07-14 04:02:22.162027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.275 [2024-07-14 04:02:22.162052] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.275 qpair failed and we were unable to recover it. 
00:30:03.275 [2024-07-14 04:02:22.162209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.275 [2024-07-14 04:02:22.162362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.275 [2024-07-14 04:02:22.162386] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.275 qpair failed and we were unable to recover it. 00:30:03.275 [2024-07-14 04:02:22.162566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.275 [2024-07-14 04:02:22.162714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.275 [2024-07-14 04:02:22.162741] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.275 qpair failed and we were unable to recover it. 00:30:03.275 [2024-07-14 04:02:22.162925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.275 [2024-07-14 04:02:22.163080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.275 [2024-07-14 04:02:22.163107] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.275 qpair failed and we were unable to recover it. 00:30:03.275 [2024-07-14 04:02:22.163303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.275 [2024-07-14 04:02:22.163525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.275 [2024-07-14 04:02:22.163551] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.275 qpair failed and we were unable to recover it. 00:30:03.275 [2024-07-14 04:02:22.163695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.275 [2024-07-14 04:02:22.163879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.275 [2024-07-14 04:02:22.163913] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.275 qpair failed and we were unable to recover it. 00:30:03.275 [2024-07-14 04:02:22.164123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.275 [2024-07-14 04:02:22.164307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.275 [2024-07-14 04:02:22.164334] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.275 qpair failed and we were unable to recover it. 00:30:03.275 [2024-07-14 04:02:22.164585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.275 [2024-07-14 04:02:22.164788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.275 [2024-07-14 04:02:22.164813] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.275 qpair failed and we were unable to recover it. 
00:30:03.275 [2024-07-14 04:02:22.165023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.275 [2024-07-14 04:02:22.165177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.275 [2024-07-14 04:02:22.165202] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.275 qpair failed and we were unable to recover it. 00:30:03.275 [2024-07-14 04:02:22.165355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.275 [2024-07-14 04:02:22.165567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.275 [2024-07-14 04:02:22.165592] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.275 qpair failed and we were unable to recover it. 00:30:03.275 [2024-07-14 04:02:22.165786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.275 [2024-07-14 04:02:22.165974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.275 [2024-07-14 04:02:22.166003] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.275 qpair failed and we were unable to recover it. 00:30:03.275 [2024-07-14 04:02:22.166204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.275 [2024-07-14 04:02:22.166360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.275 [2024-07-14 04:02:22.166385] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.275 qpair failed and we were unable to recover it. 00:30:03.275 [2024-07-14 04:02:22.166579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.275 [2024-07-14 04:02:22.166759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.275 [2024-07-14 04:02:22.166784] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.275 qpair failed and we were unable to recover it. 00:30:03.275 [2024-07-14 04:02:22.166974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.275 [2024-07-14 04:02:22.167163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.275 [2024-07-14 04:02:22.167191] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.275 qpair failed and we were unable to recover it. 00:30:03.275 [2024-07-14 04:02:22.167373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.275 [2024-07-14 04:02:22.167547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.275 [2024-07-14 04:02:22.167572] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.275 qpair failed and we were unable to recover it. 
00:30:03.275 [2024-07-14 04:02:22.167724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.275 [2024-07-14 04:02:22.167924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.275 [2024-07-14 04:02:22.167954] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.275 qpair failed and we were unable to recover it. 00:30:03.275 [2024-07-14 04:02:22.168183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.275 [2024-07-14 04:02:22.168378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.275 [2024-07-14 04:02:22.168407] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.275 qpair failed and we were unable to recover it. 00:30:03.275 [2024-07-14 04:02:22.168599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.275 [2024-07-14 04:02:22.168824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.275 [2024-07-14 04:02:22.168851] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.275 qpair failed and we were unable to recover it. 00:30:03.275 [2024-07-14 04:02:22.169080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.275 [2024-07-14 04:02:22.169284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.275 [2024-07-14 04:02:22.169309] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.275 qpair failed and we were unable to recover it. 00:30:03.275 [2024-07-14 04:02:22.169517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.275 [2024-07-14 04:02:22.169696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.275 [2024-07-14 04:02:22.169721] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.275 qpair failed and we were unable to recover it. 00:30:03.275 [2024-07-14 04:02:22.169908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.275 [2024-07-14 04:02:22.170091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.275 [2024-07-14 04:02:22.170118] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.275 qpair failed and we were unable to recover it. 00:30:03.275 [2024-07-14 04:02:22.170337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.275 [2024-07-14 04:02:22.170509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.275 [2024-07-14 04:02:22.170537] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.275 qpair failed and we were unable to recover it. 
00:30:03.275 [2024-07-14 04:02:22.170697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.275 [2024-07-14 04:02:22.170875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.275 [2024-07-14 04:02:22.170903] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.275 qpair failed and we were unable to recover it. 00:30:03.275 [2024-07-14 04:02:22.171103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.275 [2024-07-14 04:02:22.171308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.276 [2024-07-14 04:02:22.171334] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.276 qpair failed and we were unable to recover it. 00:30:03.276 [2024-07-14 04:02:22.171530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.276 [2024-07-14 04:02:22.171769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.276 [2024-07-14 04:02:22.171794] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.276 qpair failed and we were unable to recover it. 00:30:03.276 [2024-07-14 04:02:22.171951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.276 [2024-07-14 04:02:22.172104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.276 [2024-07-14 04:02:22.172129] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.276 qpair failed and we were unable to recover it. 00:30:03.276 [2024-07-14 04:02:22.172289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.276 [2024-07-14 04:02:22.172468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.276 [2024-07-14 04:02:22.172494] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.276 qpair failed and we were unable to recover it. 00:30:03.276 [2024-07-14 04:02:22.172641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.276 [2024-07-14 04:02:22.172815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.276 [2024-07-14 04:02:22.172840] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.276 qpair failed and we were unable to recover it. 00:30:03.276 [2024-07-14 04:02:22.173003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.276 [2024-07-14 04:02:22.173176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.276 [2024-07-14 04:02:22.173202] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.276 qpair failed and we were unable to recover it. 
00:30:03.276 [2024-07-14 04:02:22.173410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.276 [2024-07-14 04:02:22.173612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.276 [2024-07-14 04:02:22.173637] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.276 qpair failed and we were unable to recover it. 00:30:03.276 [2024-07-14 04:02:22.173789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.276 [2024-07-14 04:02:22.173990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.276 [2024-07-14 04:02:22.174015] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.276 qpair failed and we were unable to recover it. 00:30:03.276 [2024-07-14 04:02:22.174195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.276 [2024-07-14 04:02:22.174368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.276 [2024-07-14 04:02:22.174395] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.276 qpair failed and we were unable to recover it. 00:30:03.276 [2024-07-14 04:02:22.174581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.276 [2024-07-14 04:02:22.174736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.276 [2024-07-14 04:02:22.174761] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.276 qpair failed and we were unable to recover it. 00:30:03.276 [2024-07-14 04:02:22.174969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.276 [2024-07-14 04:02:22.175120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.276 [2024-07-14 04:02:22.175162] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.276 qpair failed and we were unable to recover it. 00:30:03.276 [2024-07-14 04:02:22.175336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.276 [2024-07-14 04:02:22.175574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.276 [2024-07-14 04:02:22.175603] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.276 qpair failed and we were unable to recover it. 00:30:03.276 [2024-07-14 04:02:22.175813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.276 [2024-07-14 04:02:22.175991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.276 [2024-07-14 04:02:22.176018] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.276 qpair failed and we were unable to recover it. 
00:30:03.276 [2024-07-14 04:02:22.176202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.276 [2024-07-14 04:02:22.176401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.276 [2024-07-14 04:02:22.176428] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.276 qpair failed and we were unable to recover it. 00:30:03.276 [2024-07-14 04:02:22.176621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.276 [2024-07-14 04:02:22.176840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.276 [2024-07-14 04:02:22.176875] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.276 qpair failed and we were unable to recover it. 00:30:03.276 [2024-07-14 04:02:22.177051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.276 [2024-07-14 04:02:22.177269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.276 [2024-07-14 04:02:22.177294] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.276 qpair failed and we were unable to recover it. 00:30:03.276 [2024-07-14 04:02:22.177479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.276 [2024-07-14 04:02:22.177680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.276 [2024-07-14 04:02:22.177709] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.276 qpair failed and we were unable to recover it. 00:30:03.276 [2024-07-14 04:02:22.177895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.276 [2024-07-14 04:02:22.178077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.276 [2024-07-14 04:02:22.178102] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.276 qpair failed and we were unable to recover it. 00:30:03.276 [2024-07-14 04:02:22.178280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.276 [2024-07-14 04:02:22.178435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.276 [2024-07-14 04:02:22.178460] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.276 qpair failed and we were unable to recover it. 00:30:03.276 [2024-07-14 04:02:22.178611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.276 [2024-07-14 04:02:22.178753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.276 [2024-07-14 04:02:22.178778] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.276 qpair failed and we were unable to recover it. 
00:30:03.276 [2024-07-14 04:02:22.178953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.276 [2024-07-14 04:02:22.179148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.276 [2024-07-14 04:02:22.179176] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.276 qpair failed and we were unable to recover it. 00:30:03.276 [2024-07-14 04:02:22.179369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.276 [2024-07-14 04:02:22.179568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.276 [2024-07-14 04:02:22.179596] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.277 qpair failed and we were unable to recover it. 00:30:03.277 [2024-07-14 04:02:22.179767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.277 [2024-07-14 04:02:22.179966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.277 [2024-07-14 04:02:22.179993] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.277 qpair failed and we were unable to recover it. 00:30:03.277 [2024-07-14 04:02:22.180177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.277 [2024-07-14 04:02:22.180331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.277 [2024-07-14 04:02:22.180358] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.277 qpair failed and we were unable to recover it. 00:30:03.277 [2024-07-14 04:02:22.180514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.277 [2024-07-14 04:02:22.180693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.277 [2024-07-14 04:02:22.180718] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.277 qpair failed and we were unable to recover it. 00:30:03.277 [2024-07-14 04:02:22.180877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.277 [2024-07-14 04:02:22.181029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.277 [2024-07-14 04:02:22.181055] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.277 qpair failed and we were unable to recover it. 00:30:03.277 [2024-07-14 04:02:22.181203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.277 [2024-07-14 04:02:22.181381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.277 [2024-07-14 04:02:22.181422] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.277 qpair failed and we were unable to recover it. 
00:30:03.277 [2024-07-14 04:02:22.181616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.277 [2024-07-14 04:02:22.181815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.277 [2024-07-14 04:02:22.181844] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.277 qpair failed and we were unable to recover it. 00:30:03.277 [2024-07-14 04:02:22.182050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.277 [2024-07-14 04:02:22.182207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.277 [2024-07-14 04:02:22.182234] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.277 qpair failed and we were unable to recover it. 00:30:03.277 [2024-07-14 04:02:22.182438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.277 [2024-07-14 04:02:22.182619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.277 [2024-07-14 04:02:22.182643] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.277 qpair failed and we were unable to recover it. 00:30:03.549 [2024-07-14 04:02:22.182876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.549 [2024-07-14 04:02:22.183076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.549 [2024-07-14 04:02:22.183107] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.549 qpair failed and we were unable to recover it. 00:30:03.549 [2024-07-14 04:02:22.183331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.549 [2024-07-14 04:02:22.183521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.549 [2024-07-14 04:02:22.183549] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.549 qpair failed and we were unable to recover it. 00:30:03.550 [2024-07-14 04:02:22.183727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.550 [2024-07-14 04:02:22.183903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.550 [2024-07-14 04:02:22.183929] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.550 qpair failed and we were unable to recover it. 00:30:03.550 [2024-07-14 04:02:22.184110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.550 [2024-07-14 04:02:22.184287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.550 [2024-07-14 04:02:22.184312] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.550 qpair failed and we were unable to recover it. 
00:30:03.550 [2024-07-14 04:02:22.184492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.550 [2024-07-14 04:02:22.184667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.550 [2024-07-14 04:02:22.184692] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.550 qpair failed and we were unable to recover it. 00:30:03.550 [2024-07-14 04:02:22.184892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.550 [2024-07-14 04:02:22.185097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.550 [2024-07-14 04:02:22.185126] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.550 qpair failed and we were unable to recover it. 00:30:03.550 [2024-07-14 04:02:22.185334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.550 [2024-07-14 04:02:22.185512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.550 [2024-07-14 04:02:22.185537] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.550 qpair failed and we were unable to recover it. 00:30:03.550 [2024-07-14 04:02:22.185718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.550 [2024-07-14 04:02:22.185943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.550 [2024-07-14 04:02:22.185969] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.550 qpair failed and we were unable to recover it. 00:30:03.550 [2024-07-14 04:02:22.186150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.550 [2024-07-14 04:02:22.186330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.550 [2024-07-14 04:02:22.186355] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.550 qpair failed and we were unable to recover it. 00:30:03.550 [2024-07-14 04:02:22.186534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.550 [2024-07-14 04:02:22.186685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.550 [2024-07-14 04:02:22.186710] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.550 qpair failed and we were unable to recover it. 00:30:03.550 [2024-07-14 04:02:22.186963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.550 [2024-07-14 04:02:22.187143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.550 [2024-07-14 04:02:22.187168] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.550 qpair failed and we were unable to recover it. 
00:30:03.550 [2024-07-14 04:02:22.187386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.550 [2024-07-14 04:02:22.187621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.550 [2024-07-14 04:02:22.187648] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.550 qpair failed and we were unable to recover it. 00:30:03.550 [2024-07-14 04:02:22.187818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.550 [2024-07-14 04:02:22.188019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.550 [2024-07-14 04:02:22.188048] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.550 qpair failed and we were unable to recover it. 00:30:03.550 [2024-07-14 04:02:22.188217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.550 [2024-07-14 04:02:22.188421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.550 [2024-07-14 04:02:22.188450] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.550 qpair failed and we were unable to recover it. 00:30:03.550 [2024-07-14 04:02:22.188671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.550 [2024-07-14 04:02:22.188851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.550 [2024-07-14 04:02:22.188887] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.550 qpair failed and we were unable to recover it. 00:30:03.550 [2024-07-14 04:02:22.189038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.550 [2024-07-14 04:02:22.189214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.550 [2024-07-14 04:02:22.189240] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.550 qpair failed and we were unable to recover it. 00:30:03.550 [2024-07-14 04:02:22.189422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.550 [2024-07-14 04:02:22.189606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.550 [2024-07-14 04:02:22.189632] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.550 qpair failed and we were unable to recover it. 00:30:03.550 [2024-07-14 04:02:22.189829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.550 [2024-07-14 04:02:22.190025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.550 [2024-07-14 04:02:22.190051] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.550 qpair failed and we were unable to recover it. 
00:30:03.550 [2024-07-14 04:02:22.190267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.550 [2024-07-14 04:02:22.190416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.550 [2024-07-14 04:02:22.190441] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.550 qpair failed and we were unable to recover it. 00:30:03.550 [2024-07-14 04:02:22.190628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.550 [2024-07-14 04:02:22.190807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.550 [2024-07-14 04:02:22.190833] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.550 qpair failed and we were unable to recover it. 00:30:03.550 [2024-07-14 04:02:22.191019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.550 [2024-07-14 04:02:22.191221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.550 [2024-07-14 04:02:22.191249] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.550 qpair failed and we were unable to recover it. 00:30:03.550 [2024-07-14 04:02:22.191439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.550 [2024-07-14 04:02:22.191605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.550 [2024-07-14 04:02:22.191633] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.550 qpair failed and we were unable to recover it. 00:30:03.550 [2024-07-14 04:02:22.191823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.550 [2024-07-14 04:02:22.192012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.550 [2024-07-14 04:02:22.192038] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.550 qpair failed and we were unable to recover it. 00:30:03.550 [2024-07-14 04:02:22.192206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.550 [2024-07-14 04:02:22.192393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.550 [2024-07-14 04:02:22.192421] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.550 qpair failed and we were unable to recover it. 00:30:03.550 [2024-07-14 04:02:22.192631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.550 [2024-07-14 04:02:22.192829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.550 [2024-07-14 04:02:22.192853] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.550 qpair failed and we were unable to recover it. 
00:30:03.550 [2024-07-14 04:02:22.193042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.550 [2024-07-14 04:02:22.193222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.550 [2024-07-14 04:02:22.193247] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.550 qpair failed and we were unable to recover it. 00:30:03.550 [2024-07-14 04:02:22.193422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.550 [2024-07-14 04:02:22.193582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.550 [2024-07-14 04:02:22.193609] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.550 qpair failed and we were unable to recover it. 00:30:03.550 [2024-07-14 04:02:22.193840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.550 [2024-07-14 04:02:22.194054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.550 [2024-07-14 04:02:22.194084] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.550 qpair failed and we were unable to recover it. 00:30:03.550 [2024-07-14 04:02:22.194279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.550 [2024-07-14 04:02:22.194486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.550 [2024-07-14 04:02:22.194513] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.550 qpair failed and we were unable to recover it. 00:30:03.550 [2024-07-14 04:02:22.194693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.550 [2024-07-14 04:02:22.194891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.550 [2024-07-14 04:02:22.194918] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.550 qpair failed and we were unable to recover it. 00:30:03.550 [2024-07-14 04:02:22.195123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.550 [2024-07-14 04:02:22.195276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.550 [2024-07-14 04:02:22.195301] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.550 qpair failed and we were unable to recover it. 00:30:03.550 [2024-07-14 04:02:22.195512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.550 [2024-07-14 04:02:22.195694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.550 [2024-07-14 04:02:22.195719] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.550 qpair failed and we were unable to recover it. 
00:30:03.550 [2024-07-14 04:02:22.195943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.551 [2024-07-14 04:02:22.196110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.551 [2024-07-14 04:02:22.196138] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.551 qpair failed and we were unable to recover it. 00:30:03.551 [2024-07-14 04:02:22.196360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.551 [2024-07-14 04:02:22.196551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.551 [2024-07-14 04:02:22.196580] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.551 qpair failed and we were unable to recover it. 00:30:03.551 [2024-07-14 04:02:22.196751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.551 [2024-07-14 04:02:22.196904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.551 [2024-07-14 04:02:22.196948] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.551 qpair failed and we were unable to recover it. 00:30:03.551 [2024-07-14 04:02:22.197169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.551 [2024-07-14 04:02:22.197360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.551 [2024-07-14 04:02:22.197387] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.551 qpair failed and we were unable to recover it. 00:30:03.551 [2024-07-14 04:02:22.197587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.551 [2024-07-14 04:02:22.197757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.551 [2024-07-14 04:02:22.197785] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.551 qpair failed and we were unable to recover it. 00:30:03.551 [2024-07-14 04:02:22.197958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.551 [2024-07-14 04:02:22.198163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.551 [2024-07-14 04:02:22.198193] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.551 qpair failed and we were unable to recover it. 00:30:03.551 [2024-07-14 04:02:22.198381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.551 [2024-07-14 04:02:22.198580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.551 [2024-07-14 04:02:22.198608] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.551 qpair failed and we were unable to recover it. 
00:30:03.551 [2024-07-14 04:02:22.198799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.551 [2024-07-14 04:02:22.198991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.551 [2024-07-14 04:02:22.199021] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.551 qpair failed and we were unable to recover it. 00:30:03.551 [2024-07-14 04:02:22.199219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.551 [2024-07-14 04:02:22.199408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.551 [2024-07-14 04:02:22.199435] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.551 qpair failed and we were unable to recover it. 00:30:03.551 [2024-07-14 04:02:22.199620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.551 [2024-07-14 04:02:22.199809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.551 [2024-07-14 04:02:22.199837] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.551 qpair failed and we were unable to recover it. 00:30:03.551 [2024-07-14 04:02:22.200080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.551 [2024-07-14 04:02:22.200258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.551 [2024-07-14 04:02:22.200283] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.551 qpair failed and we were unable to recover it. 00:30:03.551 [2024-07-14 04:02:22.200480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.551 [2024-07-14 04:02:22.200683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.551 [2024-07-14 04:02:22.200708] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.551 qpair failed and we were unable to recover it. 00:30:03.551 [2024-07-14 04:02:22.200880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.551 [2024-07-14 04:02:22.201059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.551 [2024-07-14 04:02:22.201084] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.551 qpair failed and we were unable to recover it. 00:30:03.551 [2024-07-14 04:02:22.201236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.551 [2024-07-14 04:02:22.201410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.551 [2024-07-14 04:02:22.201434] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.551 qpair failed and we were unable to recover it. 
00:30:03.551 [2024-07-14 04:02:22.201584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.551 [2024-07-14 04:02:22.201761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.551 [2024-07-14 04:02:22.201787] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.551 qpair failed and we were unable to recover it. 00:30:03.551 [2024-07-14 04:02:22.202025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.551 [2024-07-14 04:02:22.202200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.551 [2024-07-14 04:02:22.202233] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.551 qpair failed and we were unable to recover it. 00:30:03.551 [2024-07-14 04:02:22.202398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.551 [2024-07-14 04:02:22.202590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.551 [2024-07-14 04:02:22.202618] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.551 qpair failed and we were unable to recover it. 00:30:03.551 [2024-07-14 04:02:22.202838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.551 [2024-07-14 04:02:22.203018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.551 [2024-07-14 04:02:22.203047] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.551 qpair failed and we were unable to recover it. 00:30:03.551 [2024-07-14 04:02:22.203243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.551 [2024-07-14 04:02:22.203446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.551 [2024-07-14 04:02:22.203475] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.551 qpair failed and we were unable to recover it. 00:30:03.551 [2024-07-14 04:02:22.203652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.551 [2024-07-14 04:02:22.203829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.551 [2024-07-14 04:02:22.203855] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.551 qpair failed and we were unable to recover it. 00:30:03.551 [2024-07-14 04:02:22.204030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.551 [2024-07-14 04:02:22.204264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.551 [2024-07-14 04:02:22.204292] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.551 qpair failed and we were unable to recover it. 
00:30:03.551 [2024-07-14 04:02:22.204483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.551 [2024-07-14 04:02:22.204649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.551 [2024-07-14 04:02:22.204678] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.551 qpair failed and we were unable to recover it. 00:30:03.551 [2024-07-14 04:02:22.204900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.551 [2024-07-14 04:02:22.205103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.551 [2024-07-14 04:02:22.205128] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.551 qpair failed and we were unable to recover it. 00:30:03.551 [2024-07-14 04:02:22.205283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.551 [2024-07-14 04:02:22.205474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.551 [2024-07-14 04:02:22.205501] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.551 qpair failed and we were unable to recover it. 00:30:03.551 [2024-07-14 04:02:22.205649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.551 [2024-07-14 04:02:22.205863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.551 [2024-07-14 04:02:22.205898] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.551 qpair failed and we were unable to recover it. 00:30:03.551 [2024-07-14 04:02:22.206097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.551 [2024-07-14 04:02:22.206288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.551 [2024-07-14 04:02:22.206323] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.551 qpair failed and we were unable to recover it. 00:30:03.551 [2024-07-14 04:02:22.206495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.551 [2024-07-14 04:02:22.206724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.551 [2024-07-14 04:02:22.206752] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.551 qpair failed and we were unable to recover it. 00:30:03.551 [2024-07-14 04:02:22.206949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.551 [2024-07-14 04:02:22.207143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.551 [2024-07-14 04:02:22.207171] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.551 qpair failed and we were unable to recover it. 
00:30:03.551 [2024-07-14 04:02:22.207367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.551 [2024-07-14 04:02:22.207558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.551 [2024-07-14 04:02:22.207586] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.551 qpair failed and we were unable to recover it. 00:30:03.551 [2024-07-14 04:02:22.207747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.551 [2024-07-14 04:02:22.207915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.551 [2024-07-14 04:02:22.207945] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.551 qpair failed and we were unable to recover it. 00:30:03.551 [2024-07-14 04:02:22.208140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.551 [2024-07-14 04:02:22.208291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.552 [2024-07-14 04:02:22.208317] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.552 qpair failed and we were unable to recover it. 00:30:03.552 [2024-07-14 04:02:22.208495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.552 [2024-07-14 04:02:22.208702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.552 [2024-07-14 04:02:22.208727] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.552 qpair failed and we were unable to recover it. 00:30:03.552 [2024-07-14 04:02:22.208878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.552 [2024-07-14 04:02:22.209078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.552 [2024-07-14 04:02:22.209106] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.552 qpair failed and we were unable to recover it. 00:30:03.552 [2024-07-14 04:02:22.209316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.552 [2024-07-14 04:02:22.209509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.552 [2024-07-14 04:02:22.209537] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.552 qpair failed and we were unable to recover it. 00:30:03.552 [2024-07-14 04:02:22.209735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.552 [2024-07-14 04:02:22.209889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.552 [2024-07-14 04:02:22.209915] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.552 qpair failed and we were unable to recover it. 
00:30:03.552 [2024-07-14 04:02:22.210070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.552 [2024-07-14 04:02:22.210212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.552 [2024-07-14 04:02:22.210241] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.552 qpair failed and we were unable to recover it. 00:30:03.552 [2024-07-14 04:02:22.210422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.552 [2024-07-14 04:02:22.210634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.552 [2024-07-14 04:02:22.210663] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.552 qpair failed and we were unable to recover it. 00:30:03.552 [2024-07-14 04:02:22.210829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.552 [2024-07-14 04:02:22.211006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.552 [2024-07-14 04:02:22.211036] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.552 qpair failed and we were unable to recover it. 00:30:03.552 [2024-07-14 04:02:22.211252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.552 [2024-07-14 04:02:22.211424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.552 [2024-07-14 04:02:22.211465] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.552 qpair failed and we were unable to recover it. 00:30:03.552 [2024-07-14 04:02:22.211658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.552 [2024-07-14 04:02:22.211861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.552 [2024-07-14 04:02:22.211894] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.552 qpair failed and we were unable to recover it. 00:30:03.552 [2024-07-14 04:02:22.212045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.552 [2024-07-14 04:02:22.212242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.552 [2024-07-14 04:02:22.212270] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.552 qpair failed and we were unable to recover it. 00:30:03.552 [2024-07-14 04:02:22.212464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.552 [2024-07-14 04:02:22.212660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.552 [2024-07-14 04:02:22.212688] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.552 qpair failed and we were unable to recover it. 
00:30:03.552 [2024-07-14 04:02:22.212888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.552 [2024-07-14 04:02:22.213084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.552 [2024-07-14 04:02:22.213112] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.552 qpair failed and we were unable to recover it. 00:30:03.552 [2024-07-14 04:02:22.213310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.552 [2024-07-14 04:02:22.213506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.552 [2024-07-14 04:02:22.213537] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.552 qpair failed and we were unable to recover it. 00:30:03.552 [2024-07-14 04:02:22.213731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.552 [2024-07-14 04:02:22.213907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.552 [2024-07-14 04:02:22.213937] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.552 qpair failed and we were unable to recover it. 00:30:03.552 [2024-07-14 04:02:22.214138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.552 [2024-07-14 04:02:22.214312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.552 [2024-07-14 04:02:22.214340] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.552 qpair failed and we were unable to recover it. 00:30:03.552 [2024-07-14 04:02:22.214547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.552 [2024-07-14 04:02:22.214740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.552 [2024-07-14 04:02:22.214767] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.552 qpair failed and we were unable to recover it. 00:30:03.552 [2024-07-14 04:02:22.214966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.552 [2024-07-14 04:02:22.215161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.552 [2024-07-14 04:02:22.215189] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.552 qpair failed and we were unable to recover it. 00:30:03.552 [2024-07-14 04:02:22.215381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.552 [2024-07-14 04:02:22.215550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.552 [2024-07-14 04:02:22.215577] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.552 qpair failed and we were unable to recover it. 
00:30:03.552 [2024-07-14 04:02:22.215742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.552 [2024-07-14 04:02:22.215920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.552 [2024-07-14 04:02:22.215946] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.552 qpair failed and we were unable to recover it. 00:30:03.552 [2024-07-14 04:02:22.216130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.552 [2024-07-14 04:02:22.216278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.552 [2024-07-14 04:02:22.216304] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.552 qpair failed and we were unable to recover it. 00:30:03.552 [2024-07-14 04:02:22.216512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.552 [2024-07-14 04:02:22.216688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.552 [2024-07-14 04:02:22.216713] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.552 qpair failed and we were unable to recover it. 00:30:03.552 [2024-07-14 04:02:22.216864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.552 [2024-07-14 04:02:22.217056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.552 [2024-07-14 04:02:22.217083] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.552 qpair failed and we were unable to recover it. 00:30:03.552 [2024-07-14 04:02:22.217290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.552 [2024-07-14 04:02:22.217445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.552 [2024-07-14 04:02:22.217471] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.552 qpair failed and we were unable to recover it. 00:30:03.552 [2024-07-14 04:02:22.217619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.552 [2024-07-14 04:02:22.217793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.552 [2024-07-14 04:02:22.217823] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.552 qpair failed and we were unable to recover it. 00:30:03.552 [2024-07-14 04:02:22.218071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.552 [2024-07-14 04:02:22.218212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.552 [2024-07-14 04:02:22.218238] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.552 qpair failed and we were unable to recover it. 
00:30:03.552 [2024-07-14 04:02:22.218417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.552 [2024-07-14 04:02:22.218567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.552 [2024-07-14 04:02:22.218609] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.552 qpair failed and we were unable to recover it. 00:30:03.552 [2024-07-14 04:02:22.218805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.552 [2024-07-14 04:02:22.219006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.552 [2024-07-14 04:02:22.219035] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.552 qpair failed and we were unable to recover it. 00:30:03.552 [2024-07-14 04:02:22.219234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.552 [2024-07-14 04:02:22.219386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.552 [2024-07-14 04:02:22.219413] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.552 qpair failed and we were unable to recover it. 00:30:03.552 [2024-07-14 04:02:22.219646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.552 [2024-07-14 04:02:22.219828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.552 [2024-07-14 04:02:22.219853] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.552 qpair failed and we were unable to recover it. 00:30:03.552 [2024-07-14 04:02:22.220019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.552 [2024-07-14 04:02:22.220227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.553 [2024-07-14 04:02:22.220252] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.553 qpair failed and we were unable to recover it. 00:30:03.553 [2024-07-14 04:02:22.220461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.553 [2024-07-14 04:02:22.220648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.553 [2024-07-14 04:02:22.220672] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.553 qpair failed and we were unable to recover it. 00:30:03.553 [2024-07-14 04:02:22.220861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.553 [2024-07-14 04:02:22.221059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.553 [2024-07-14 04:02:22.221086] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.553 qpair failed and we were unable to recover it. 
00:30:03.553 [2024-07-14 04:02:22.221272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.553 [2024-07-14 04:02:22.221449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.553 [2024-07-14 04:02:22.221490] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.553 qpair failed and we were unable to recover it. 00:30:03.553 [2024-07-14 04:02:22.221660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.553 [2024-07-14 04:02:22.221877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.553 [2024-07-14 04:02:22.221902] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.553 qpair failed and we were unable to recover it. 00:30:03.553 [2024-07-14 04:02:22.222078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.553 [2024-07-14 04:02:22.222266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.553 [2024-07-14 04:02:22.222304] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.553 qpair failed and we were unable to recover it. 00:30:03.553 [2024-07-14 04:02:22.222494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.553 [2024-07-14 04:02:22.222692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.553 [2024-07-14 04:02:22.222717] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.553 qpair failed and we were unable to recover it. 00:30:03.553 [2024-07-14 04:02:22.222921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.553 [2024-07-14 04:02:22.223113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.553 [2024-07-14 04:02:22.223143] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.553 qpair failed and we were unable to recover it. 00:30:03.553 [2024-07-14 04:02:22.223307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.553 [2024-07-14 04:02:22.223484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.553 [2024-07-14 04:02:22.223511] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.553 qpair failed and we were unable to recover it. 00:30:03.553 [2024-07-14 04:02:22.223744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.553 [2024-07-14 04:02:22.223918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.553 [2024-07-14 04:02:22.223947] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.553 qpair failed and we were unable to recover it. 
00:30:03.553 [2024-07-14 04:02:22.224114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.553 [2024-07-14 04:02:22.224287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.553 [2024-07-14 04:02:22.224329] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.553 qpair failed and we were unable to recover it. 00:30:03.553 [2024-07-14 04:02:22.224564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.553 [2024-07-14 04:02:22.224749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.553 [2024-07-14 04:02:22.224777] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.553 qpair failed and we were unable to recover it. 00:30:03.553 [2024-07-14 04:02:22.224965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.553 [2024-07-14 04:02:22.225175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.553 [2024-07-14 04:02:22.225200] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.553 qpair failed and we were unable to recover it. 00:30:03.553 [2024-07-14 04:02:22.225375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.553 [2024-07-14 04:02:22.225554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.553 [2024-07-14 04:02:22.225580] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.553 qpair failed and we were unable to recover it. 00:30:03.553 [2024-07-14 04:02:22.225730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.553 [2024-07-14 04:02:22.225991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.553 [2024-07-14 04:02:22.226033] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.553 qpair failed and we were unable to recover it. 00:30:03.553 [2024-07-14 04:02:22.226199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.553 [2024-07-14 04:02:22.226370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.553 [2024-07-14 04:02:22.226398] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.553 qpair failed and we were unable to recover it. 00:30:03.553 [2024-07-14 04:02:22.226573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.553 [2024-07-14 04:02:22.226793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.553 [2024-07-14 04:02:22.226821] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.553 qpair failed and we were unable to recover it. 
00:30:03.553 [2024-07-14 04:02:22.227039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.553 [2024-07-14 04:02:22.227246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.553 [2024-07-14 04:02:22.227273] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.553 qpair failed and we were unable to recover it. 00:30:03.553 [2024-07-14 04:02:22.227447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.553 [2024-07-14 04:02:22.227670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.553 [2024-07-14 04:02:22.227698] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.553 qpair failed and we were unable to recover it. 00:30:03.553 [2024-07-14 04:02:22.227864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.553 [2024-07-14 04:02:22.228075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.553 [2024-07-14 04:02:22.228103] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.553 qpair failed and we were unable to recover it. 00:30:03.553 [2024-07-14 04:02:22.228300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.553 [2024-07-14 04:02:22.228472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.553 [2024-07-14 04:02:22.228501] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.553 qpair failed and we were unable to recover it. 00:30:03.553 [2024-07-14 04:02:22.228721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.553 [2024-07-14 04:02:22.228888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.553 [2024-07-14 04:02:22.228917] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.553 qpair failed and we were unable to recover it. 00:30:03.553 [2024-07-14 04:02:22.229099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.553 [2024-07-14 04:02:22.229296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.553 [2024-07-14 04:02:22.229324] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.553 qpair failed and we were unable to recover it. 00:30:03.553 [2024-07-14 04:02:22.229521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.553 [2024-07-14 04:02:22.229711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.553 [2024-07-14 04:02:22.229737] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.553 qpair failed and we were unable to recover it. 
00:30:03.553 [2024-07-14 04:02:22.229894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.553 [2024-07-14 04:02:22.230048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.553 [2024-07-14 04:02:22.230089] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.553 qpair failed and we were unable to recover it. 00:30:03.553 [2024-07-14 04:02:22.230295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.553 [2024-07-14 04:02:22.230496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.553 [2024-07-14 04:02:22.230521] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.553 qpair failed and we were unable to recover it. 00:30:03.553 [2024-07-14 04:02:22.230719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.553 [2024-07-14 04:02:22.230873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.553 [2024-07-14 04:02:22.230900] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.553 qpair failed and we were unable to recover it. 00:30:03.553 [2024-07-14 04:02:22.231103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.553 [2024-07-14 04:02:22.231312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.553 [2024-07-14 04:02:22.231341] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.553 qpair failed and we were unable to recover it. 00:30:03.553 [2024-07-14 04:02:22.231573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.553 [2024-07-14 04:02:22.231797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.553 [2024-07-14 04:02:22.231825] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.553 qpair failed and we were unable to recover it. 00:30:03.553 [2024-07-14 04:02:22.232033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.553 [2024-07-14 04:02:22.232229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.553 [2024-07-14 04:02:22.232259] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.553 qpair failed and we were unable to recover it. 00:30:03.553 [2024-07-14 04:02:22.232457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.553 [2024-07-14 04:02:22.232678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.554 [2024-07-14 04:02:22.232706] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.554 qpair failed and we were unable to recover it. 
00:30:03.554 [2024-07-14 04:02:22.232894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.554 [2024-07-14 04:02:22.233110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.554 [2024-07-14 04:02:22.233136] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.554 qpair failed and we were unable to recover it. 00:30:03.554 [2024-07-14 04:02:22.233295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.554 [2024-07-14 04:02:22.233519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.554 [2024-07-14 04:02:22.233546] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.554 qpair failed and we were unable to recover it. 00:30:03.554 [2024-07-14 04:02:22.233722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.554 [2024-07-14 04:02:22.234007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.554 [2024-07-14 04:02:22.234036] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.554 qpair failed and we were unable to recover it. 00:30:03.554 [2024-07-14 04:02:22.234248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.554 [2024-07-14 04:02:22.234413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.554 [2024-07-14 04:02:22.234437] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.554 qpair failed and we were unable to recover it. 00:30:03.554 [2024-07-14 04:02:22.234599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.554 [2024-07-14 04:02:22.234796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.554 [2024-07-14 04:02:22.234821] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.554 qpair failed and we were unable to recover it. 00:30:03.554 [2024-07-14 04:02:22.235031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.554 [2024-07-14 04:02:22.235237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.554 [2024-07-14 04:02:22.235267] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.554 qpair failed and we were unable to recover it. 00:30:03.554 [2024-07-14 04:02:22.235464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.554 [2024-07-14 04:02:22.235675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.554 [2024-07-14 04:02:22.235703] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.554 qpair failed and we were unable to recover it. 
00:30:03.554 [2024-07-14 04:02:22.235881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.554 [2024-07-14 04:02:22.236082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.554 [2024-07-14 04:02:22.236111] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.554 qpair failed and we were unable to recover it. 00:30:03.554 [2024-07-14 04:02:22.236307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.554 [2024-07-14 04:02:22.236481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.554 [2024-07-14 04:02:22.236511] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.554 qpair failed and we were unable to recover it. 00:30:03.554 [2024-07-14 04:02:22.236716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.554 [2024-07-14 04:02:22.236926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.554 [2024-07-14 04:02:22.236953] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.554 qpair failed and we were unable to recover it. 00:30:03.554 [2024-07-14 04:02:22.237128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.554 [2024-07-14 04:02:22.237281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.554 [2024-07-14 04:02:22.237308] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.554 qpair failed and we were unable to recover it. 00:30:03.554 [2024-07-14 04:02:22.237487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.554 [2024-07-14 04:02:22.237682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.554 [2024-07-14 04:02:22.237712] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.554 qpair failed and we were unable to recover it. 00:30:03.554 [2024-07-14 04:02:22.237909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.554 [2024-07-14 04:02:22.238106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.554 [2024-07-14 04:02:22.238134] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.554 qpair failed and we were unable to recover it. 00:30:03.554 [2024-07-14 04:02:22.238329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.554 [2024-07-14 04:02:22.238504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.554 [2024-07-14 04:02:22.238532] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.554 qpair failed and we were unable to recover it. 
00:30:03.554 [2024-07-14 04:02:22.238725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.554 [2024-07-14 04:02:22.238925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.554 [2024-07-14 04:02:22.238951] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.554 qpair failed and we were unable to recover it. 00:30:03.554 [2024-07-14 04:02:22.239157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.554 [2024-07-14 04:02:22.239333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.554 [2024-07-14 04:02:22.239358] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.554 qpair failed and we were unable to recover it. 00:30:03.554 [2024-07-14 04:02:22.239530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.554 [2024-07-14 04:02:22.239709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.554 [2024-07-14 04:02:22.239735] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.554 qpair failed and we were unable to recover it. 00:30:03.554 [2024-07-14 04:02:22.239889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.554 [2024-07-14 04:02:22.240089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.554 [2024-07-14 04:02:22.240117] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.554 qpair failed and we were unable to recover it. 00:30:03.554 [2024-07-14 04:02:22.240318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.554 [2024-07-14 04:02:22.240520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.554 [2024-07-14 04:02:22.240545] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.554 qpair failed and we were unable to recover it. 00:30:03.554 [2024-07-14 04:02:22.240728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.554 [2024-07-14 04:02:22.240905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.554 [2024-07-14 04:02:22.240932] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.554 qpair failed and we were unable to recover it. 00:30:03.554 [2024-07-14 04:02:22.241086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.554 [2024-07-14 04:02:22.241327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.554 [2024-07-14 04:02:22.241355] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.554 qpair failed and we were unable to recover it. 
00:30:03.554 [2024-07-14 04:02:22.241529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.554 [2024-07-14 04:02:22.241726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.554 [2024-07-14 04:02:22.241754] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.554 qpair failed and we were unable to recover it. 00:30:03.554 [2024-07-14 04:02:22.241951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.554 [2024-07-14 04:02:22.242120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.554 [2024-07-14 04:02:22.242148] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.554 qpair failed and we were unable to recover it. 00:30:03.554 [2024-07-14 04:02:22.242344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.554 [2024-07-14 04:02:22.242544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.555 [2024-07-14 04:02:22.242572] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.555 qpair failed and we were unable to recover it. 00:30:03.555 [2024-07-14 04:02:22.242767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.555 [2024-07-14 04:02:22.242967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.555 [2024-07-14 04:02:22.242996] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.555 qpair failed and we were unable to recover it. 00:30:03.555 [2024-07-14 04:02:22.243192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.555 [2024-07-14 04:02:22.243393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.555 [2024-07-14 04:02:22.243421] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.555 qpair failed and we were unable to recover it. 00:30:03.555 [2024-07-14 04:02:22.243615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.555 [2024-07-14 04:02:22.243813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.555 [2024-07-14 04:02:22.243840] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.555 qpair failed and we were unable to recover it. 00:30:03.555 [2024-07-14 04:02:22.244047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.555 [2024-07-14 04:02:22.244227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.555 [2024-07-14 04:02:22.244252] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.555 qpair failed and we were unable to recover it. 
00:30:03.555 [2024-07-14 04:02:22.244424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.555 [2024-07-14 04:02:22.244627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.555 [2024-07-14 04:02:22.244653] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.555 qpair failed and we were unable to recover it. 00:30:03.555 [2024-07-14 04:02:22.244857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.555 [2024-07-14 04:02:22.245085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.555 [2024-07-14 04:02:22.245111] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.555 qpair failed and we were unable to recover it. 00:30:03.555 [2024-07-14 04:02:22.245296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.555 [2024-07-14 04:02:22.245496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.555 [2024-07-14 04:02:22.245521] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.555 qpair failed and we were unable to recover it. 00:30:03.555 [2024-07-14 04:02:22.245672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.555 [2024-07-14 04:02:22.245853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.555 [2024-07-14 04:02:22.245887] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.555 qpair failed and we were unable to recover it. 00:30:03.555 [2024-07-14 04:02:22.246064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.555 [2024-07-14 04:02:22.246245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.555 [2024-07-14 04:02:22.246273] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.555 qpair failed and we were unable to recover it. 00:30:03.555 [2024-07-14 04:02:22.246434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.555 [2024-07-14 04:02:22.246662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.555 [2024-07-14 04:02:22.246691] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.555 qpair failed and we were unable to recover it. 00:30:03.555 [2024-07-14 04:02:22.246892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.555 [2024-07-14 04:02:22.247075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.555 [2024-07-14 04:02:22.247100] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.555 qpair failed and we were unable to recover it. 
00:30:03.555 [2024-07-14 04:02:22.247307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.555 [2024-07-14 04:02:22.247487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.555 [2024-07-14 04:02:22.247512] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.555 qpair failed and we were unable to recover it. 00:30:03.555 [2024-07-14 04:02:22.247734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.555 [2024-07-14 04:02:22.247881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.555 [2024-07-14 04:02:22.247908] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.555 qpair failed and we were unable to recover it. 00:30:03.555 [2024-07-14 04:02:22.248090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.555 [2024-07-14 04:02:22.248277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.555 [2024-07-14 04:02:22.248305] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.555 qpair failed and we were unable to recover it. 00:30:03.555 [2024-07-14 04:02:22.248502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.555 [2024-07-14 04:02:22.248694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.555 [2024-07-14 04:02:22.248723] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.555 qpair failed and we were unable to recover it. 00:30:03.555 [2024-07-14 04:02:22.248916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.555 [2024-07-14 04:02:22.249094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.555 [2024-07-14 04:02:22.249120] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.555 qpair failed and we were unable to recover it. 00:30:03.555 [2024-07-14 04:02:22.249300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.555 [2024-07-14 04:02:22.249474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.555 [2024-07-14 04:02:22.249501] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.555 qpair failed and we were unable to recover it. 00:30:03.555 [2024-07-14 04:02:22.249703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.555 [2024-07-14 04:02:22.249908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.555 [2024-07-14 04:02:22.249934] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.555 qpair failed and we were unable to recover it. 
00:30:03.555 [2024-07-14 04:02:22.250083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.555 [2024-07-14 04:02:22.250261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.555 [2024-07-14 04:02:22.250287] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.555 qpair failed and we were unable to recover it. 00:30:03.555 [2024-07-14 04:02:22.250493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.555 [2024-07-14 04:02:22.250700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.555 [2024-07-14 04:02:22.250743] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.555 qpair failed and we were unable to recover it. 00:30:03.555 [2024-07-14 04:02:22.250939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.555 [2024-07-14 04:02:22.251100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.555 [2024-07-14 04:02:22.251128] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.555 qpair failed and we were unable to recover it. 00:30:03.555 [2024-07-14 04:02:22.251316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.555 [2024-07-14 04:02:22.251541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.555 [2024-07-14 04:02:22.251569] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.555 qpair failed and we were unable to recover it. 00:30:03.555 [2024-07-14 04:02:22.251776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.555 [2024-07-14 04:02:22.251926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.555 [2024-07-14 04:02:22.251952] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.555 qpair failed and we were unable to recover it. 00:30:03.555 [2024-07-14 04:02:22.252104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.555 [2024-07-14 04:02:22.252337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.555 [2024-07-14 04:02:22.252364] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.555 qpair failed and we were unable to recover it. 00:30:03.555 [2024-07-14 04:02:22.252533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.555 [2024-07-14 04:02:22.252728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.555 [2024-07-14 04:02:22.252753] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.555 qpair failed and we were unable to recover it. 
00:30:03.555 [2024-07-14 04:02:22.252935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.555 [2024-07-14 04:02:22.253117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.555 [2024-07-14 04:02:22.253142] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.555 qpair failed and we were unable to recover it. 00:30:03.555 [2024-07-14 04:02:22.253371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.555 [2024-07-14 04:02:22.253544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.555 [2024-07-14 04:02:22.253574] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.555 qpair failed and we were unable to recover it. 00:30:03.555 [2024-07-14 04:02:22.253804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.555 [2024-07-14 04:02:22.253979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.555 [2024-07-14 04:02:22.254005] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.555 qpair failed and we were unable to recover it. 00:30:03.555 [2024-07-14 04:02:22.254184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.555 [2024-07-14 04:02:22.254335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.555 [2024-07-14 04:02:22.254361] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.555 qpair failed and we were unable to recover it. 00:30:03.555 [2024-07-14 04:02:22.254569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.555 [2024-07-14 04:02:22.254783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.555 [2024-07-14 04:02:22.254811] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.555 qpair failed and we were unable to recover it. 00:30:03.556 [2024-07-14 04:02:22.255009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.556 [2024-07-14 04:02:22.255288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.556 [2024-07-14 04:02:22.255345] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.556 qpair failed and we were unable to recover it. 00:30:03.556 [2024-07-14 04:02:22.255556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.556 [2024-07-14 04:02:22.255764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.556 [2024-07-14 04:02:22.255792] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.556 qpair failed and we were unable to recover it. 
00:30:03.556 [2024-07-14 04:02:22.256016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.556 [2024-07-14 04:02:22.256239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.556 [2024-07-14 04:02:22.256265] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.556 qpair failed and we were unable to recover it. 00:30:03.556 [2024-07-14 04:02:22.256419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.556 [2024-07-14 04:02:22.256630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.556 [2024-07-14 04:02:22.256656] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.556 qpair failed and we were unable to recover it. 00:30:03.556 [2024-07-14 04:02:22.256863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.556 [2024-07-14 04:02:22.257059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.556 [2024-07-14 04:02:22.257087] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.556 qpair failed and we were unable to recover it. 00:30:03.556 [2024-07-14 04:02:22.257284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.556 [2024-07-14 04:02:22.257465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.556 [2024-07-14 04:02:22.257490] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.556 qpair failed and we were unable to recover it. 00:30:03.556 [2024-07-14 04:02:22.257638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.556 [2024-07-14 04:02:22.257824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.556 [2024-07-14 04:02:22.257850] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.556 qpair failed and we were unable to recover it. 00:30:03.556 [2024-07-14 04:02:22.258014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.556 [2024-07-14 04:02:22.258219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.556 [2024-07-14 04:02:22.258245] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.556 qpair failed and we were unable to recover it. 00:30:03.556 [2024-07-14 04:02:22.258432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.556 [2024-07-14 04:02:22.258658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.556 [2024-07-14 04:02:22.258686] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.556 qpair failed and we were unable to recover it. 
00:30:03.556 [2024-07-14 04:02:22.258892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.556 [2024-07-14 04:02:22.259113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.556 [2024-07-14 04:02:22.259139] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.556 qpair failed and we were unable to recover it. 00:30:03.556 [2024-07-14 04:02:22.259313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.556 [2024-07-14 04:02:22.259548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.556 [2024-07-14 04:02:22.259577] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.556 qpair failed and we were unable to recover it. 00:30:03.556 [2024-07-14 04:02:22.259808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.556 [2024-07-14 04:02:22.260010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.556 [2024-07-14 04:02:22.260043] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.556 qpair failed and we were unable to recover it. 00:30:03.556 [2024-07-14 04:02:22.260225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.556 [2024-07-14 04:02:22.260429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.556 [2024-07-14 04:02:22.260454] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.556 qpair failed and we were unable to recover it. 00:30:03.556 [2024-07-14 04:02:22.260633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.556 [2024-07-14 04:02:22.260781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.556 [2024-07-14 04:02:22.260806] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.556 qpair failed and we were unable to recover it. 00:30:03.556 [2024-07-14 04:02:22.261023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.556 [2024-07-14 04:02:22.261205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.556 [2024-07-14 04:02:22.261230] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.556 qpair failed and we were unable to recover it. 00:30:03.556 [2024-07-14 04:02:22.261374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.556 [2024-07-14 04:02:22.261579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.556 [2024-07-14 04:02:22.261604] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.556 qpair failed and we were unable to recover it. 
00:30:03.556 [2024-07-14 04:02:22.261789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.556 [2024-07-14 04:02:22.261990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.556 [2024-07-14 04:02:22.262019] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.556 qpair failed and we were unable to recover it. 00:30:03.556 [2024-07-14 04:02:22.262217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.556 [2024-07-14 04:02:22.262419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.556 [2024-07-14 04:02:22.262444] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.556 qpair failed and we were unable to recover it. 00:30:03.556 [2024-07-14 04:02:22.262624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.556 [2024-07-14 04:02:22.262815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.556 [2024-07-14 04:02:22.262840] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.556 qpair failed and we were unable to recover it. 00:30:03.556 [2024-07-14 04:02:22.263076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.556 [2024-07-14 04:02:22.263283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.556 [2024-07-14 04:02:22.263311] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.556 qpair failed and we were unable to recover it. 00:30:03.556 [2024-07-14 04:02:22.263508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.556 [2024-07-14 04:02:22.263752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.556 [2024-07-14 04:02:22.263808] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.556 qpair failed and we were unable to recover it. 00:30:03.556 [2024-07-14 04:02:22.264048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.556 [2024-07-14 04:02:22.264238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.556 [2024-07-14 04:02:22.264268] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.556 qpair failed and we were unable to recover it. 00:30:03.556 [2024-07-14 04:02:22.264424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.556 [2024-07-14 04:02:22.264602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.556 [2024-07-14 04:02:22.264627] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.556 qpair failed and we were unable to recover it. 
00:30:03.556 [2024-07-14 04:02:22.264805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.556 [2024-07-14 04:02:22.264980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.556 [2024-07-14 04:02:22.265006] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.556 qpair failed and we were unable to recover it. 00:30:03.556 [2024-07-14 04:02:22.265160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.556 [2024-07-14 04:02:22.265333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.556 [2024-07-14 04:02:22.265359] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.556 qpair failed and we were unable to recover it. 00:30:03.556 [2024-07-14 04:02:22.265540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.556 [2024-07-14 04:02:22.265690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.556 [2024-07-14 04:02:22.265715] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.556 qpair failed and we were unable to recover it. 00:30:03.556 [2024-07-14 04:02:22.265961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.556 [2024-07-14 04:02:22.266136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.556 [2024-07-14 04:02:22.266164] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.556 qpair failed and we were unable to recover it. 00:30:03.556 [2024-07-14 04:02:22.266386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.556 [2024-07-14 04:02:22.266587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.556 [2024-07-14 04:02:22.266613] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.556 qpair failed and we were unable to recover it. 00:30:03.556 [2024-07-14 04:02:22.266891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.556 [2024-07-14 04:02:22.267063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.556 [2024-07-14 04:02:22.267088] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.556 qpair failed and we were unable to recover it. 00:30:03.556 [2024-07-14 04:02:22.267248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.556 [2024-07-14 04:02:22.267450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.556 [2024-07-14 04:02:22.267480] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.556 qpair failed and we were unable to recover it. 
00:30:03.556 [2024-07-14 04:02:22.267682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.557 [2024-07-14 04:02:22.267873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.557 [2024-07-14 04:02:22.267913] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.557 qpair failed and we were unable to recover it. 00:30:03.557 [2024-07-14 04:02:22.268123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.557 [2024-07-14 04:02:22.268278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.557 [2024-07-14 04:02:22.268308] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.557 qpair failed and we were unable to recover it. 00:30:03.557 [2024-07-14 04:02:22.268496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.557 [2024-07-14 04:02:22.268719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.557 [2024-07-14 04:02:22.268747] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.557 qpair failed and we were unable to recover it. 00:30:03.557 [2024-07-14 04:02:22.268961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.557 [2024-07-14 04:02:22.269147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.557 [2024-07-14 04:02:22.269173] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.557 qpair failed and we were unable to recover it. 00:30:03.557 [2024-07-14 04:02:22.269394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.557 [2024-07-14 04:02:22.269583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.557 [2024-07-14 04:02:22.269611] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.557 qpair failed and we were unable to recover it. 00:30:03.557 [2024-07-14 04:02:22.269784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.557 [2024-07-14 04:02:22.269967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.557 [2024-07-14 04:02:22.269996] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.557 qpair failed and we were unable to recover it. 00:30:03.557 [2024-07-14 04:02:22.270226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.557 [2024-07-14 04:02:22.270423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.557 [2024-07-14 04:02:22.270452] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.557 qpair failed and we were unable to recover it. 
00:30:03.557 [2024-07-14 04:02:22.270627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.557 [2024-07-14 04:02:22.270809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.557 [2024-07-14 04:02:22.270853] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.557 qpair failed and we were unable to recover it. 00:30:03.557 [2024-07-14 04:02:22.271065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.557 [2024-07-14 04:02:22.271222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.557 [2024-07-14 04:02:22.271248] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.557 qpair failed and we were unable to recover it. 00:30:03.557 [2024-07-14 04:02:22.271401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.557 [2024-07-14 04:02:22.271602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.557 [2024-07-14 04:02:22.271627] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.557 qpair failed and we were unable to recover it. 00:30:03.557 [2024-07-14 04:02:22.271834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.557 [2024-07-14 04:02:22.272049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.557 [2024-07-14 04:02:22.272075] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.557 qpair failed and we were unable to recover it. 00:30:03.557 [2024-07-14 04:02:22.272335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.557 [2024-07-14 04:02:22.272521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.557 [2024-07-14 04:02:22.272550] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.557 qpair failed and we were unable to recover it. 00:30:03.557 [2024-07-14 04:02:22.272729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.557 [2024-07-14 04:02:22.272986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.557 [2024-07-14 04:02:22.273013] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.557 qpair failed and we were unable to recover it. 00:30:03.557 [2024-07-14 04:02:22.273190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.557 [2024-07-14 04:02:22.273380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.557 [2024-07-14 04:02:22.273408] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.557 qpair failed and we were unable to recover it. 
00:30:03.557 [2024-07-14 04:02:22.273634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.557 [2024-07-14 04:02:22.273777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.557 [2024-07-14 04:02:22.273802] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.557 qpair failed and we were unable to recover it. 00:30:03.557 [2024-07-14 04:02:22.273986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.557 [2024-07-14 04:02:22.274250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.557 [2024-07-14 04:02:22.274275] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.557 qpair failed and we were unable to recover it. 00:30:03.557 [2024-07-14 04:02:22.274506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.557 [2024-07-14 04:02:22.274708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.557 [2024-07-14 04:02:22.274735] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.557 qpair failed and we were unable to recover it. 00:30:03.557 [2024-07-14 04:02:22.274889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.557 [2024-07-14 04:02:22.275083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.557 [2024-07-14 04:02:22.275108] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.557 qpair failed and we were unable to recover it. 00:30:03.557 [2024-07-14 04:02:22.275288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.557 [2024-07-14 04:02:22.275519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.557 [2024-07-14 04:02:22.275548] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.557 qpair failed and we were unable to recover it. 00:30:03.557 [2024-07-14 04:02:22.275772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.557 [2024-07-14 04:02:22.275991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.557 [2024-07-14 04:02:22.276020] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.557 qpair failed and we were unable to recover it. 00:30:03.557 [2024-07-14 04:02:22.276231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.557 [2024-07-14 04:02:22.276411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.557 [2024-07-14 04:02:22.276436] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.557 qpair failed and we were unable to recover it. 
00:30:03.557 [2024-07-14 04:02:22.276611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.557 [2024-07-14 04:02:22.276760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.557 [2024-07-14 04:02:22.276786] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.557 qpair failed and we were unable to recover it. 00:30:03.557 [2024-07-14 04:02:22.276941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.557 [2024-07-14 04:02:22.277126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.557 [2024-07-14 04:02:22.277156] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.557 qpair failed and we were unable to recover it. 00:30:03.557 [2024-07-14 04:02:22.277334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.557 [2024-07-14 04:02:22.277509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.557 [2024-07-14 04:02:22.277535] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.557 qpair failed and we were unable to recover it. 00:30:03.557 [2024-07-14 04:02:22.277739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.557 [2024-07-14 04:02:22.277903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.557 [2024-07-14 04:02:22.277932] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.557 qpair failed and we were unable to recover it. 00:30:03.557 [2024-07-14 04:02:22.278143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.557 [2024-07-14 04:02:22.278323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.557 [2024-07-14 04:02:22.278348] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.557 qpair failed and we were unable to recover it. 00:30:03.557 [2024-07-14 04:02:22.278531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.557 [2024-07-14 04:02:22.278706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.557 [2024-07-14 04:02:22.278733] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.557 qpair failed and we were unable to recover it. 00:30:03.557 [2024-07-14 04:02:22.278916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.557 [2024-07-14 04:02:22.279093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.557 [2024-07-14 04:02:22.279118] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.557 qpair failed and we were unable to recover it. 
00:30:03.557 [2024-07-14 04:02:22.279321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.557 [2024-07-14 04:02:22.279519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.557 [2024-07-14 04:02:22.279547] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.557 qpair failed and we were unable to recover it. 00:30:03.557 [2024-07-14 04:02:22.279754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.557 [2024-07-14 04:02:22.279953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.557 [2024-07-14 04:02:22.279984] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.557 qpair failed and we were unable to recover it. 00:30:03.557 [2024-07-14 04:02:22.280189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.557 [2024-07-14 04:02:22.280375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.558 [2024-07-14 04:02:22.280403] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.558 qpair failed and we were unable to recover it. 00:30:03.558 [2024-07-14 04:02:22.280598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.558 [2024-07-14 04:02:22.280752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.558 [2024-07-14 04:02:22.280779] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.558 qpair failed and we were unable to recover it. 00:30:03.558 [2024-07-14 04:02:22.280993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.558 [2024-07-14 04:02:22.281193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.558 [2024-07-14 04:02:22.281218] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.558 qpair failed and we were unable to recover it. 00:30:03.558 [2024-07-14 04:02:22.281369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.558 [2024-07-14 04:02:22.281556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.558 [2024-07-14 04:02:22.281581] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.558 qpair failed and we were unable to recover it. 00:30:03.558 [2024-07-14 04:02:22.281792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.558 [2024-07-14 04:02:22.281986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.558 [2024-07-14 04:02:22.282015] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.558 qpair failed and we were unable to recover it. 
00:30:03.558 [2024-07-14 04:02:22.282179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.558 [2024-07-14 04:02:22.282353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.558 [2024-07-14 04:02:22.282378] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.558 qpair failed and we were unable to recover it. 00:30:03.558 [2024-07-14 04:02:22.282578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.558 [2024-07-14 04:02:22.282726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.558 [2024-07-14 04:02:22.282751] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.558 qpair failed and we were unable to recover it. 00:30:03.558 [2024-07-14 04:02:22.282930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.558 [2024-07-14 04:02:22.283109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.558 [2024-07-14 04:02:22.283134] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.558 qpair failed and we were unable to recover it. 00:30:03.558 [2024-07-14 04:02:22.283318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.558 [2024-07-14 04:02:22.283496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.558 [2024-07-14 04:02:22.283537] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.558 qpair failed and we were unable to recover it. 00:30:03.558 [2024-07-14 04:02:22.283739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.558 [2024-07-14 04:02:22.283951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.558 [2024-07-14 04:02:22.283980] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.558 qpair failed and we were unable to recover it. 00:30:03.558 [2024-07-14 04:02:22.284176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.558 [2024-07-14 04:02:22.284383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.558 [2024-07-14 04:02:22.284411] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.558 qpair failed and we were unable to recover it. 00:30:03.558 [2024-07-14 04:02:22.284585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.558 [2024-07-14 04:02:22.284762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.558 [2024-07-14 04:02:22.284792] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.558 qpair failed and we were unable to recover it. 
00:30:03.558 [2024-07-14 04:02:22.284970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.558 [2024-07-14 04:02:22.285130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.558 [2024-07-14 04:02:22.285158] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.558 qpair failed and we were unable to recover it. 00:30:03.558 [2024-07-14 04:02:22.285329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.558 [2024-07-14 04:02:22.285503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.558 [2024-07-14 04:02:22.285528] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.558 qpair failed and we were unable to recover it. 00:30:03.558 [2024-07-14 04:02:22.285682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.558 [2024-07-14 04:02:22.285855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.558 [2024-07-14 04:02:22.285887] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.558 qpair failed and we were unable to recover it. 00:30:03.558 [2024-07-14 04:02:22.286092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.558 [2024-07-14 04:02:22.286310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.558 [2024-07-14 04:02:22.286337] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.558 qpair failed and we were unable to recover it. 00:30:03.558 [2024-07-14 04:02:22.286509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.558 [2024-07-14 04:02:22.286695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.558 [2024-07-14 04:02:22.286721] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.558 qpair failed and we were unable to recover it. 00:30:03.558 [2024-07-14 04:02:22.286913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.558 [2024-07-14 04:02:22.287080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.558 [2024-07-14 04:02:22.287109] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.558 qpair failed and we were unable to recover it. 00:30:03.558 [2024-07-14 04:02:22.287301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.558 [2024-07-14 04:02:22.287475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.558 [2024-07-14 04:02:22.287503] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.558 qpair failed and we were unable to recover it. 
00:30:03.558 [2024-07-14 04:02:22.287730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.558 [2024-07-14 04:02:22.287918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.558 [2024-07-14 04:02:22.287947] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.558 qpair failed and we were unable to recover it. 00:30:03.558 [2024-07-14 04:02:22.288142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.558 [2024-07-14 04:02:22.288307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.558 [2024-07-14 04:02:22.288336] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.558 qpair failed and we were unable to recover it. 00:30:03.558 [2024-07-14 04:02:22.288511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.558 [2024-07-14 04:02:22.288683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.558 [2024-07-14 04:02:22.288708] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.558 qpair failed and we were unable to recover it. 00:30:03.558 [2024-07-14 04:02:22.288920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.558 [2024-07-14 04:02:22.289117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.558 [2024-07-14 04:02:22.289142] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.558 qpair failed and we were unable to recover it. 00:30:03.558 [2024-07-14 04:02:22.289327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.558 [2024-07-14 04:02:22.289507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.558 [2024-07-14 04:02:22.289549] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.558 qpair failed and we were unable to recover it. 00:30:03.558 [2024-07-14 04:02:22.289718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.558 [2024-07-14 04:02:22.289925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.558 [2024-07-14 04:02:22.289954] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.558 qpair failed and we were unable to recover it. 00:30:03.558 [2024-07-14 04:02:22.290150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.558 [2024-07-14 04:02:22.290295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.558 [2024-07-14 04:02:22.290321] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.558 qpair failed and we were unable to recover it. 
00:30:03.559 [2024-07-14 04:02:22.290503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.559 [2024-07-14 04:02:22.290710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.559 [2024-07-14 04:02:22.290735] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.559 qpair failed and we were unable to recover it. 00:30:03.559 [2024-07-14 04:02:22.290912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.559 [2024-07-14 04:02:22.291120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.559 [2024-07-14 04:02:22.291145] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.559 qpair failed and we were unable to recover it. 00:30:03.559 [2024-07-14 04:02:22.291357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.559 [2024-07-14 04:02:22.291580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.559 [2024-07-14 04:02:22.291608] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.559 qpair failed and we were unable to recover it. 00:30:03.559 [2024-07-14 04:02:22.291781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.559 [2024-07-14 04:02:22.291960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.559 [2024-07-14 04:02:22.291987] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.559 qpair failed and we were unable to recover it. 00:30:03.559 [2024-07-14 04:02:22.292163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.559 [2024-07-14 04:02:22.292367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.559 [2024-07-14 04:02:22.292410] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.559 qpair failed and we were unable to recover it. 00:30:03.559 [2024-07-14 04:02:22.292609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.559 [2024-07-14 04:02:22.292782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.559 [2024-07-14 04:02:22.292809] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.559 qpair failed and we were unable to recover it. 00:30:03.559 [2024-07-14 04:02:22.292997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.559 [2024-07-14 04:02:22.293204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.559 [2024-07-14 04:02:22.293230] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.559 qpair failed and we were unable to recover it. 
00:30:03.559 [2024-07-14 04:02:22.293377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.559 [2024-07-14 04:02:22.293531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.559 [2024-07-14 04:02:22.293572] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.559 qpair failed and we were unable to recover it. 00:30:03.559 [2024-07-14 04:02:22.293770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.559 [2024-07-14 04:02:22.293961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.559 [2024-07-14 04:02:22.293990] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.559 qpair failed and we were unable to recover it. 00:30:03.559 [2024-07-14 04:02:22.294224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.559 [2024-07-14 04:02:22.294421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.559 [2024-07-14 04:02:22.294449] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.559 qpair failed and we were unable to recover it. 00:30:03.559 [2024-07-14 04:02:22.294671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.559 [2024-07-14 04:02:22.294857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.559 [2024-07-14 04:02:22.294892] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.559 qpair failed and we were unable to recover it. 00:30:03.559 [2024-07-14 04:02:22.295110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.559 [2024-07-14 04:02:22.295322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.559 [2024-07-14 04:02:22.295346] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.559 qpair failed and we were unable to recover it. 00:30:03.559 [2024-07-14 04:02:22.295496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.559 [2024-07-14 04:02:22.295701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.559 [2024-07-14 04:02:22.295726] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.559 qpair failed and we were unable to recover it. 00:30:03.559 [2024-07-14 04:02:22.295927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.559 [2024-07-14 04:02:22.296154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.559 [2024-07-14 04:02:22.296182] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.559 qpair failed and we were unable to recover it. 
00:30:03.559 [2024-07-14 04:02:22.296350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.559 [2024-07-14 04:02:22.296640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.559 [2024-07-14 04:02:22.296693] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.559 qpair failed and we were unable to recover it. 00:30:03.559 [2024-07-14 04:02:22.296920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.559 [2024-07-14 04:02:22.297128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.559 [2024-07-14 04:02:22.297154] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.559 qpair failed and we were unable to recover it. 00:30:03.559 [2024-07-14 04:02:22.297360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.559 [2024-07-14 04:02:22.297587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.559 [2024-07-14 04:02:22.297615] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.559 qpair failed and we were unable to recover it. 00:30:03.559 [2024-07-14 04:02:22.297844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.559 [2024-07-14 04:02:22.298075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.559 [2024-07-14 04:02:22.298103] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.559 qpair failed and we were unable to recover it. 00:30:03.559 [2024-07-14 04:02:22.298315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.559 [2024-07-14 04:02:22.298625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.559 [2024-07-14 04:02:22.298679] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.559 qpair failed and we were unable to recover it. 00:30:03.559 [2024-07-14 04:02:22.298934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.559 [2024-07-14 04:02:22.299112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.559 [2024-07-14 04:02:22.299137] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.559 qpair failed and we were unable to recover it. 00:30:03.559 [2024-07-14 04:02:22.299339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.559 [2024-07-14 04:02:22.299533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.559 [2024-07-14 04:02:22.299561] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.559 qpair failed and we were unable to recover it. 
00:30:03.559 [2024-07-14 04:02:22.299784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.559 [2024-07-14 04:02:22.299988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.559 [2024-07-14 04:02:22.300017] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.559 qpair failed and we were unable to recover it. 00:30:03.559 [2024-07-14 04:02:22.300219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.559 [2024-07-14 04:02:22.300441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.559 [2024-07-14 04:02:22.300469] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.559 qpair failed and we were unable to recover it. 00:30:03.559 [2024-07-14 04:02:22.300688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.559 [2024-07-14 04:02:22.300840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.559 [2024-07-14 04:02:22.300899] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.559 qpair failed and we were unable to recover it. 00:30:03.559 [2024-07-14 04:02:22.301102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.559 [2024-07-14 04:02:22.301300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.559 [2024-07-14 04:02:22.301328] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.559 qpair failed and we were unable to recover it. 00:30:03.559 [2024-07-14 04:02:22.301526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.559 [2024-07-14 04:02:22.301701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.559 [2024-07-14 04:02:22.301727] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.559 qpair failed and we were unable to recover it. 00:30:03.559 [2024-07-14 04:02:22.301935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.559 [2024-07-14 04:02:22.302125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.559 [2024-07-14 04:02:22.302152] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.559 qpair failed and we were unable to recover it. 00:30:03.559 [2024-07-14 04:02:22.302319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.559 [2024-07-14 04:02:22.302523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.559 [2024-07-14 04:02:22.302551] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.559 qpair failed and we were unable to recover it. 
00:30:03.559 [2024-07-14 04:02:22.302779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.559 [2024-07-14 04:02:22.302974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.559 [2024-07-14 04:02:22.303004] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.559 qpair failed and we were unable to recover it. 00:30:03.559 [2024-07-14 04:02:22.303202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.559 [2024-07-14 04:02:22.303363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.559 [2024-07-14 04:02:22.303393] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.559 qpair failed and we were unable to recover it. 00:30:03.559 [2024-07-14 04:02:22.303592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.560 [2024-07-14 04:02:22.303811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.560 [2024-07-14 04:02:22.303839] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.560 qpair failed and we were unable to recover it. 00:30:03.560 [2024-07-14 04:02:22.304051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.560 [2024-07-14 04:02:22.304195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.560 [2024-07-14 04:02:22.304239] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.560 qpair failed and we were unable to recover it. 00:30:03.560 [2024-07-14 04:02:22.304435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.560 [2024-07-14 04:02:22.304606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.560 [2024-07-14 04:02:22.304633] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.560 qpair failed and we were unable to recover it. 00:30:03.560 [2024-07-14 04:02:22.304824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.560 [2024-07-14 04:02:22.305033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.560 [2024-07-14 04:02:22.305061] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.560 qpair failed and we were unable to recover it. 00:30:03.560 [2024-07-14 04:02:22.305278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.560 [2024-07-14 04:02:22.305502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.560 [2024-07-14 04:02:22.305530] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.560 qpair failed and we were unable to recover it. 
00:30:03.560 [2024-07-14 04:02:22.305710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.560 [2024-07-14 04:02:22.305935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.560 [2024-07-14 04:02:22.305964] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.560 qpair failed and we were unable to recover it. 00:30:03.560 [2024-07-14 04:02:22.306166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.560 [2024-07-14 04:02:22.306361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.560 [2024-07-14 04:02:22.306389] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.560 qpair failed and we were unable to recover it. 00:30:03.560 [2024-07-14 04:02:22.306557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.560 [2024-07-14 04:02:22.306755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.560 [2024-07-14 04:02:22.306783] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.560 qpair failed and we were unable to recover it. 00:30:03.560 [2024-07-14 04:02:22.306950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.560 [2024-07-14 04:02:22.307146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.560 [2024-07-14 04:02:22.307174] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.560 qpair failed and we were unable to recover it. 00:30:03.560 [2024-07-14 04:02:22.307373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.560 [2024-07-14 04:02:22.307551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.560 [2024-07-14 04:02:22.307576] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.560 qpair failed and we were unable to recover it. 00:30:03.560 [2024-07-14 04:02:22.307732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.560 [2024-07-14 04:02:22.307909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.560 [2024-07-14 04:02:22.307935] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.560 qpair failed and we were unable to recover it. 00:30:03.560 [2024-07-14 04:02:22.308141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.560 [2024-07-14 04:02:22.308350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.560 [2024-07-14 04:02:22.308378] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.560 qpair failed and we were unable to recover it. 
00:30:03.560 [2024-07-14 04:02:22.308580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.560 [2024-07-14 04:02:22.308802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.560 [2024-07-14 04:02:22.308830] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.560 qpair failed and we were unable to recover it. 00:30:03.560 [2024-07-14 04:02:22.309066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.560 [2024-07-14 04:02:22.309224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.560 [2024-07-14 04:02:22.309250] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.560 qpair failed and we were unable to recover it. 00:30:03.560 [2024-07-14 04:02:22.309421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.560 [2024-07-14 04:02:22.309640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.560 [2024-07-14 04:02:22.309668] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.560 qpair failed and we were unable to recover it. 00:30:03.560 [2024-07-14 04:02:22.309850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.560 [2024-07-14 04:02:22.310066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.560 [2024-07-14 04:02:22.310092] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.560 qpair failed and we were unable to recover it. 00:30:03.560 [2024-07-14 04:02:22.310293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.560 [2024-07-14 04:02:22.310476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.560 [2024-07-14 04:02:22.310502] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.560 qpair failed and we were unable to recover it. 00:30:03.560 [2024-07-14 04:02:22.310683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.560 [2024-07-14 04:02:22.310836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.560 [2024-07-14 04:02:22.310861] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.560 qpair failed and we were unable to recover it. 00:30:03.560 [2024-07-14 04:02:22.311050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.560 [2024-07-14 04:02:22.311250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.560 [2024-07-14 04:02:22.311275] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.560 qpair failed and we were unable to recover it. 
00:30:03.560 [2024-07-14 04:02:22.311435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.560 [2024-07-14 04:02:22.311587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.560 [2024-07-14 04:02:22.311612] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.560 qpair failed and we were unable to recover it. 00:30:03.560 [2024-07-14 04:02:22.311765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.560 [2024-07-14 04:02:22.312000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.560 [2024-07-14 04:02:22.312029] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.560 qpair failed and we were unable to recover it. 00:30:03.560 [2024-07-14 04:02:22.312266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.560 [2024-07-14 04:02:22.312441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.560 [2024-07-14 04:02:22.312465] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.560 qpair failed and we were unable to recover it. 00:30:03.560 [2024-07-14 04:02:22.312639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.560 [2024-07-14 04:02:22.312847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.560 [2024-07-14 04:02:22.312880] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.560 qpair failed and we were unable to recover it. 00:30:03.560 [2024-07-14 04:02:22.313038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.560 [2024-07-14 04:02:22.313215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.560 [2024-07-14 04:02:22.313240] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.560 qpair failed and we were unable to recover it. 00:30:03.560 [2024-07-14 04:02:22.313392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.560 [2024-07-14 04:02:22.313547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.560 [2024-07-14 04:02:22.313574] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.560 qpair failed and we were unable to recover it. 00:30:03.560 [2024-07-14 04:02:22.313810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.560 [2024-07-14 04:02:22.314012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.560 [2024-07-14 04:02:22.314038] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.560 qpair failed and we were unable to recover it. 
00:30:03.560 [2024-07-14 04:02:22.314203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.560 [2024-07-14 04:02:22.314407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.560 [2024-07-14 04:02:22.314436] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.560 qpair failed and we were unable to recover it. 00:30:03.560 [2024-07-14 04:02:22.314660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.560 [2024-07-14 04:02:22.314874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.560 [2024-07-14 04:02:22.314900] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.560 qpair failed and we were unable to recover it. 00:30:03.560 [2024-07-14 04:02:22.315083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.560 [2024-07-14 04:02:22.315275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.560 [2024-07-14 04:02:22.315300] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.560 qpair failed and we were unable to recover it. 00:30:03.560 [2024-07-14 04:02:22.315479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.560 [2024-07-14 04:02:22.315647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.560 [2024-07-14 04:02:22.315677] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.560 qpair failed and we were unable to recover it. 00:30:03.560 [2024-07-14 04:02:22.315848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.560 [2024-07-14 04:02:22.316031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.560 [2024-07-14 04:02:22.316061] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.561 qpair failed and we were unable to recover it. 00:30:03.561 [2024-07-14 04:02:22.316265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.561 [2024-07-14 04:02:22.316487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.561 [2024-07-14 04:02:22.316515] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.561 qpair failed and we were unable to recover it. 00:30:03.561 [2024-07-14 04:02:22.316736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.561 [2024-07-14 04:02:22.316972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.561 [2024-07-14 04:02:22.316999] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.561 qpair failed and we were unable to recover it. 
00:30:03.561 [2024-07-14 04:02:22.317182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.561 [2024-07-14 04:02:22.317322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.561 [2024-07-14 04:02:22.317347] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.561 qpair failed and we were unable to recover it. 00:30:03.561 [2024-07-14 04:02:22.317554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.561 [2024-07-14 04:02:22.317702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.561 [2024-07-14 04:02:22.317728] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.561 qpair failed and we were unable to recover it. 00:30:03.561 [2024-07-14 04:02:22.317883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.561 [2024-07-14 04:02:22.318061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.561 [2024-07-14 04:02:22.318086] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.561 qpair failed and we were unable to recover it. 00:30:03.561 [2024-07-14 04:02:22.318237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.561 [2024-07-14 04:02:22.318423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.561 [2024-07-14 04:02:22.318448] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.561 qpair failed and we were unable to recover it. 00:30:03.561 [2024-07-14 04:02:22.318625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.561 [2024-07-14 04:02:22.318803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.561 [2024-07-14 04:02:22.318828] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.561 qpair failed and we were unable to recover it. 00:30:03.561 [2024-07-14 04:02:22.319037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.561 [2024-07-14 04:02:22.319235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.561 [2024-07-14 04:02:22.319266] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.561 qpair failed and we were unable to recover it. 00:30:03.561 [2024-07-14 04:02:22.319466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.561 [2024-07-14 04:02:22.319640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.561 [2024-07-14 04:02:22.319669] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.561 qpair failed and we were unable to recover it. 
00:30:03.561 [2024-07-14 04:02:22.319893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.561 [2024-07-14 04:02:22.320061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.561 [2024-07-14 04:02:22.320089] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.561 qpair failed and we were unable to recover it. 00:30:03.561 [2024-07-14 04:02:22.320283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.561 [2024-07-14 04:02:22.320454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.561 [2024-07-14 04:02:22.320494] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.561 qpair failed and we were unable to recover it. 00:30:03.561 [2024-07-14 04:02:22.320692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.561 [2024-07-14 04:02:22.320853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.561 [2024-07-14 04:02:22.320893] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.561 qpair failed and we were unable to recover it. 00:30:03.561 [2024-07-14 04:02:22.321089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.561 [2024-07-14 04:02:22.321259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.561 [2024-07-14 04:02:22.321288] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.561 qpair failed and we were unable to recover it. 00:30:03.561 [2024-07-14 04:02:22.321523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.561 [2024-07-14 04:02:22.321708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.561 [2024-07-14 04:02:22.321733] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.561 qpair failed and we were unable to recover it. 00:30:03.561 [2024-07-14 04:02:22.321889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.561 [2024-07-14 04:02:22.322072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.561 [2024-07-14 04:02:22.322098] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.561 qpair failed and we were unable to recover it. 00:30:03.561 [2024-07-14 04:02:22.322283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.561 [2024-07-14 04:02:22.322498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.561 [2024-07-14 04:02:22.322523] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.561 qpair failed and we were unable to recover it. 
00:30:03.561 [2024-07-14 04:02:22.322700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.561 [2024-07-14 04:02:22.322876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.561 [2024-07-14 04:02:22.322901] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.561 qpair failed and we were unable to recover it. 00:30:03.561 [2024-07-14 04:02:22.323077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.561 [2024-07-14 04:02:22.323258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.561 [2024-07-14 04:02:22.323283] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.561 qpair failed and we were unable to recover it. 00:30:03.561 [2024-07-14 04:02:22.323471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.561 [2024-07-14 04:02:22.323644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.561 [2024-07-14 04:02:22.323673] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.561 qpair failed and we were unable to recover it. 00:30:03.561 [2024-07-14 04:02:22.323913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.561 [2024-07-14 04:02:22.324118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.561 [2024-07-14 04:02:22.324158] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.561 qpair failed and we were unable to recover it. 00:30:03.561 [2024-07-14 04:02:22.324389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.561 [2024-07-14 04:02:22.324655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.561 [2024-07-14 04:02:22.324706] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.561 qpair failed and we were unable to recover it. 00:30:03.561 [2024-07-14 04:02:22.324944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.561 [2024-07-14 04:02:22.325170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.561 [2024-07-14 04:02:22.325200] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.561 qpair failed and we were unable to recover it. 00:30:03.561 [2024-07-14 04:02:22.325391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.561 [2024-07-14 04:02:22.325647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.561 [2024-07-14 04:02:22.325704] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.561 qpair failed and we were unable to recover it. 
00:30:03.561 [2024-07-14 04:02:22.325903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.561 [2024-07-14 04:02:22.326139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.561 [2024-07-14 04:02:22.326165] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.561 qpair failed and we were unable to recover it. 00:30:03.561 [2024-07-14 04:02:22.326336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.561 [2024-07-14 04:02:22.326509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.561 [2024-07-14 04:02:22.326537] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.561 qpair failed and we were unable to recover it. 00:30:03.561 [2024-07-14 04:02:22.326730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.561 [2024-07-14 04:02:22.326962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.561 [2024-07-14 04:02:22.326993] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.561 qpair failed and we were unable to recover it. 00:30:03.561 [2024-07-14 04:02:22.327182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.561 [2024-07-14 04:02:22.327408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.561 [2024-07-14 04:02:22.327459] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.561 qpair failed and we were unable to recover it. 00:30:03.561 [2024-07-14 04:02:22.327633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.561 [2024-07-14 04:02:22.327812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.561 [2024-07-14 04:02:22.327838] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.561 qpair failed and we were unable to recover it. 00:30:03.561 [2024-07-14 04:02:22.328074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.561 [2024-07-14 04:02:22.328282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.561 [2024-07-14 04:02:22.328310] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.561 qpair failed and we were unable to recover it. 00:30:03.561 [2024-07-14 04:02:22.328500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.561 [2024-07-14 04:02:22.328703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.561 [2024-07-14 04:02:22.328729] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.561 qpair failed and we were unable to recover it. 
00:30:03.562 [2024-07-14 04:02:22.328910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.562 [2024-07-14 04:02:22.329115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.562 [2024-07-14 04:02:22.329143] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.562 qpair failed and we were unable to recover it. 00:30:03.562 [2024-07-14 04:02:22.329312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.562 [2024-07-14 04:02:22.329509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.562 [2024-07-14 04:02:22.329537] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.562 qpair failed and we were unable to recover it. 00:30:03.562 [2024-07-14 04:02:22.329731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.562 [2024-07-14 04:02:22.329901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.562 [2024-07-14 04:02:22.329931] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.562 qpair failed and we were unable to recover it. 00:30:03.562 [2024-07-14 04:02:22.330127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.562 [2024-07-14 04:02:22.330314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.562 [2024-07-14 04:02:22.330341] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.562 qpair failed and we were unable to recover it. 00:30:03.562 [2024-07-14 04:02:22.330535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.562 [2024-07-14 04:02:22.330719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.562 [2024-07-14 04:02:22.330745] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.562 qpair failed and we were unable to recover it. 00:30:03.562 [2024-07-14 04:02:22.330972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.562 [2024-07-14 04:02:22.331169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.562 [2024-07-14 04:02:22.331202] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.562 qpair failed and we were unable to recover it. 00:30:03.562 [2024-07-14 04:02:22.331430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.562 [2024-07-14 04:02:22.331626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.562 [2024-07-14 04:02:22.331652] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.562 qpair failed and we were unable to recover it. 
00:30:03.562 [2024-07-14 04:02:22.331859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.562 [2024-07-14 04:02:22.332052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.562 [2024-07-14 04:02:22.332078] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.562 qpair failed and we were unable to recover it. 00:30:03.562 [2024-07-14 04:02:22.332258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.562 [2024-07-14 04:02:22.332439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.562 [2024-07-14 04:02:22.332465] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.562 qpair failed and we were unable to recover it. 00:30:03.562 [2024-07-14 04:02:22.332703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.562 [2024-07-14 04:02:22.332883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.562 [2024-07-14 04:02:22.332909] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.562 qpair failed and we were unable to recover it. 00:30:03.562 [2024-07-14 04:02:22.333138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.562 [2024-07-14 04:02:22.333304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.562 [2024-07-14 04:02:22.333332] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.562 qpair failed and we were unable to recover it. 00:30:03.562 [2024-07-14 04:02:22.333499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.562 [2024-07-14 04:02:22.333675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.562 [2024-07-14 04:02:22.333705] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.562 qpair failed and we were unable to recover it. 00:30:03.562 [2024-07-14 04:02:22.333929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.562 [2024-07-14 04:02:22.334102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.562 [2024-07-14 04:02:22.334130] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.562 qpair failed and we were unable to recover it. 00:30:03.562 [2024-07-14 04:02:22.334353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.562 [2024-07-14 04:02:22.334548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.562 [2024-07-14 04:02:22.334576] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.562 qpair failed and we were unable to recover it. 
00:30:03.562 [2024-07-14 04:02:22.334769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.562 [2024-07-14 04:02:22.334938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.562 [2024-07-14 04:02:22.334968] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.562 qpair failed and we were unable to recover it. 00:30:03.562 [2024-07-14 04:02:22.335189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.562 [2024-07-14 04:02:22.335384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.562 [2024-07-14 04:02:22.335417] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.562 qpair failed and we were unable to recover it. 00:30:03.562 [2024-07-14 04:02:22.335646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.562 [2024-07-14 04:02:22.335878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.562 [2024-07-14 04:02:22.335907] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.562 qpair failed and we were unable to recover it. 00:30:03.562 [2024-07-14 04:02:22.336129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.562 [2024-07-14 04:02:22.336327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.562 [2024-07-14 04:02:22.336355] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.562 qpair failed and we were unable to recover it. 00:30:03.562 [2024-07-14 04:02:22.336576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.562 [2024-07-14 04:02:22.336766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.562 [2024-07-14 04:02:22.336794] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.562 qpair failed and we were unable to recover it. 00:30:03.562 [2024-07-14 04:02:22.336992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.562 [2024-07-14 04:02:22.337219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.562 [2024-07-14 04:02:22.337245] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.562 qpair failed and we were unable to recover it. 00:30:03.562 [2024-07-14 04:02:22.337461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.562 [2024-07-14 04:02:22.337662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.562 [2024-07-14 04:02:22.337713] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.562 qpair failed and we were unable to recover it. 
00:30:03.562 [2024-07-14 04:02:22.337943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.562 [2024-07-14 04:02:22.338123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.562 [2024-07-14 04:02:22.338163] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.562 qpair failed and we were unable to recover it. 00:30:03.562 [2024-07-14 04:02:22.338367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.562 [2024-07-14 04:02:22.338591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.562 [2024-07-14 04:02:22.338619] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.562 qpair failed and we were unable to recover it. 00:30:03.562 [2024-07-14 04:02:22.338817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.562 [2024-07-14 04:02:22.339023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.562 [2024-07-14 04:02:22.339052] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.562 qpair failed and we were unable to recover it. 00:30:03.562 [2024-07-14 04:02:22.339282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.562 [2024-07-14 04:02:22.339437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.562 [2024-07-14 04:02:22.339462] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.562 qpair failed and we were unable to recover it. 00:30:03.562 [2024-07-14 04:02:22.339663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.562 [2024-07-14 04:02:22.339840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.562 [2024-07-14 04:02:22.339882] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.562 qpair failed and we were unable to recover it. 00:30:03.562 [2024-07-14 04:02:22.340066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.562 [2024-07-14 04:02:22.340285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.562 [2024-07-14 04:02:22.340313] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.562 qpair failed and we were unable to recover it. 00:30:03.562 [2024-07-14 04:02:22.340508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.562 [2024-07-14 04:02:22.340796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.562 [2024-07-14 04:02:22.340824] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.563 qpair failed and we were unable to recover it. 
00:30:03.563 [2024-07-14 04:02:22.341043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.563 [2024-07-14 04:02:22.341313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.563 [2024-07-14 04:02:22.341364] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.563 qpair failed and we were unable to recover it. 00:30:03.563 [2024-07-14 04:02:22.341558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.563 [2024-07-14 04:02:22.341752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.563 [2024-07-14 04:02:22.341780] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.563 qpair failed and we were unable to recover it. 00:30:03.563 [2024-07-14 04:02:22.341971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.563 [2024-07-14 04:02:22.342143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.563 [2024-07-14 04:02:22.342171] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.563 qpair failed and we were unable to recover it. 00:30:03.563 [2024-07-14 04:02:22.342354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.563 [2024-07-14 04:02:22.342609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.563 [2024-07-14 04:02:22.342637] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.563 qpair failed and we were unable to recover it. 00:30:03.563 [2024-07-14 04:02:22.342874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.563 [2024-07-14 04:02:22.343051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.563 [2024-07-14 04:02:22.343079] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.563 qpair failed and we were unable to recover it. 00:30:03.563 [2024-07-14 04:02:22.343247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.563 [2024-07-14 04:02:22.343442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.563 [2024-07-14 04:02:22.343471] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.563 qpair failed and we were unable to recover it. 00:30:03.563 [2024-07-14 04:02:22.343642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.563 [2024-07-14 04:02:22.343884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.563 [2024-07-14 04:02:22.343924] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.563 qpair failed and we were unable to recover it. 
00:30:03.563 [2024-07-14 04:02:22.344130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.563 [2024-07-14 04:02:22.344323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.563 [2024-07-14 04:02:22.344352] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.563 qpair failed and we were unable to recover it. 00:30:03.563 [2024-07-14 04:02:22.344787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.563 [2024-07-14 04:02:22.344993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.563 [2024-07-14 04:02:22.345022] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.563 qpair failed and we were unable to recover it. 00:30:03.563 [2024-07-14 04:02:22.345216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.563 [2024-07-14 04:02:22.345379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.563 [2024-07-14 04:02:22.345406] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.563 qpair failed and we were unable to recover it. 00:30:03.563 [2024-07-14 04:02:22.345607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.563 [2024-07-14 04:02:22.345776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.563 [2024-07-14 04:02:22.345806] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.563 qpair failed and we were unable to recover it. 00:30:03.563 [2024-07-14 04:02:22.346005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.563 [2024-07-14 04:02:22.346229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.563 [2024-07-14 04:02:22.346257] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.563 qpair failed and we were unable to recover it. 00:30:03.563 [2024-07-14 04:02:22.346484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.563 [2024-07-14 04:02:22.346702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.563 [2024-07-14 04:02:22.346727] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.563 qpair failed and we were unable to recover it. 00:30:03.563 [2024-07-14 04:02:22.346937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.563 [2024-07-14 04:02:22.347163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.563 [2024-07-14 04:02:22.347191] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.563 qpair failed and we were unable to recover it. 
00:30:03.563 [2024-07-14 04:02:22.347431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.563 [2024-07-14 04:02:22.347724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.563 [2024-07-14 04:02:22.347752] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.563 qpair failed and we were unable to recover it. 00:30:03.563 [2024-07-14 04:02:22.347947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.563 [2024-07-14 04:02:22.348124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.563 [2024-07-14 04:02:22.348167] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.563 qpair failed and we were unable to recover it. 00:30:03.563 [2024-07-14 04:02:22.348372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.563 [2024-07-14 04:02:22.348585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.563 [2024-07-14 04:02:22.348636] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.563 qpair failed and we were unable to recover it. 00:30:03.563 [2024-07-14 04:02:22.348828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.563 [2024-07-14 04:02:22.349063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.563 [2024-07-14 04:02:22.349092] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.563 qpair failed and we were unable to recover it. 00:30:03.563 [2024-07-14 04:02:22.349403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.563 [2024-07-14 04:02:22.349675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.563 [2024-07-14 04:02:22.349704] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.563 qpair failed and we were unable to recover it. 00:30:03.563 [2024-07-14 04:02:22.349916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.563 [2024-07-14 04:02:22.350074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.563 [2024-07-14 04:02:22.350099] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.563 qpair failed and we were unable to recover it. 00:30:03.563 [2024-07-14 04:02:22.350307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.563 [2024-07-14 04:02:22.350483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.563 [2024-07-14 04:02:22.350511] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.563 qpair failed and we were unable to recover it. 
00:30:03.563 [2024-07-14 04:02:22.350673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.563 [2024-07-14 04:02:22.350873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.563 [2024-07-14 04:02:22.350902] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.563 qpair failed and we were unable to recover it. 00:30:03.563 [2024-07-14 04:02:22.351093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.563 [2024-07-14 04:02:22.351256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.563 [2024-07-14 04:02:22.351284] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.563 qpair failed and we were unable to recover it. 00:30:03.563 [2024-07-14 04:02:22.351478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.563 [2024-07-14 04:02:22.351700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.563 [2024-07-14 04:02:22.351728] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.563 qpair failed and we were unable to recover it. 00:30:03.563 [2024-07-14 04:02:22.351926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.563 [2024-07-14 04:02:22.352099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.563 [2024-07-14 04:02:22.352126] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.563 qpair failed and we were unable to recover it. 00:30:03.563 [2024-07-14 04:02:22.352297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.563 [2024-07-14 04:02:22.352491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.563 [2024-07-14 04:02:22.352519] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.563 qpair failed and we were unable to recover it. 00:30:03.563 [2024-07-14 04:02:22.352693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.563 [2024-07-14 04:02:22.352862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.563 [2024-07-14 04:02:22.352897] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.563 qpair failed and we were unable to recover it. 00:30:03.563 [2024-07-14 04:02:22.353107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.563 [2024-07-14 04:02:22.353266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.563 [2024-07-14 04:02:22.353295] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.563 qpair failed and we were unable to recover it. 
00:30:03.564 [2024-07-14 04:02:22.353470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.564 [2024-07-14 04:02:22.353677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.564 [2024-07-14 04:02:22.353702] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.564 qpair failed and we were unable to recover it. 00:30:03.564 [2024-07-14 04:02:22.353927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.564 [2024-07-14 04:02:22.354133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.564 [2024-07-14 04:02:22.354159] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.564 qpair failed and we were unable to recover it. 00:30:03.564 [2024-07-14 04:02:22.354334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.564 [2024-07-14 04:02:22.354563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.564 [2024-07-14 04:02:22.354590] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.564 qpair failed and we were unable to recover it. 00:30:03.564 [2024-07-14 04:02:22.354786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.564 [2024-07-14 04:02:22.354980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.564 [2024-07-14 04:02:22.355008] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.564 qpair failed and we were unable to recover it. 00:30:03.564 [2024-07-14 04:02:22.355185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.564 [2024-07-14 04:02:22.355331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.564 [2024-07-14 04:02:22.355356] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.564 qpair failed and we were unable to recover it. 00:30:03.564 [2024-07-14 04:02:22.355591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.564 [2024-07-14 04:02:22.355813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.564 [2024-07-14 04:02:22.355841] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.564 qpair failed and we were unable to recover it. 00:30:03.564 [2024-07-14 04:02:22.356080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.564 [2024-07-14 04:02:22.356241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.564 [2024-07-14 04:02:22.356266] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.564 qpair failed and we were unable to recover it. 
00:30:03.564 [2024-07-14 04:02:22.356448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.564 [2024-07-14 04:02:22.356646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.564 [2024-07-14 04:02:22.356674] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.564 qpair failed and we were unable to recover it. 00:30:03.564 [2024-07-14 04:02:22.356876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.564 [2024-07-14 04:02:22.357058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.564 [2024-07-14 04:02:22.357083] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.564 qpair failed and we were unable to recover it. 00:30:03.564 [2024-07-14 04:02:22.357234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.564 [2024-07-14 04:02:22.357383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.564 [2024-07-14 04:02:22.357408] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.564 qpair failed and we were unable to recover it. 00:30:03.564 [2024-07-14 04:02:22.357641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.564 [2024-07-14 04:02:22.357839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.564 [2024-07-14 04:02:22.357880] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.564 qpair failed and we were unable to recover it. 00:30:03.564 [2024-07-14 04:02:22.358066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.564 [2024-07-14 04:02:22.358243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.564 [2024-07-14 04:02:22.358268] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.564 qpair failed and we were unable to recover it. 00:30:03.564 [2024-07-14 04:02:22.358427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.564 [2024-07-14 04:02:22.358596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.564 [2024-07-14 04:02:22.358624] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.564 qpair failed and we were unable to recover it. 00:30:03.564 [2024-07-14 04:02:22.358821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.564 [2024-07-14 04:02:22.359030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.564 [2024-07-14 04:02:22.359061] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.564 qpair failed and we were unable to recover it. 
00:30:03.564 [2024-07-14 04:02:22.359254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.564 [2024-07-14 04:02:22.359416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.564 [2024-07-14 04:02:22.359446] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.564 qpair failed and we were unable to recover it. 00:30:03.564 [2024-07-14 04:02:22.359641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.564 [2024-07-14 04:02:22.359837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.564 [2024-07-14 04:02:22.359873] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.564 qpair failed and we were unable to recover it. 00:30:03.564 [2024-07-14 04:02:22.360050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.564 [2024-07-14 04:02:22.360246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.564 [2024-07-14 04:02:22.360275] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.564 qpair failed and we were unable to recover it. 00:30:03.564 [2024-07-14 04:02:22.360469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.564 [2024-07-14 04:02:22.360667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.564 [2024-07-14 04:02:22.360695] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.564 qpair failed and we were unable to recover it. 00:30:03.564 [2024-07-14 04:02:22.360895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.564 [2024-07-14 04:02:22.361099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.564 [2024-07-14 04:02:22.361124] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.564 qpair failed and we were unable to recover it. 00:30:03.564 [2024-07-14 04:02:22.361306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.564 [2024-07-14 04:02:22.361455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.564 [2024-07-14 04:02:22.361482] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.564 qpair failed and we were unable to recover it. 00:30:03.564 [2024-07-14 04:02:22.361671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.564 [2024-07-14 04:02:22.361876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.564 [2024-07-14 04:02:22.361905] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.564 qpair failed and we were unable to recover it. 
00:30:03.564 [2024-07-14 04:02:22.362097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.564 [2024-07-14 04:02:22.362298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.564 [2024-07-14 04:02:22.362326] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.564 qpair failed and we were unable to recover it. 00:30:03.564 [2024-07-14 04:02:22.362498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.564 [2024-07-14 04:02:22.362728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.564 [2024-07-14 04:02:22.362753] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.564 qpair failed and we were unable to recover it. 00:30:03.564 [2024-07-14 04:02:22.362961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.564 [2024-07-14 04:02:22.363168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.564 [2024-07-14 04:02:22.363193] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.564 qpair failed and we were unable to recover it. 00:30:03.564 [2024-07-14 04:02:22.363371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.564 [2024-07-14 04:02:22.363720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.564 [2024-07-14 04:02:22.363777] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.564 qpair failed and we were unable to recover it. 00:30:03.564 [2024-07-14 04:02:22.364009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.564 [2024-07-14 04:02:22.364190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.564 [2024-07-14 04:02:22.364215] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.564 qpair failed and we were unable to recover it. 00:30:03.564 [2024-07-14 04:02:22.364366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.564 [2024-07-14 04:02:22.364516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.564 [2024-07-14 04:02:22.364541] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.564 qpair failed and we were unable to recover it. 00:30:03.564 [2024-07-14 04:02:22.364686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.564 [2024-07-14 04:02:22.364874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.564 [2024-07-14 04:02:22.364900] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.564 qpair failed and we were unable to recover it. 
00:30:03.564 [2024-07-14 04:02:22.365053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.564 [2024-07-14 04:02:22.365229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.564 [2024-07-14 04:02:22.365259] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.564 qpair failed and we were unable to recover it. 00:30:03.564 [2024-07-14 04:02:22.365456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.564 [2024-07-14 04:02:22.365657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.564 [2024-07-14 04:02:22.365685] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.564 qpair failed and we were unable to recover it. 00:30:03.564 [2024-07-14 04:02:22.365895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.565 [2024-07-14 04:02:22.366069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.565 [2024-07-14 04:02:22.366098] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.565 qpair failed and we were unable to recover it. 00:30:03.565 [2024-07-14 04:02:22.366300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.565 [2024-07-14 04:02:22.366498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.565 [2024-07-14 04:02:22.366525] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.565 qpair failed and we were unable to recover it. 00:30:03.565 [2024-07-14 04:02:22.366749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.565 [2024-07-14 04:02:22.366920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.565 [2024-07-14 04:02:22.366948] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.565 qpair failed and we were unable to recover it. 00:30:03.565 [2024-07-14 04:02:22.367142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.565 [2024-07-14 04:02:22.367340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.565 [2024-07-14 04:02:22.367368] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.565 qpair failed and we were unable to recover it. 00:30:03.565 [2024-07-14 04:02:22.367569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.565 [2024-07-14 04:02:22.367752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.565 [2024-07-14 04:02:22.367778] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.565 qpair failed and we were unable to recover it. 
00:30:03.565 [2024-07-14 04:02:22.368021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.565 [2024-07-14 04:02:22.368214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.565 [2024-07-14 04:02:22.368244] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.565 qpair failed and we were unable to recover it. 00:30:03.565 [2024-07-14 04:02:22.368443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.565 [2024-07-14 04:02:22.368599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.565 [2024-07-14 04:02:22.368625] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.565 qpair failed and we were unable to recover it. 00:30:03.565 [2024-07-14 04:02:22.368871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.565 [2024-07-14 04:02:22.369064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.565 [2024-07-14 04:02:22.369092] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.565 qpair failed and we were unable to recover it. 00:30:03.565 [2024-07-14 04:02:22.369470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.565 [2024-07-14 04:02:22.369819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.565 [2024-07-14 04:02:22.369877] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.565 qpair failed and we were unable to recover it. 00:30:03.565 [2024-07-14 04:02:22.370103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.565 [2024-07-14 04:02:22.370391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.565 [2024-07-14 04:02:22.370452] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.565 qpair failed and we were unable to recover it. 00:30:03.565 [2024-07-14 04:02:22.370682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.565 [2024-07-14 04:02:22.370854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.565 [2024-07-14 04:02:22.370891] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.565 qpair failed and we were unable to recover it. 00:30:03.565 [2024-07-14 04:02:22.371071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.565 [2024-07-14 04:02:22.371252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.565 [2024-07-14 04:02:22.371281] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.565 qpair failed and we were unable to recover it. 
00:30:03.565 [2024-07-14 04:02:22.371482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.565 [2024-07-14 04:02:22.371685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.565 [2024-07-14 04:02:22.371710] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.565 qpair failed and we were unable to recover it. 00:30:03.565 [2024-07-14 04:02:22.371888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.565 [2024-07-14 04:02:22.372082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.565 [2024-07-14 04:02:22.372111] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.565 qpair failed and we were unable to recover it. 00:30:03.565 [2024-07-14 04:02:22.372279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.565 [2024-07-14 04:02:22.372436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.565 [2024-07-14 04:02:22.372463] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.565 qpair failed and we were unable to recover it. 00:30:03.565 [2024-07-14 04:02:22.372692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.565 [2024-07-14 04:02:22.372890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.565 [2024-07-14 04:02:22.372919] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.565 qpair failed and we were unable to recover it. 00:30:03.565 [2024-07-14 04:02:22.373122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.565 [2024-07-14 04:02:22.373319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.565 [2024-07-14 04:02:22.373346] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.565 qpair failed and we were unable to recover it. 00:30:03.565 [2024-07-14 04:02:22.373537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.565 [2024-07-14 04:02:22.373701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.565 [2024-07-14 04:02:22.373729] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.565 qpair failed and we were unable to recover it. 00:30:03.565 [2024-07-14 04:02:22.373930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.565 [2024-07-14 04:02:22.374087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.565 [2024-07-14 04:02:22.374113] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.565 qpair failed and we were unable to recover it. 
00:30:03.565 [2024-07-14 04:02:22.374315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.565 [2024-07-14 04:02:22.374488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.565 [2024-07-14 04:02:22.374516] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.565 qpair failed and we were unable to recover it. 00:30:03.565 [2024-07-14 04:02:22.374720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.565 [2024-07-14 04:02:22.374916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.565 [2024-07-14 04:02:22.374946] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.565 qpair failed and we were unable to recover it. 00:30:03.565 [2024-07-14 04:02:22.375170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.565 [2024-07-14 04:02:22.375332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.565 [2024-07-14 04:02:22.375362] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.565 qpair failed and we were unable to recover it. 00:30:03.565 [2024-07-14 04:02:22.375545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.565 [2024-07-14 04:02:22.375747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.565 [2024-07-14 04:02:22.375772] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.565 qpair failed and we were unable to recover it. 00:30:03.565 [2024-07-14 04:02:22.375956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.565 [2024-07-14 04:02:22.376180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.565 [2024-07-14 04:02:22.376208] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.565 qpair failed and we were unable to recover it. 00:30:03.565 [2024-07-14 04:02:22.376525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.565 [2024-07-14 04:02:22.376755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.565 [2024-07-14 04:02:22.376783] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.565 qpair failed and we were unable to recover it. 00:30:03.565 [2024-07-14 04:02:22.376983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.565 [2024-07-14 04:02:22.377139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.565 [2024-07-14 04:02:22.377164] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.565 qpair failed and we were unable to recover it. 
00:30:03.565 [2024-07-14 04:02:22.377370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.565 [2024-07-14 04:02:22.377666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.565 [2024-07-14 04:02:22.377725] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.565 qpair failed and we were unable to recover it. 00:30:03.565 [2024-07-14 04:02:22.377960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.565 [2024-07-14 04:02:22.378188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.565 [2024-07-14 04:02:22.378216] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.565 qpair failed and we were unable to recover it. 00:30:03.565 [2024-07-14 04:02:22.378547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.565 [2024-07-14 04:02:22.378760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.565 [2024-07-14 04:02:22.378790] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.565 qpair failed and we were unable to recover it. 00:30:03.565 [2024-07-14 04:02:22.378992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.565 [2024-07-14 04:02:22.379207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.565 [2024-07-14 04:02:22.379258] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.565 qpair failed and we were unable to recover it. 00:30:03.565 [2024-07-14 04:02:22.379574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.565 [2024-07-14 04:02:22.379792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.566 [2024-07-14 04:02:22.379820] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.566 qpair failed and we were unable to recover it. 00:30:03.566 [2024-07-14 04:02:22.380059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.566 [2024-07-14 04:02:22.380256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.566 [2024-07-14 04:02:22.380283] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.566 qpair failed and we were unable to recover it. 00:30:03.566 [2024-07-14 04:02:22.380482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.566 [2024-07-14 04:02:22.380655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.566 [2024-07-14 04:02:22.380683] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.566 qpair failed and we were unable to recover it. 
00:30:03.566 [2024-07-14 04:02:22.380855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.566 [2024-07-14 04:02:22.381089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.566 [2024-07-14 04:02:22.381117] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.566 qpair failed and we were unable to recover it. 00:30:03.566 [2024-07-14 04:02:22.381290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.566 [2024-07-14 04:02:22.381504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.566 [2024-07-14 04:02:22.381544] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.566 qpair failed and we were unable to recover it. 00:30:03.566 [2024-07-14 04:02:22.381740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.566 [2024-07-14 04:02:22.381969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.566 [2024-07-14 04:02:22.381998] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.566 qpair failed and we were unable to recover it. 00:30:03.566 [2024-07-14 04:02:22.382278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.566 [2024-07-14 04:02:22.382589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.566 [2024-07-14 04:02:22.382647] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.566 qpair failed and we were unable to recover it. 00:30:03.566 [2024-07-14 04:02:22.382851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.566 [2024-07-14 04:02:22.383050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.566 [2024-07-14 04:02:22.383079] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.566 qpair failed and we were unable to recover it. 00:30:03.566 [2024-07-14 04:02:22.383255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.566 [2024-07-14 04:02:22.383501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.566 [2024-07-14 04:02:22.383553] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.566 qpair failed and we were unable to recover it. 00:30:03.566 [2024-07-14 04:02:22.383723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.566 [2024-07-14 04:02:22.383916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.566 [2024-07-14 04:02:22.383945] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.566 qpair failed and we were unable to recover it. 
00:30:03.566 [2024-07-14 04:02:22.384141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.566 [2024-07-14 04:02:22.384347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.566 [2024-07-14 04:02:22.384373] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.566 qpair failed and we were unable to recover it. 00:30:03.566 [2024-07-14 04:02:22.384581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.566 [2024-07-14 04:02:22.384751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.566 [2024-07-14 04:02:22.384778] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.566 qpair failed and we were unable to recover it. 00:30:03.566 [2024-07-14 04:02:22.384960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.566 [2024-07-14 04:02:22.385146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.566 [2024-07-14 04:02:22.385172] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.566 qpair failed and we were unable to recover it. 00:30:03.566 [2024-07-14 04:02:22.385433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.566 [2024-07-14 04:02:22.385616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.566 [2024-07-14 04:02:22.385655] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.566 qpair failed and we were unable to recover it. 00:30:03.566 [2024-07-14 04:02:22.385875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.566 [2024-07-14 04:02:22.386040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.566 [2024-07-14 04:02:22.386067] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.566 qpair failed and we were unable to recover it. 00:30:03.566 [2024-07-14 04:02:22.386287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.566 [2024-07-14 04:02:22.386460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.566 [2024-07-14 04:02:22.386490] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.566 qpair failed and we were unable to recover it. 00:30:03.566 [2024-07-14 04:02:22.386664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.566 [2024-07-14 04:02:22.386877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.566 [2024-07-14 04:02:22.386919] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.566 qpair failed and we were unable to recover it. 
00:30:03.566 [2024-07-14 04:02:22.387107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.566 [2024-07-14 04:02:22.387330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.566 [2024-07-14 04:02:22.387358] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.566 qpair failed and we were unable to recover it. 00:30:03.566 [2024-07-14 04:02:22.387603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.566 [2024-07-14 04:02:22.387815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.566 [2024-07-14 04:02:22.387845] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.566 qpair failed and we were unable to recover it. 00:30:03.566 [2024-07-14 04:02:22.388063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.566 [2024-07-14 04:02:22.388390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.566 [2024-07-14 04:02:22.388452] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.566 qpair failed and we were unable to recover it. 00:30:03.566 [2024-07-14 04:02:22.388666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.566 [2024-07-14 04:02:22.388883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.566 [2024-07-14 04:02:22.388914] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.566 qpair failed and we were unable to recover it. 00:30:03.566 [2024-07-14 04:02:22.389091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.566 [2024-07-14 04:02:22.389284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.566 [2024-07-14 04:02:22.389312] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.566 qpair failed and we were unable to recover it. 00:30:03.566 [2024-07-14 04:02:22.389503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.566 [2024-07-14 04:02:22.389715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.566 [2024-07-14 04:02:22.389743] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.566 qpair failed and we were unable to recover it. 00:30:03.566 [2024-07-14 04:02:22.389970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.566 [2024-07-14 04:02:22.390120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.566 [2024-07-14 04:02:22.390161] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.566 qpair failed and we were unable to recover it. 
00:30:03.566 [2024-07-14 04:02:22.390465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.566 [2024-07-14 04:02:22.390669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.566 [2024-07-14 04:02:22.390697] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.566 qpair failed and we were unable to recover it. 00:30:03.566 [2024-07-14 04:02:22.390898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.566 [2024-07-14 04:02:22.391076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.566 [2024-07-14 04:02:22.391101] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.566 qpair failed and we were unable to recover it. 00:30:03.566 [2024-07-14 04:02:22.391329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.566 [2024-07-14 04:02:22.391523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.566 [2024-07-14 04:02:22.391551] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.566 qpair failed and we were unable to recover it. 00:30:03.566 [2024-07-14 04:02:22.391718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.566 [2024-07-14 04:02:22.391919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.566 [2024-07-14 04:02:22.391947] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.566 qpair failed and we were unable to recover it. 00:30:03.566 [2024-07-14 04:02:22.392120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.566 [2024-07-14 04:02:22.392311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.566 [2024-07-14 04:02:22.392337] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.566 qpair failed and we were unable to recover it. 00:30:03.566 [2024-07-14 04:02:22.392537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.566 [2024-07-14 04:02:22.392765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.566 [2024-07-14 04:02:22.392793] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.566 qpair failed and we were unable to recover it. 00:30:03.566 [2024-07-14 04:02:22.393061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.566 [2024-07-14 04:02:22.393408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.566 [2024-07-14 04:02:22.393472] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.566 qpair failed and we were unable to recover it. 
00:30:03.567 [2024-07-14 04:02:22.393663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.567 [2024-07-14 04:02:22.393872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.567 [2024-07-14 04:02:22.393898] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.567 qpair failed and we were unable to recover it. 00:30:03.567 [2024-07-14 04:02:22.394105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.567 [2024-07-14 04:02:22.394401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.567 [2024-07-14 04:02:22.394470] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.567 qpair failed and we were unable to recover it. 00:30:03.567 [2024-07-14 04:02:22.394667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.567 [2024-07-14 04:02:22.394859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.567 [2024-07-14 04:02:22.394895] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.567 qpair failed and we were unable to recover it. 00:30:03.567 [2024-07-14 04:02:22.395096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.567 [2024-07-14 04:02:22.395318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.567 [2024-07-14 04:02:22.395345] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.567 qpair failed and we were unable to recover it. 00:30:03.567 [2024-07-14 04:02:22.395512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.567 [2024-07-14 04:02:22.395817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.567 [2024-07-14 04:02:22.395889] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.567 qpair failed and we were unable to recover it. 00:30:03.567 [2024-07-14 04:02:22.396139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.567 [2024-07-14 04:02:22.396562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.567 [2024-07-14 04:02:22.396611] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.567 qpair failed and we were unable to recover it. 00:30:03.567 [2024-07-14 04:02:22.396841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.567 [2024-07-14 04:02:22.397051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.567 [2024-07-14 04:02:22.397080] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.567 qpair failed and we were unable to recover it. 
00:30:03.567 [2024-07-14 04:02:22.397252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.567 [2024-07-14 04:02:22.397476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.567 [2024-07-14 04:02:22.397503] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.567 qpair failed and we were unable to recover it. 00:30:03.567 [2024-07-14 04:02:22.397698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.567 [2024-07-14 04:02:22.397896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.567 [2024-07-14 04:02:22.397925] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.567 qpair failed and we were unable to recover it. 00:30:03.567 [2024-07-14 04:02:22.398090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.567 [2024-07-14 04:02:22.398243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.567 [2024-07-14 04:02:22.398288] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.567 qpair failed and we were unable to recover it. 00:30:03.567 [2024-07-14 04:02:22.398486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.567 [2024-07-14 04:02:22.398703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.567 [2024-07-14 04:02:22.398731] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.567 qpair failed and we were unable to recover it. 00:30:03.567 [2024-07-14 04:02:22.398884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.567 [2024-07-14 04:02:22.399040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.567 [2024-07-14 04:02:22.399065] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.567 qpair failed and we were unable to recover it. 00:30:03.567 [2024-07-14 04:02:22.399294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.567 [2024-07-14 04:02:22.399489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.567 [2024-07-14 04:02:22.399519] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.567 qpair failed and we were unable to recover it. 00:30:03.567 [2024-07-14 04:02:22.399719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.567 [2024-07-14 04:02:22.399895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.567 [2024-07-14 04:02:22.399924] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.567 qpair failed and we were unable to recover it. 
00:30:03.567 [2024-07-14 04:02:22.400125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.567 [2024-07-14 04:02:22.400350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.567 [2024-07-14 04:02:22.400379] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.567 qpair failed and we were unable to recover it. 00:30:03.567 [2024-07-14 04:02:22.400605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.567 [2024-07-14 04:02:22.400777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.567 [2024-07-14 04:02:22.400807] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.567 qpair failed and we were unable to recover it. 00:30:03.567 [2024-07-14 04:02:22.401012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.567 [2024-07-14 04:02:22.401235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.567 [2024-07-14 04:02:22.401263] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.567 qpair failed and we were unable to recover it. 00:30:03.567 [2024-07-14 04:02:22.401473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.567 [2024-07-14 04:02:22.401675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.567 [2024-07-14 04:02:22.401709] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.567 qpair failed and we were unable to recover it. 00:30:03.567 [2024-07-14 04:02:22.401907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.567 [2024-07-14 04:02:22.402075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.567 [2024-07-14 04:02:22.402105] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.567 qpair failed and we were unable to recover it. 00:30:03.567 [2024-07-14 04:02:22.402310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.567 [2024-07-14 04:02:22.402513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.567 [2024-07-14 04:02:22.402546] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.567 qpair failed and we were unable to recover it. 00:30:03.567 [2024-07-14 04:02:22.402744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.567 [2024-07-14 04:02:22.402918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.567 [2024-07-14 04:02:22.402943] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.567 qpair failed and we were unable to recover it. 
00:30:03.567 [2024-07-14 04:02:22.403154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.567 [2024-07-14 04:02:22.403326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.567 [2024-07-14 04:02:22.403353] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.567 qpair failed and we were unable to recover it. 00:30:03.567 [2024-07-14 04:02:22.403512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.567 [2024-07-14 04:02:22.403707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.567 [2024-07-14 04:02:22.403735] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.567 qpair failed and we were unable to recover it. 00:30:03.567 [2024-07-14 04:02:22.403915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.567 [2024-07-14 04:02:22.404136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.567 [2024-07-14 04:02:22.404164] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.567 qpair failed and we were unable to recover it. 00:30:03.567 [2024-07-14 04:02:22.404367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.567 [2024-07-14 04:02:22.404639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.567 [2024-07-14 04:02:22.404690] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.567 qpair failed and we were unable to recover it. 00:30:03.567 [2024-07-14 04:02:22.404922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.567 [2024-07-14 04:02:22.405117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.567 [2024-07-14 04:02:22.405145] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.567 qpair failed and we were unable to recover it. 00:30:03.568 [2024-07-14 04:02:22.405341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.568 [2024-07-14 04:02:22.405563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.568 [2024-07-14 04:02:22.405588] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.568 qpair failed and we were unable to recover it. 00:30:03.568 [2024-07-14 04:02:22.405781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.568 [2024-07-14 04:02:22.406039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.568 [2024-07-14 04:02:22.406069] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.568 qpair failed and we were unable to recover it. 
00:30:03.568 [2024-07-14 04:02:22.406277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.568 [2024-07-14 04:02:22.406505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.568 [2024-07-14 04:02:22.406531] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.568 qpair failed and we were unable to recover it. 00:30:03.568 [2024-07-14 04:02:22.406714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.568 [2024-07-14 04:02:22.406945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.568 [2024-07-14 04:02:22.406981] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.568 qpair failed and we were unable to recover it. 00:30:03.568 [2024-07-14 04:02:22.407175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.568 [2024-07-14 04:02:22.407337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.568 [2024-07-14 04:02:22.407364] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.568 qpair failed and we were unable to recover it. 00:30:03.568 [2024-07-14 04:02:22.407524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.568 [2024-07-14 04:02:22.407714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.568 [2024-07-14 04:02:22.407744] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.568 qpair failed and we were unable to recover it. 00:30:03.568 [2024-07-14 04:02:22.407951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.568 [2024-07-14 04:02:22.408123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.568 [2024-07-14 04:02:22.408152] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.568 qpair failed and we were unable to recover it. 00:30:03.568 [2024-07-14 04:02:22.408352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.568 [2024-07-14 04:02:22.408546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.568 [2024-07-14 04:02:22.408574] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.568 qpair failed and we were unable to recover it. 00:30:03.568 [2024-07-14 04:02:22.408789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.568 [2024-07-14 04:02:22.408967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.568 [2024-07-14 04:02:22.408993] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.568 qpair failed and we were unable to recover it. 
00:30:03.568 [2024-07-14 04:02:22.409176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.568 [2024-07-14 04:02:22.409376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.568 [2024-07-14 04:02:22.409405] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.568 qpair failed and we were unable to recover it. 00:30:03.568 [2024-07-14 04:02:22.409596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.568 [2024-07-14 04:02:22.409791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.568 [2024-07-14 04:02:22.409818] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.568 qpair failed and we were unable to recover it. 00:30:03.568 [2024-07-14 04:02:22.410029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.568 [2024-07-14 04:02:22.410229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.568 [2024-07-14 04:02:22.410257] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.568 qpair failed and we were unable to recover it. 00:30:03.568 [2024-07-14 04:02:22.410489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.568 [2024-07-14 04:02:22.410714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.568 [2024-07-14 04:02:22.410742] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.568 qpair failed and we were unable to recover it. 00:30:03.568 [2024-07-14 04:02:22.410936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.568 [2024-07-14 04:02:22.411083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.568 [2024-07-14 04:02:22.411108] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.568 qpair failed and we were unable to recover it. 00:30:03.568 [2024-07-14 04:02:22.411320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.568 [2024-07-14 04:02:22.411519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.568 [2024-07-14 04:02:22.411547] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.568 qpair failed and we were unable to recover it. 00:30:03.568 [2024-07-14 04:02:22.411771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.568 [2024-07-14 04:02:22.411981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.568 [2024-07-14 04:02:22.412006] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.568 qpair failed and we were unable to recover it. 
00:30:03.568 [2024-07-14 04:02:22.412191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.568 [2024-07-14 04:02:22.412391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.568 [2024-07-14 04:02:22.412419] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.568 qpair failed and we were unable to recover it. 00:30:03.568 [2024-07-14 04:02:22.412618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.568 [2024-07-14 04:02:22.412819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.568 [2024-07-14 04:02:22.412847] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.568 qpair failed and we were unable to recover it. 00:30:03.568 [2024-07-14 04:02:22.413031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.568 [2024-07-14 04:02:22.413203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.568 [2024-07-14 04:02:22.413232] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.568 qpair failed and we were unable to recover it. 00:30:03.568 [2024-07-14 04:02:22.413426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.568 [2024-07-14 04:02:22.413735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.568 [2024-07-14 04:02:22.413795] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.568 qpair failed and we were unable to recover it. 00:30:03.568 [2024-07-14 04:02:22.414018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.568 [2024-07-14 04:02:22.414202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.568 [2024-07-14 04:02:22.414228] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.568 qpair failed and we were unable to recover it. 00:30:03.568 [2024-07-14 04:02:22.414418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.568 [2024-07-14 04:02:22.414649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.568 [2024-07-14 04:02:22.414677] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.568 qpair failed and we were unable to recover it. 00:30:03.568 [2024-07-14 04:02:22.414889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.568 [2024-07-14 04:02:22.415121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.568 [2024-07-14 04:02:22.415149] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.568 qpair failed and we were unable to recover it. 
00:30:03.568 [2024-07-14 04:02:22.415356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.568 [2024-07-14 04:02:22.415626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.568 [2024-07-14 04:02:22.415677] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.568 qpair failed and we were unable to recover it. 00:30:03.568 [2024-07-14 04:02:22.415878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.568 [2024-07-14 04:02:22.416048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.568 [2024-07-14 04:02:22.416079] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.568 qpair failed and we were unable to recover it. 00:30:03.568 [2024-07-14 04:02:22.416281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.568 [2024-07-14 04:02:22.416474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.568 [2024-07-14 04:02:22.416503] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.568 qpair failed and we were unable to recover it. 00:30:03.568 [2024-07-14 04:02:22.416704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.568 [2024-07-14 04:02:22.416897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.568 [2024-07-14 04:02:22.416926] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.568 qpair failed and we were unable to recover it. 00:30:03.568 [2024-07-14 04:02:22.417129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.568 [2024-07-14 04:02:22.417325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.568 [2024-07-14 04:02:22.417353] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.568 qpair failed and we were unable to recover it. 00:30:03.568 [2024-07-14 04:02:22.417547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.568 [2024-07-14 04:02:22.417749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.568 [2024-07-14 04:02:22.417777] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.568 qpair failed and we were unable to recover it. 00:30:03.568 [2024-07-14 04:02:22.417997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.568 [2024-07-14 04:02:22.418196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.568 [2024-07-14 04:02:22.418224] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.568 qpair failed and we were unable to recover it. 
00:30:03.568 [2024-07-14 04:02:22.418448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.569 [2024-07-14 04:02:22.418635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.569 [2024-07-14 04:02:22.418663] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.569 qpair failed and we were unable to recover it. 00:30:03.569 [2024-07-14 04:02:22.418860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.569 [2024-07-14 04:02:22.419047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.569 [2024-07-14 04:02:22.419072] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.569 qpair failed and we were unable to recover it. 00:30:03.569 [2024-07-14 04:02:22.419269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.569 [2024-07-14 04:02:22.419462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.569 [2024-07-14 04:02:22.419489] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.569 qpair failed and we were unable to recover it. 00:30:03.569 [2024-07-14 04:02:22.419679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.569 [2024-07-14 04:02:22.419884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.569 [2024-07-14 04:02:22.419915] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.569 qpair failed and we were unable to recover it. 00:30:03.569 [2024-07-14 04:02:22.420132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.569 [2024-07-14 04:02:22.420369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.569 [2024-07-14 04:02:22.420394] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.569 qpair failed and we were unable to recover it. 00:30:03.569 [2024-07-14 04:02:22.420547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.569 [2024-07-14 04:02:22.420748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.569 [2024-07-14 04:02:22.420776] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.569 qpair failed and we were unable to recover it. 00:30:03.569 [2024-07-14 04:02:22.420978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.569 [2024-07-14 04:02:22.421173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.569 [2024-07-14 04:02:22.421201] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.569 qpair failed and we were unable to recover it. 
00:30:03.569 [2024-07-14 04:02:22.421365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.569 [2024-07-14 04:02:22.421564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.569 [2024-07-14 04:02:22.421590] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.569 qpair failed and we were unable to recover it. 00:30:03.569 [2024-07-14 04:02:22.421767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.569 [2024-07-14 04:02:22.421954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.569 [2024-07-14 04:02:22.421984] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.569 qpair failed and we were unable to recover it. 00:30:03.569 [2024-07-14 04:02:22.422193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.569 [2024-07-14 04:02:22.422346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.569 [2024-07-14 04:02:22.422375] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.569 qpair failed and we were unable to recover it. 00:30:03.569 [2024-07-14 04:02:22.422586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.569 [2024-07-14 04:02:22.422750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.569 [2024-07-14 04:02:22.422779] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.569 qpair failed and we were unable to recover it. 00:30:03.569 [2024-07-14 04:02:22.422978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.569 [2024-07-14 04:02:22.423171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.569 [2024-07-14 04:02:22.423199] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.569 qpair failed and we were unable to recover it. 00:30:03.569 [2024-07-14 04:02:22.423423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.569 [2024-07-14 04:02:22.423593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.569 [2024-07-14 04:02:22.423624] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.569 qpair failed and we were unable to recover it. 00:30:03.569 [2024-07-14 04:02:22.423806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.569 [2024-07-14 04:02:22.424019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.569 [2024-07-14 04:02:22.424050] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.569 qpair failed and we were unable to recover it. 
00:30:03.569 [2024-07-14 04:02:22.424255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.569 [2024-07-14 04:02:22.424430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.569 [2024-07-14 04:02:22.424460] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.569 qpair failed and we were unable to recover it. 00:30:03.569 [2024-07-14 04:02:22.424693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.569 [2024-07-14 04:02:22.424850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.569 [2024-07-14 04:02:22.424884] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.569 qpair failed and we were unable to recover it. 00:30:03.569 [2024-07-14 04:02:22.425035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.569 [2024-07-14 04:02:22.425229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.569 [2024-07-14 04:02:22.425260] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.569 qpair failed and we were unable to recover it. 00:30:03.569 [2024-07-14 04:02:22.425469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.569 [2024-07-14 04:02:22.425641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.569 [2024-07-14 04:02:22.425681] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.569 qpair failed and we were unable to recover it. 00:30:03.569 [2024-07-14 04:02:22.425898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.569 [2024-07-14 04:02:22.426122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.569 [2024-07-14 04:02:22.426166] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.569 qpair failed and we were unable to recover it. 00:30:03.569 [2024-07-14 04:02:22.426368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.569 [2024-07-14 04:02:22.426564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.569 [2024-07-14 04:02:22.426592] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.569 qpair failed and we were unable to recover it. 00:30:03.569 [2024-07-14 04:02:22.426759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.569 [2024-07-14 04:02:22.426953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.569 [2024-07-14 04:02:22.426984] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.569 qpair failed and we were unable to recover it. 
00:30:03.569 [2024-07-14 04:02:22.427213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.569 [2024-07-14 04:02:22.427411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.569 [2024-07-14 04:02:22.427439] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.569 qpair failed and we were unable to recover it. 00:30:03.569 [2024-07-14 04:02:22.427664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.569 [2024-07-14 04:02:22.427860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.569 [2024-07-14 04:02:22.427895] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.569 qpair failed and we were unable to recover it. 00:30:03.569 [2024-07-14 04:02:22.428129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.569 [2024-07-14 04:02:22.428324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.569 [2024-07-14 04:02:22.428349] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.569 qpair failed and we were unable to recover it. 00:30:03.569 [2024-07-14 04:02:22.428550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.569 [2024-07-14 04:02:22.428751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.569 [2024-07-14 04:02:22.428780] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.569 qpair failed and we were unable to recover it. 00:30:03.569 [2024-07-14 04:02:22.428986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.569 [2024-07-14 04:02:22.429185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.569 [2024-07-14 04:02:22.429216] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.569 qpair failed and we were unable to recover it. 00:30:03.569 [2024-07-14 04:02:22.429411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.569 [2024-07-14 04:02:22.429610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.569 [2024-07-14 04:02:22.429635] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.569 qpair failed and we were unable to recover it. 00:30:03.569 [2024-07-14 04:02:22.429792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.569 [2024-07-14 04:02:22.429940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.569 [2024-07-14 04:02:22.429967] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.569 qpair failed and we were unable to recover it. 
00:30:03.569 [2024-07-14 04:02:22.430164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.569 [2024-07-14 04:02:22.430359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.569 [2024-07-14 04:02:22.430386] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.569 qpair failed and we were unable to recover it. 00:30:03.569 [2024-07-14 04:02:22.430584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.569 [2024-07-14 04:02:22.430791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.569 [2024-07-14 04:02:22.430821] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.569 qpair failed and we were unable to recover it. 00:30:03.569 [2024-07-14 04:02:22.431058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.569 [2024-07-14 04:02:22.431220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.570 [2024-07-14 04:02:22.431248] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.570 qpair failed and we were unable to recover it. 00:30:03.570 [2024-07-14 04:02:22.431447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.570 [2024-07-14 04:02:22.431624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.570 [2024-07-14 04:02:22.431665] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.570 qpair failed and we were unable to recover it. 00:30:03.570 [2024-07-14 04:02:22.431901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.570 [2024-07-14 04:02:22.432095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.570 [2024-07-14 04:02:22.432123] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.570 qpair failed and we were unable to recover it. 00:30:03.570 [2024-07-14 04:02:22.432322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.570 [2024-07-14 04:02:22.432523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.570 [2024-07-14 04:02:22.432552] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.570 qpair failed and we were unable to recover it. 00:30:03.570 [2024-07-14 04:02:22.432713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.570 [2024-07-14 04:02:22.432888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.570 [2024-07-14 04:02:22.432913] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.570 qpair failed and we were unable to recover it. 
00:30:03.570 [2024-07-14 04:02:22.433098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.570 [2024-07-14 04:02:22.433272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.570 [2024-07-14 04:02:22.433297] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.570 qpair failed and we were unable to recover it. 00:30:03.570 [2024-07-14 04:02:22.433504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.570 [2024-07-14 04:02:22.433730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.570 [2024-07-14 04:02:22.433758] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.570 qpair failed and we were unable to recover it. 00:30:03.570 [2024-07-14 04:02:22.433992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.570 [2024-07-14 04:02:22.434248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.570 [2024-07-14 04:02:22.434299] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.570 qpair failed and we were unable to recover it. 00:30:03.570 [2024-07-14 04:02:22.434499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.570 [2024-07-14 04:02:22.434698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.570 [2024-07-14 04:02:22.434728] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.570 qpair failed and we were unable to recover it. 00:30:03.570 [2024-07-14 04:02:22.434928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.570 [2024-07-14 04:02:22.435090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.570 [2024-07-14 04:02:22.435118] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.570 qpair failed and we were unable to recover it. 00:30:03.570 [2024-07-14 04:02:22.435326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.570 [2024-07-14 04:02:22.435548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.570 [2024-07-14 04:02:22.435575] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.570 qpair failed and we were unable to recover it. 00:30:03.570 [2024-07-14 04:02:22.435774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.570 [2024-07-14 04:02:22.435950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.570 [2024-07-14 04:02:22.435979] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.570 qpair failed and we were unable to recover it. 
00:30:03.570 [2024-07-14 04:02:22.436150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.570 [2024-07-14 04:02:22.436351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.570 [2024-07-14 04:02:22.436379] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.570 qpair failed and we were unable to recover it. 00:30:03.570 [2024-07-14 04:02:22.436582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.570 [2024-07-14 04:02:22.436786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.570 [2024-07-14 04:02:22.436811] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.570 qpair failed and we were unable to recover it. 00:30:03.570 [2024-07-14 04:02:22.437008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.570 [2024-07-14 04:02:22.437216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.570 [2024-07-14 04:02:22.437245] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.570 qpair failed and we were unable to recover it. 00:30:03.570 [2024-07-14 04:02:22.437453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.570 [2024-07-14 04:02:22.437682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.570 [2024-07-14 04:02:22.437710] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.570 qpair failed and we were unable to recover it. 00:30:03.570 [2024-07-14 04:02:22.437922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.570 [2024-07-14 04:02:22.438077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.570 [2024-07-14 04:02:22.438102] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.570 qpair failed and we were unable to recover it. 00:30:03.570 [2024-07-14 04:02:22.438333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.570 [2024-07-14 04:02:22.438553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.570 [2024-07-14 04:02:22.438581] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.570 qpair failed and we were unable to recover it. 00:30:03.570 [2024-07-14 04:02:22.438800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.570 [2024-07-14 04:02:22.438994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.570 [2024-07-14 04:02:22.439023] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.570 qpair failed and we were unable to recover it. 
00:30:03.570 [2024-07-14 04:02:22.439207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.570 [2024-07-14 04:02:22.439383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.570 [2024-07-14 04:02:22.439424] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.570 qpair failed and we were unable to recover it. 00:30:03.570 [2024-07-14 04:02:22.439644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.570 [2024-07-14 04:02:22.439839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.570 [2024-07-14 04:02:22.439873] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.570 qpair failed and we were unable to recover it. 00:30:03.570 [2024-07-14 04:02:22.440077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.570 [2024-07-14 04:02:22.440271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.570 [2024-07-14 04:02:22.440299] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.570 qpair failed and we were unable to recover it. 00:30:03.570 [2024-07-14 04:02:22.440526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.570 [2024-07-14 04:02:22.440718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.570 [2024-07-14 04:02:22.440746] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.570 qpair failed and we were unable to recover it. 00:30:03.570 [2024-07-14 04:02:22.440973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.570 [2024-07-14 04:02:22.441178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.570 [2024-07-14 04:02:22.441207] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f953c000b90 with addr=10.0.0.2, port=4420 00:30:03.570 qpair failed and we were unable to recover it. 00:30:03.570 [2024-07-14 04:02:22.441423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.570 [2024-07-14 04:02:22.441703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.570 [2024-07-14 04:02:22.441756] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.570 qpair failed and we were unable to recover it. 00:30:03.570 [2024-07-14 04:02:22.441997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.570 [2024-07-14 04:02:22.442199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.570 [2024-07-14 04:02:22.442224] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.570 qpair failed and we were unable to recover it. 
00:30:03.570 [2024-07-14 04:02:22.442403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.570 [2024-07-14 04:02:22.442636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.570 [2024-07-14 04:02:22.442685] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.570 qpair failed and we were unable to recover it. 00:30:03.570 [2024-07-14 04:02:22.442891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.570 [2024-07-14 04:02:22.443096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.570 [2024-07-14 04:02:22.443124] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.570 qpair failed and we were unable to recover it. 00:30:03.570 [2024-07-14 04:02:22.443493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.570 [2024-07-14 04:02:22.443854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.570 [2024-07-14 04:02:22.443911] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.570 qpair failed and we were unable to recover it. 00:30:03.570 [2024-07-14 04:02:22.444122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.570 [2024-07-14 04:02:22.444432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.570 [2024-07-14 04:02:22.444488] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.570 qpair failed and we were unable to recover it. 00:30:03.570 [2024-07-14 04:02:22.444718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.570 [2024-07-14 04:02:22.444896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.570 [2024-07-14 04:02:22.444921] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.571 qpair failed and we were unable to recover it. 00:30:03.571 [2024-07-14 04:02:22.445093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.571 [2024-07-14 04:02:22.445370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.571 [2024-07-14 04:02:22.445435] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.571 qpair failed and we were unable to recover it. 00:30:03.571 [2024-07-14 04:02:22.445780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.571 [2024-07-14 04:02:22.446019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.571 [2024-07-14 04:02:22.446045] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.571 qpair failed and we were unable to recover it. 
00:30:03.571 [2024-07-14 04:02:22.446205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.571 [2024-07-14 04:02:22.446387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.571 [2024-07-14 04:02:22.446412] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.571 qpair failed and we were unable to recover it. 00:30:03.571 [2024-07-14 04:02:22.446599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.571 [2024-07-14 04:02:22.446817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.571 [2024-07-14 04:02:22.446845] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.571 qpair failed and we were unable to recover it. 00:30:03.571 [2024-07-14 04:02:22.447084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.571 [2024-07-14 04:02:22.447391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.571 [2024-07-14 04:02:22.447448] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.571 qpair failed and we were unable to recover it. 00:30:03.571 [2024-07-14 04:02:22.447774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.571 [2024-07-14 04:02:22.448032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.571 [2024-07-14 04:02:22.448057] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.571 qpair failed and we were unable to recover it. 00:30:03.571 [2024-07-14 04:02:22.448261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.571 [2024-07-14 04:02:22.448444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.571 [2024-07-14 04:02:22.448471] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.571 qpair failed and we were unable to recover it. 00:30:03.571 [2024-07-14 04:02:22.448704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.571 [2024-07-14 04:02:22.448930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.571 [2024-07-14 04:02:22.448958] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.571 qpair failed and we were unable to recover it. 00:30:03.571 [2024-07-14 04:02:22.449138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.571 [2024-07-14 04:02:22.449280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.571 [2024-07-14 04:02:22.449305] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.571 qpair failed and we were unable to recover it. 
00:30:03.571 [2024-07-14 04:02:22.449493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.571 [2024-07-14 04:02:22.449802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.571 [2024-07-14 04:02:22.449859] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.571 qpair failed and we were unable to recover it. 00:30:03.571 [2024-07-14 04:02:22.450088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.571 [2024-07-14 04:02:22.450457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.571 [2024-07-14 04:02:22.450507] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.571 qpair failed and we were unable to recover it. 00:30:03.571 [2024-07-14 04:02:22.450705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.571 [2024-07-14 04:02:22.450878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.571 [2024-07-14 04:02:22.450906] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.571 qpair failed and we were unable to recover it. 00:30:03.571 [2024-07-14 04:02:22.451114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.571 [2024-07-14 04:02:22.451394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.571 [2024-07-14 04:02:22.451421] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.571 qpair failed and we were unable to recover it. 00:30:03.571 [2024-07-14 04:02:22.451760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.571 [2024-07-14 04:02:22.452006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.571 [2024-07-14 04:02:22.452034] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.571 qpair failed and we were unable to recover it. 00:30:03.571 [2024-07-14 04:02:22.452280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.571 [2024-07-14 04:02:22.452559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.571 [2024-07-14 04:02:22.452586] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.571 qpair failed and we were unable to recover it. 00:30:03.571 [2024-07-14 04:02:22.452787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.571 [2024-07-14 04:02:22.452995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.571 [2024-07-14 04:02:22.453020] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.571 qpair failed and we were unable to recover it. 
00:30:03.571 [2024-07-14 04:02:22.453178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.571 [2024-07-14 04:02:22.453403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.571 [2024-07-14 04:02:22.453430] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.571 qpair failed and we were unable to recover it. 00:30:03.571 [2024-07-14 04:02:22.453834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.571 [2024-07-14 04:02:22.454064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.571 [2024-07-14 04:02:22.454090] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.571 qpair failed and we were unable to recover it. 00:30:03.571 [2024-07-14 04:02:22.454334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.571 [2024-07-14 04:02:22.454745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.571 [2024-07-14 04:02:22.454803] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.571 qpair failed and we were unable to recover it. 00:30:03.571 [2024-07-14 04:02:22.455013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.571 [2024-07-14 04:02:22.455213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.571 [2024-07-14 04:02:22.455241] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.571 qpair failed and we were unable to recover it. 00:30:03.571 [2024-07-14 04:02:22.455437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.571 [2024-07-14 04:02:22.455629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.571 [2024-07-14 04:02:22.455658] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.571 qpair failed and we were unable to recover it. 00:30:03.571 [2024-07-14 04:02:22.455852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.571 [2024-07-14 04:02:22.456037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.571 [2024-07-14 04:02:22.456065] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.571 qpair failed and we were unable to recover it. 00:30:03.571 [2024-07-14 04:02:22.456386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.571 [2024-07-14 04:02:22.456701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.571 [2024-07-14 04:02:22.456729] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.571 qpair failed and we were unable to recover it. 
00:30:03.571 [2024-07-14 04:02:22.456939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.571 [2024-07-14 04:02:22.457139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.571 [2024-07-14 04:02:22.457168] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.571 qpair failed and we were unable to recover it. 00:30:03.571 [2024-07-14 04:02:22.457366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.571 [2024-07-14 04:02:22.457638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.571 [2024-07-14 04:02:22.457684] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.571 qpair failed and we were unable to recover it. 00:30:03.571 [2024-07-14 04:02:22.457912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.571 [2024-07-14 04:02:22.458086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.571 [2024-07-14 04:02:22.458113] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.571 qpair failed and we were unable to recover it. 00:30:03.572 [2024-07-14 04:02:22.458477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.572 [2024-07-14 04:02:22.458856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.572 [2024-07-14 04:02:22.458953] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.572 qpair failed and we were unable to recover it. 00:30:03.572 [2024-07-14 04:02:22.459164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.572 [2024-07-14 04:02:22.459557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.572 [2024-07-14 04:02:22.459612] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.572 qpair failed and we were unable to recover it. 00:30:03.572 [2024-07-14 04:02:22.459818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.572 [2024-07-14 04:02:22.460024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.572 [2024-07-14 04:02:22.460052] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.572 qpair failed and we were unable to recover it. 00:30:03.572 [2024-07-14 04:02:22.460248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.572 [2024-07-14 04:02:22.460450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.572 [2024-07-14 04:02:22.460474] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.572 qpair failed and we were unable to recover it. 
00:30:03.572 [2024-07-14 04:02:22.460632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.572 [2024-07-14 04:02:22.460805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.572 [2024-07-14 04:02:22.460830] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.572 qpair failed and we were unable to recover it. 00:30:03.572 [2024-07-14 04:02:22.460992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.572 [2024-07-14 04:02:22.461193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.572 [2024-07-14 04:02:22.461220] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.572 qpair failed and we were unable to recover it. 00:30:03.572 [2024-07-14 04:02:22.461423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.572 [2024-07-14 04:02:22.461623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.572 [2024-07-14 04:02:22.461651] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.572 qpair failed and we were unable to recover it. 00:30:03.572 [2024-07-14 04:02:22.461848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.572 [2024-07-14 04:02:22.462031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.572 [2024-07-14 04:02:22.462059] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.572 qpair failed and we were unable to recover it. 00:30:03.572 [2024-07-14 04:02:22.462332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.572 [2024-07-14 04:02:22.462651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.572 [2024-07-14 04:02:22.462676] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.572 qpair failed and we were unable to recover it. 00:30:03.572 [2024-07-14 04:02:22.462829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.572 [2024-07-14 04:02:22.463005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.572 [2024-07-14 04:02:22.463032] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.572 qpair failed and we were unable to recover it. 00:30:03.572 [2024-07-14 04:02:22.463207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.572 [2024-07-14 04:02:22.463384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.572 [2024-07-14 04:02:22.463408] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.572 qpair failed and we were unable to recover it. 
00:30:03.572 [2024-07-14 04:02:22.463609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.572 [2024-07-14 04:02:22.463805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.572 [2024-07-14 04:02:22.463833] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.572 qpair failed and we were unable to recover it. 00:30:03.572 [2024-07-14 04:02:22.464016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.572 [2024-07-14 04:02:22.464198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.572 [2024-07-14 04:02:22.464223] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.572 qpair failed and we were unable to recover it. 00:30:03.572 [2024-07-14 04:02:22.464425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.572 [2024-07-14 04:02:22.464636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.572 [2024-07-14 04:02:22.464662] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.572 qpair failed and we were unable to recover it. 00:30:03.572 [2024-07-14 04:02:22.464841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.572 [2024-07-14 04:02:22.465044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.572 [2024-07-14 04:02:22.465072] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.572 qpair failed and we were unable to recover it. 00:30:03.572 [2024-07-14 04:02:22.465271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.572 [2024-07-14 04:02:22.465549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.572 [2024-07-14 04:02:22.465599] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.572 qpair failed and we were unable to recover it. 00:30:03.572 [2024-07-14 04:02:22.465802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.572 [2024-07-14 04:02:22.466002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.572 [2024-07-14 04:02:22.466028] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.572 qpair failed and we were unable to recover it. 00:30:03.572 [2024-07-14 04:02:22.466258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.572 [2024-07-14 04:02:22.466653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.572 [2024-07-14 04:02:22.466724] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.572 qpair failed and we were unable to recover it. 
00:30:03.572 [2024-07-14 04:02:22.466943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.572 [2024-07-14 04:02:22.467163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.572 [2024-07-14 04:02:22.467191] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.572 qpair failed and we were unable to recover it. 00:30:03.572 [2024-07-14 04:02:22.467413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.572 [2024-07-14 04:02:22.467685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.572 [2024-07-14 04:02:22.467711] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.572 qpair failed and we were unable to recover it. 00:30:03.572 [2024-07-14 04:02:22.467896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.572 [2024-07-14 04:02:22.468095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.572 [2024-07-14 04:02:22.468122] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.572 qpair failed and we were unable to recover it. 00:30:03.572 [2024-07-14 04:02:22.468321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.572 [2024-07-14 04:02:22.468508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.572 [2024-07-14 04:02:22.468535] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.572 qpair failed and we were unable to recover it. 00:30:03.572 [2024-07-14 04:02:22.468734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.572 [2024-07-14 04:02:22.468913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.572 [2024-07-14 04:02:22.468940] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.572 qpair failed and we were unable to recover it. 00:30:03.572 [2024-07-14 04:02:22.469161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.572 [2024-07-14 04:02:22.469387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.572 [2024-07-14 04:02:22.469414] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.572 qpair failed and we were unable to recover it. 00:30:03.572 [2024-07-14 04:02:22.469659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.572 [2024-07-14 04:02:22.469842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.572 [2024-07-14 04:02:22.469878] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.572 qpair failed and we were unable to recover it. 
00:30:03.572 [2024-07-14 04:02:22.470071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.572 [2024-07-14 04:02:22.470294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.572 [2024-07-14 04:02:22.470322] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.572 qpair failed and we were unable to recover it. 00:30:03.572 [2024-07-14 04:02:22.470493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.572 [2024-07-14 04:02:22.470646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.572 [2024-07-14 04:02:22.470686] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.572 qpair failed and we were unable to recover it. 00:30:03.572 [2024-07-14 04:02:22.470850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.572 [2024-07-14 04:02:22.471071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.572 [2024-07-14 04:02:22.471100] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.572 qpair failed and we were unable to recover it. 00:30:03.572 [2024-07-14 04:02:22.471287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.572 [2024-07-14 04:02:22.471588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.572 [2024-07-14 04:02:22.471646] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.572 qpair failed and we were unable to recover it. 00:30:03.572 [2024-07-14 04:02:22.471845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.572 [2024-07-14 04:02:22.472023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.572 [2024-07-14 04:02:22.472051] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.572 qpair failed and we were unable to recover it. 00:30:03.572 [2024-07-14 04:02:22.472258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.573 [2024-07-14 04:02:22.472433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.573 [2024-07-14 04:02:22.472458] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.573 qpair failed and we were unable to recover it. 00:30:03.573 [2024-07-14 04:02:22.472636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.573 [2024-07-14 04:02:22.472852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.573 [2024-07-14 04:02:22.472895] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.573 qpair failed and we were unable to recover it. 
00:30:03.573 [2024-07-14 04:02:22.473121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.573 [2024-07-14 04:02:22.473514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.573 [2024-07-14 04:02:22.473578] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.573 qpair failed and we were unable to recover it. 00:30:03.573 [2024-07-14 04:02:22.473805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.573 [2024-07-14 04:02:22.474007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.573 [2024-07-14 04:02:22.474047] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.573 qpair failed and we were unable to recover it. 00:30:03.573 [2024-07-14 04:02:22.474273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.573 [2024-07-14 04:02:22.474461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.573 [2024-07-14 04:02:22.474487] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.573 qpair failed and we were unable to recover it. 00:30:03.573 [2024-07-14 04:02:22.474668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.573 [2024-07-14 04:02:22.474890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.573 [2024-07-14 04:02:22.474920] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.573 qpair failed and we were unable to recover it. 00:30:03.573 [2024-07-14 04:02:22.475097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.573 [2024-07-14 04:02:22.475412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.573 [2024-07-14 04:02:22.475466] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.573 qpair failed and we were unable to recover it. 00:30:03.573 [2024-07-14 04:02:22.475697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.573 [2024-07-14 04:02:22.475879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.573 [2024-07-14 04:02:22.475909] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.573 qpair failed and we were unable to recover it. 00:30:03.573 [2024-07-14 04:02:22.476093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.573 [2024-07-14 04:02:22.476256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.573 [2024-07-14 04:02:22.476284] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.573 qpair failed and we were unable to recover it. 
00:30:03.846 [2024-07-14 04:02:22.476441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.846 [2024-07-14 04:02:22.476680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.846 [2024-07-14 04:02:22.476732] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.846 qpair failed and we were unable to recover it. 00:30:03.846 [2024-07-14 04:02:22.476939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.846 [2024-07-14 04:02:22.477145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.847 [2024-07-14 04:02:22.477171] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.847 qpair failed and we were unable to recover it. 00:30:03.847 [2024-07-14 04:02:22.477370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.847 [2024-07-14 04:02:22.477567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.847 [2024-07-14 04:02:22.477596] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.847 qpair failed and we were unable to recover it. 00:30:03.847 [2024-07-14 04:02:22.477772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.847 [2024-07-14 04:02:22.477960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.847 [2024-07-14 04:02:22.478003] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.847 qpair failed and we were unable to recover it. 00:30:03.847 [2024-07-14 04:02:22.478231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.847 [2024-07-14 04:02:22.478446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.847 [2024-07-14 04:02:22.478473] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.847 qpair failed and we were unable to recover it. 00:30:03.847 [2024-07-14 04:02:22.478764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.847 [2024-07-14 04:02:22.478966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.847 [2024-07-14 04:02:22.478991] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.847 qpair failed and we were unable to recover it. 00:30:03.847 [2024-07-14 04:02:22.479221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.847 [2024-07-14 04:02:22.479415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.847 [2024-07-14 04:02:22.479443] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.847 qpair failed and we were unable to recover it. 
00:30:03.847 [2024-07-14 04:02:22.479627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.847 [2024-07-14 04:02:22.479801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.847 [2024-07-14 04:02:22.479825] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.847 qpair failed and we were unable to recover it. 00:30:03.847 [2024-07-14 04:02:22.480011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.847 [2024-07-14 04:02:22.480189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.847 [2024-07-14 04:02:22.480224] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.847 qpair failed and we were unable to recover it. 00:30:03.847 [2024-07-14 04:02:22.480419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.847 [2024-07-14 04:02:22.480618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.847 [2024-07-14 04:02:22.480646] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.847 qpair failed and we were unable to recover it. 00:30:03.847 [2024-07-14 04:02:22.480811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.847 [2024-07-14 04:02:22.480991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.847 [2024-07-14 04:02:22.481019] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.847 qpair failed and we were unable to recover it. 00:30:03.847 [2024-07-14 04:02:22.481248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.847 [2024-07-14 04:02:22.481446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.847 [2024-07-14 04:02:22.481474] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.847 qpair failed and we were unable to recover it. 00:30:03.847 [2024-07-14 04:02:22.481696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.847 [2024-07-14 04:02:22.481870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.847 [2024-07-14 04:02:22.481895] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.847 qpair failed and we were unable to recover it. 00:30:03.847 [2024-07-14 04:02:22.482076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.847 [2024-07-14 04:02:22.482307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.847 [2024-07-14 04:02:22.482352] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.847 qpair failed and we were unable to recover it. 
00:30:03.847 [2024-07-14 04:02:22.482588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.847 [2024-07-14 04:02:22.482768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.847 [2024-07-14 04:02:22.482795] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.847 qpair failed and we were unable to recover it. 00:30:03.847 [2024-07-14 04:02:22.482976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.847 [2024-07-14 04:02:22.483180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.847 [2024-07-14 04:02:22.483204] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.847 qpair failed and we were unable to recover it. 00:30:03.847 [2024-07-14 04:02:22.483365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.847 [2024-07-14 04:02:22.483563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.847 [2024-07-14 04:02:22.483590] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.847 qpair failed and we were unable to recover it. 00:30:03.847 [2024-07-14 04:02:22.483762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.847 [2024-07-14 04:02:22.483934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.847 [2024-07-14 04:02:22.483962] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.847 qpair failed and we were unable to recover it. 00:30:03.847 [2024-07-14 04:02:22.484155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.847 [2024-07-14 04:02:22.484376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.847 [2024-07-14 04:02:22.484403] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.847 qpair failed and we were unable to recover it. 00:30:03.847 [2024-07-14 04:02:22.484604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.847 [2024-07-14 04:02:22.484755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.847 [2024-07-14 04:02:22.484780] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.847 qpair failed and we were unable to recover it. 00:30:03.847 [2024-07-14 04:02:22.484985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.847 [2024-07-14 04:02:22.485179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.847 [2024-07-14 04:02:22.485203] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.847 qpair failed and we were unable to recover it. 
00:30:03.847 [2024-07-14 04:02:22.485429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.847 [2024-07-14 04:02:22.485635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.847 [2024-07-14 04:02:22.485659] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.847 qpair failed and we were unable to recover it. 00:30:03.847 [2024-07-14 04:02:22.485840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.847 [2024-07-14 04:02:22.485996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.847 [2024-07-14 04:02:22.486021] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.847 qpair failed and we were unable to recover it. 00:30:03.847 [2024-07-14 04:02:22.486230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.847 [2024-07-14 04:02:22.486431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.847 [2024-07-14 04:02:22.486459] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.847 qpair failed and we were unable to recover it. 00:30:03.847 [2024-07-14 04:02:22.486675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.847 [2024-07-14 04:02:22.486849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.847 [2024-07-14 04:02:22.486880] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.847 qpair failed and we were unable to recover it. 00:30:03.847 [2024-07-14 04:02:22.487029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.847 [2024-07-14 04:02:22.487175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.847 [2024-07-14 04:02:22.487199] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.847 qpair failed and we were unable to recover it. 00:30:03.847 [2024-07-14 04:02:22.487402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.847 [2024-07-14 04:02:22.487621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.847 [2024-07-14 04:02:22.487646] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.847 qpair failed and we were unable to recover it. 00:30:03.847 [2024-07-14 04:02:22.487826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.847 [2024-07-14 04:02:22.488013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.847 [2024-07-14 04:02:22.488039] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.847 qpair failed and we were unable to recover it. 
00:30:03.847 [2024-07-14 04:02:22.488238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.847 [2024-07-14 04:02:22.488455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.847 [2024-07-14 04:02:22.488482] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.847 qpair failed and we were unable to recover it. 00:30:03.847 [2024-07-14 04:02:22.488695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.847 [2024-07-14 04:02:22.488853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.847 [2024-07-14 04:02:22.488884] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.847 qpair failed and we were unable to recover it. 00:30:03.847 [2024-07-14 04:02:22.489037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.847 [2024-07-14 04:02:22.489230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.847 [2024-07-14 04:02:22.489258] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.847 qpair failed and we were unable to recover it. 00:30:03.847 [2024-07-14 04:02:22.489482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.847 [2024-07-14 04:02:22.489680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.848 [2024-07-14 04:02:22.489708] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.848 qpair failed and we were unable to recover it. 00:30:03.848 [2024-07-14 04:02:22.489901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.848 [2024-07-14 04:02:22.490049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.848 [2024-07-14 04:02:22.490075] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.848 qpair failed and we were unable to recover it. 00:30:03.848 [2024-07-14 04:02:22.490272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.848 [2024-07-14 04:02:22.490475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.848 [2024-07-14 04:02:22.490503] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.848 qpair failed and we were unable to recover it. 00:30:03.848 [2024-07-14 04:02:22.490696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.848 [2024-07-14 04:02:22.490871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.848 [2024-07-14 04:02:22.490896] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.848 qpair failed and we were unable to recover it. 
00:30:03.848 [2024-07-14 04:02:22.491047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.848 [2024-07-14 04:02:22.491192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.848 [2024-07-14 04:02:22.491233] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.848 qpair failed and we were unable to recover it. 00:30:03.848 [2024-07-14 04:02:22.491452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.848 [2024-07-14 04:02:22.491774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.848 [2024-07-14 04:02:22.491830] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.848 qpair failed and we were unable to recover it. 00:30:03.848 [2024-07-14 04:02:22.492042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.848 [2024-07-14 04:02:22.492260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.848 [2024-07-14 04:02:22.492306] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.848 qpair failed and we were unable to recover it. 00:30:03.848 [2024-07-14 04:02:22.492578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.848 [2024-07-14 04:02:22.492751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.848 [2024-07-14 04:02:22.492778] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.848 qpair failed and we were unable to recover it. 00:30:03.848 [2024-07-14 04:02:22.492960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.848 [2024-07-14 04:02:22.493108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.848 [2024-07-14 04:02:22.493133] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.848 qpair failed and we were unable to recover it. 00:30:03.848 [2024-07-14 04:02:22.493361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.848 [2024-07-14 04:02:22.493670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.848 [2024-07-14 04:02:22.493732] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.848 qpair failed and we were unable to recover it. 00:30:03.848 [2024-07-14 04:02:22.493929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.848 [2024-07-14 04:02:22.494133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.848 [2024-07-14 04:02:22.494160] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.848 qpair failed and we were unable to recover it. 
00:30:03.848 [2024-07-14 04:02:22.494378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.848 [2024-07-14 04:02:22.494623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.848 [2024-07-14 04:02:22.494711] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.848 qpair failed and we were unable to recover it. 00:30:03.848 [2024-07-14 04:02:22.494938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.848 [2024-07-14 04:02:22.495117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.848 [2024-07-14 04:02:22.495145] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.848 qpair failed and we were unable to recover it. 00:30:03.848 [2024-07-14 04:02:22.495374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.848 [2024-07-14 04:02:22.495559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.848 [2024-07-14 04:02:22.495587] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.848 qpair failed and we were unable to recover it. 00:30:03.848 [2024-07-14 04:02:22.495810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.848 [2024-07-14 04:02:22.495998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.848 [2024-07-14 04:02:22.496023] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.848 qpair failed and we were unable to recover it. 00:30:03.848 [2024-07-14 04:02:22.496253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.848 [2024-07-14 04:02:22.496470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.848 [2024-07-14 04:02:22.496498] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.848 qpair failed and we were unable to recover it. 00:30:03.848 [2024-07-14 04:02:22.496717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.848 [2024-07-14 04:02:22.496872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.848 [2024-07-14 04:02:22.496897] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.848 qpair failed and we were unable to recover it. 00:30:03.848 [2024-07-14 04:02:22.497045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.848 [2024-07-14 04:02:22.497339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.848 [2024-07-14 04:02:22.497391] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.848 qpair failed and we were unable to recover it. 
00:30:03.848 [2024-07-14 04:02:22.497723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.848 [2024-07-14 04:02:22.497948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.848 [2024-07-14 04:02:22.497973] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.848 qpair failed and we were unable to recover it. 00:30:03.848 [2024-07-14 04:02:22.498201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.848 [2024-07-14 04:02:22.498485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.848 [2024-07-14 04:02:22.498512] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.848 qpair failed and we were unable to recover it. 00:30:03.848 [2024-07-14 04:02:22.498738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.848 [2024-07-14 04:02:22.498947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.848 [2024-07-14 04:02:22.498972] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.848 qpair failed and we were unable to recover it. 00:30:03.848 [2024-07-14 04:02:22.499170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.848 [2024-07-14 04:02:22.499363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.848 [2024-07-14 04:02:22.499387] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.848 qpair failed and we were unable to recover it. 00:30:03.848 [2024-07-14 04:02:22.499612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.848 [2024-07-14 04:02:22.499804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.848 [2024-07-14 04:02:22.499829] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.848 qpair failed and we were unable to recover it. 00:30:03.848 [2024-07-14 04:02:22.499993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.848 [2024-07-14 04:02:22.500140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.848 [2024-07-14 04:02:22.500182] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.848 qpair failed and we were unable to recover it. 00:30:03.848 [2024-07-14 04:02:22.500352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.848 [2024-07-14 04:02:22.500608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.848 [2024-07-14 04:02:22.500660] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.848 qpair failed and we were unable to recover it. 
00:30:03.848 [2024-07-14 04:02:22.500882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.848 [2024-07-14 04:02:22.501049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.848 [2024-07-14 04:02:22.501076] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.848 qpair failed and we were unable to recover it. 00:30:03.848 [2024-07-14 04:02:22.501332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.848 [2024-07-14 04:02:22.501478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.848 [2024-07-14 04:02:22.501504] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.848 qpair failed and we were unable to recover it. 00:30:03.848 [2024-07-14 04:02:22.501668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.848 [2024-07-14 04:02:22.501888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.848 [2024-07-14 04:02:22.501913] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.848 qpair failed and we were unable to recover it. 00:30:03.848 [2024-07-14 04:02:22.502093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.848 [2024-07-14 04:02:22.502282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.848 [2024-07-14 04:02:22.502310] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.848 qpair failed and we were unable to recover it. 00:30:03.848 [2024-07-14 04:02:22.502497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.848 [2024-07-14 04:02:22.502715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.848 [2024-07-14 04:02:22.502743] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.848 qpair failed and we were unable to recover it. 00:30:03.848 [2024-07-14 04:02:22.502938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.848 [2024-07-14 04:02:22.503136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.848 [2024-07-14 04:02:22.503165] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.849 qpair failed and we were unable to recover it. 00:30:03.849 [2024-07-14 04:02:22.503409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.849 [2024-07-14 04:02:22.503608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.849 [2024-07-14 04:02:22.503636] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.849 qpair failed and we were unable to recover it. 
00:30:03.849 [2024-07-14 04:02:22.503823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.849 [2024-07-14 04:02:22.504013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.849 [2024-07-14 04:02:22.504040] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.849 qpair failed and we were unable to recover it. 00:30:03.849 [2024-07-14 04:02:22.504222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.849 [2024-07-14 04:02:22.504489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.849 [2024-07-14 04:02:22.504539] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.849 qpair failed and we were unable to recover it. 00:30:03.849 [2024-07-14 04:02:22.504735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.849 [2024-07-14 04:02:22.504888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.849 [2024-07-14 04:02:22.504914] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.849 qpair failed and we were unable to recover it. 00:30:03.849 [2024-07-14 04:02:22.505101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.849 [2024-07-14 04:02:22.505275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.849 [2024-07-14 04:02:22.505302] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.849 qpair failed and we were unable to recover it. 00:30:03.849 [2024-07-14 04:02:22.505522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.849 [2024-07-14 04:02:22.505701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.849 [2024-07-14 04:02:22.505726] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.849 qpair failed and we were unable to recover it. 00:30:03.849 [2024-07-14 04:02:22.505905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.849 [2024-07-14 04:02:22.506052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.849 [2024-07-14 04:02:22.506077] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.849 qpair failed and we were unable to recover it. 00:30:03.849 [2024-07-14 04:02:22.506286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.849 [2024-07-14 04:02:22.506530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.849 [2024-07-14 04:02:22.506576] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.849 qpair failed and we were unable to recover it. 
00:30:03.849 [2024-07-14 04:02:22.506756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.849 [2024-07-14 04:02:22.506959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.849 [2024-07-14 04:02:22.506987] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.849 qpair failed and we were unable to recover it. 00:30:03.849 [2024-07-14 04:02:22.507160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.849 [2024-07-14 04:02:22.507338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.849 [2024-07-14 04:02:22.507363] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.849 qpair failed and we were unable to recover it. 00:30:03.849 [2024-07-14 04:02:22.507566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.849 [2024-07-14 04:02:22.507740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.849 [2024-07-14 04:02:22.507767] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.849 qpair failed and we were unable to recover it. 00:30:03.849 [2024-07-14 04:02:22.507976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.849 [2024-07-14 04:02:22.508144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.849 [2024-07-14 04:02:22.508170] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.849 qpair failed and we were unable to recover it. 00:30:03.849 [2024-07-14 04:02:22.508401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.849 [2024-07-14 04:02:22.508619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.849 [2024-07-14 04:02:22.508644] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.849 qpair failed and we were unable to recover it. 00:30:03.849 [2024-07-14 04:02:22.508827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.849 [2024-07-14 04:02:22.509009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.849 [2024-07-14 04:02:22.509037] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.849 qpair failed and we were unable to recover it. 00:30:03.849 [2024-07-14 04:02:22.509260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.849 [2024-07-14 04:02:22.509410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.849 [2024-07-14 04:02:22.509434] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.849 qpair failed and we were unable to recover it. 
00:30:03.849 [2024-07-14 04:02:22.509637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.849 [2024-07-14 04:02:22.509845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.849 [2024-07-14 04:02:22.509888] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.849 qpair failed and we were unable to recover it. 00:30:03.849 [2024-07-14 04:02:22.510048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.849 [2024-07-14 04:02:22.510278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.849 [2024-07-14 04:02:22.510323] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.849 qpair failed and we were unable to recover it. 00:30:03.849 [2024-07-14 04:02:22.510538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.849 [2024-07-14 04:02:22.510688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.849 [2024-07-14 04:02:22.510713] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.849 qpair failed and we were unable to recover it. 00:30:03.849 [2024-07-14 04:02:22.510893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.849 [2024-07-14 04:02:22.511101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.849 [2024-07-14 04:02:22.511126] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.849 qpair failed and we were unable to recover it. 00:30:03.849 [2024-07-14 04:02:22.511276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.849 [2024-07-14 04:02:22.511472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.849 [2024-07-14 04:02:22.511537] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.849 qpair failed and we were unable to recover it. 00:30:03.849 [2024-07-14 04:02:22.511708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.849 [2024-07-14 04:02:22.511902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.849 [2024-07-14 04:02:22.511944] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.849 qpair failed and we were unable to recover it. 00:30:03.849 [2024-07-14 04:02:22.512170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.849 [2024-07-14 04:02:22.512436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.849 [2024-07-14 04:02:22.512486] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.849 qpair failed and we were unable to recover it. 
00:30:03.849 [2024-07-14 04:02:22.512686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.849 [2024-07-14 04:02:22.512860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.849 [2024-07-14 04:02:22.512892] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.849 qpair failed and we were unable to recover it. 00:30:03.849 [2024-07-14 04:02:22.513061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.849 [2024-07-14 04:02:22.513325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.849 [2024-07-14 04:02:22.513375] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.849 qpair failed and we were unable to recover it. 00:30:03.849 [2024-07-14 04:02:22.513630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.849 [2024-07-14 04:02:22.513799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.849 [2024-07-14 04:02:22.513824] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.849 qpair failed and we were unable to recover it. 00:30:03.849 [2024-07-14 04:02:22.514008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.849 [2024-07-14 04:02:22.514182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.849 [2024-07-14 04:02:22.514211] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.849 qpair failed and we were unable to recover it. 00:30:03.849 [2024-07-14 04:02:22.514405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.849 [2024-07-14 04:02:22.514555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.849 [2024-07-14 04:02:22.514581] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.849 qpair failed and we were unable to recover it. 00:30:03.849 [2024-07-14 04:02:22.514768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.849 [2024-07-14 04:02:22.514979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.849 [2024-07-14 04:02:22.515004] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.849 qpair failed and we were unable to recover it. 00:30:03.849 [2024-07-14 04:02:22.515151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.849 [2024-07-14 04:02:22.515304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.849 [2024-07-14 04:02:22.515330] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.849 qpair failed and we were unable to recover it. 
00:30:03.849 [2024-07-14 04:02:22.515486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.849 [2024-07-14 04:02:22.515688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.849 [2024-07-14 04:02:22.515713] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.849 qpair failed and we were unable to recover it. 00:30:03.849 [2024-07-14 04:02:22.515911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.850 [2024-07-14 04:02:22.516109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.850 [2024-07-14 04:02:22.516137] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.850 qpair failed and we were unable to recover it. 00:30:03.850 [2024-07-14 04:02:22.516339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.850 [2024-07-14 04:02:22.516617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.850 [2024-07-14 04:02:22.516667] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.850 qpair failed and we were unable to recover it. 00:30:03.850 [2024-07-14 04:02:22.516862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.850 [2024-07-14 04:02:22.517054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.850 [2024-07-14 04:02:22.517080] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.850 qpair failed and we were unable to recover it. 00:30:03.850 [2024-07-14 04:02:22.517260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.850 [2024-07-14 04:02:22.517441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.850 [2024-07-14 04:02:22.517465] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.850 qpair failed and we were unable to recover it. 00:30:03.850 [2024-07-14 04:02:22.517670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.850 [2024-07-14 04:02:22.517876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.850 [2024-07-14 04:02:22.517921] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.850 qpair failed and we were unable to recover it. 00:30:03.850 [2024-07-14 04:02:22.518076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.850 [2024-07-14 04:02:22.518360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.850 [2024-07-14 04:02:22.518385] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.850 qpair failed and we were unable to recover it. 
00:30:03.850 [2024-07-14 04:02:22.518544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.850 [2024-07-14 04:02:22.518692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.850 [2024-07-14 04:02:22.518719] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.850 qpair failed and we were unable to recover it. 00:30:03.850 [2024-07-14 04:02:22.518900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.850 [2024-07-14 04:02:22.519062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.850 [2024-07-14 04:02:22.519087] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.850 qpair failed and we were unable to recover it. 00:30:03.850 [2024-07-14 04:02:22.519278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.850 [2024-07-14 04:02:22.519607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.850 [2024-07-14 04:02:22.519654] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.850 qpair failed and we were unable to recover it. 00:30:03.850 [2024-07-14 04:02:22.519822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.850 [2024-07-14 04:02:22.520017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.850 [2024-07-14 04:02:22.520045] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.850 qpair failed and we were unable to recover it. 00:30:03.850 [2024-07-14 04:02:22.520240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.850 [2024-07-14 04:02:22.520393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.850 [2024-07-14 04:02:22.520419] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.850 qpair failed and we were unable to recover it. 00:30:03.850 [2024-07-14 04:02:22.520629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.850 [2024-07-14 04:02:22.520806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.850 [2024-07-14 04:02:22.520831] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.850 qpair failed and we were unable to recover it. 00:30:03.850 [2024-07-14 04:02:22.520989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.850 [2024-07-14 04:02:22.521140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.850 [2024-07-14 04:02:22.521165] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.850 qpair failed and we were unable to recover it. 
00:30:03.850 [2024-07-14 04:02:22.521348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.850 [2024-07-14 04:02:22.521683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.850 [2024-07-14 04:02:22.521732] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.850 qpair failed and we were unable to recover it. 00:30:03.850 [2024-07-14 04:02:22.521957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.850 [2024-07-14 04:02:22.522151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.850 [2024-07-14 04:02:22.522178] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.850 qpair failed and we were unable to recover it. 00:30:03.850 [2024-07-14 04:02:22.522352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.850 [2024-07-14 04:02:22.522567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.850 [2024-07-14 04:02:22.522594] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.850 qpair failed and we were unable to recover it. 00:30:03.850 [2024-07-14 04:02:22.522813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.850 [2024-07-14 04:02:22.523018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.850 [2024-07-14 04:02:22.523043] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.850 qpair failed and we were unable to recover it. 00:30:03.850 [2024-07-14 04:02:22.523249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.850 [2024-07-14 04:02:22.523462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.850 [2024-07-14 04:02:22.523489] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.850 qpair failed and we were unable to recover it. 00:30:03.850 [2024-07-14 04:02:22.523655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.850 [2024-07-14 04:02:22.523849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.850 [2024-07-14 04:02:22.523886] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.850 qpair failed and we were unable to recover it. 00:30:03.850 [2024-07-14 04:02:22.524094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.850 [2024-07-14 04:02:22.524247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.850 [2024-07-14 04:02:22.524271] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.850 qpair failed and we were unable to recover it. 
00:30:03.850 [2024-07-14 04:02:22.524469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.850 [2024-07-14 04:02:22.524721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.850 [2024-07-14 04:02:22.524777] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.850 qpair failed and we were unable to recover it. 00:30:03.850 [2024-07-14 04:02:22.525002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.850 [2024-07-14 04:02:22.525238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.850 [2024-07-14 04:02:22.525297] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.850 qpair failed and we were unable to recover it. 00:30:03.850 [2024-07-14 04:02:22.525520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.850 [2024-07-14 04:02:22.525720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.850 [2024-07-14 04:02:22.525747] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.850 qpair failed and we were unable to recover it. 00:30:03.850 [2024-07-14 04:02:22.525928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.850 [2024-07-14 04:02:22.526080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.850 [2024-07-14 04:02:22.526122] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.850 qpair failed and we were unable to recover it. 00:30:03.850 [2024-07-14 04:02:22.526324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.850 [2024-07-14 04:02:22.526517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.850 [2024-07-14 04:02:22.526544] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.850 qpair failed and we were unable to recover it. 00:30:03.850 [2024-07-14 04:02:22.526768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.850 [2024-07-14 04:02:22.526962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.850 [2024-07-14 04:02:22.526991] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.850 qpair failed and we were unable to recover it. 00:30:03.850 [2024-07-14 04:02:22.527188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.850 [2024-07-14 04:02:22.527350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.850 [2024-07-14 04:02:22.527377] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.850 qpair failed and we were unable to recover it. 
00:30:03.850 [2024-07-14 04:02:22.527575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.850 [2024-07-14 04:02:22.527775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.850 [2024-07-14 04:02:22.527803] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.850 qpair failed and we were unable to recover it. 00:30:03.850 [2024-07-14 04:02:22.527997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.850 [2024-07-14 04:02:22.528154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.850 [2024-07-14 04:02:22.528182] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.850 qpair failed and we were unable to recover it. 00:30:03.850 [2024-07-14 04:02:22.528372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.850 [2024-07-14 04:02:22.528534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.850 [2024-07-14 04:02:22.528561] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.850 qpair failed and we were unable to recover it. 00:30:03.850 [2024-07-14 04:02:22.528783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.850 [2024-07-14 04:02:22.528953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.851 [2024-07-14 04:02:22.528981] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.851 qpair failed and we were unable to recover it. 00:30:03.851 [2024-07-14 04:02:22.529194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.851 [2024-07-14 04:02:22.529370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.851 [2024-07-14 04:02:22.529394] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.851 qpair failed and we were unable to recover it. 00:30:03.851 [2024-07-14 04:02:22.529573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.851 [2024-07-14 04:02:22.529792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.851 [2024-07-14 04:02:22.529819] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.851 qpair failed and we were unable to recover it. 00:30:03.851 [2024-07-14 04:02:22.530010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.851 [2024-07-14 04:02:22.530166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.851 [2024-07-14 04:02:22.530190] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.851 qpair failed and we were unable to recover it. 
00:30:03.851 [2024-07-14 04:02:22.530343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.851 [2024-07-14 04:02:22.530501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.851 [2024-07-14 04:02:22.530525] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.851 qpair failed and we were unable to recover it. 00:30:03.851 [2024-07-14 04:02:22.530680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.851 [2024-07-14 04:02:22.530896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.851 [2024-07-14 04:02:22.530938] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.851 qpair failed and we were unable to recover it. 00:30:03.851 [2024-07-14 04:02:22.531148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.851 [2024-07-14 04:02:22.531324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.851 [2024-07-14 04:02:22.531348] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.851 qpair failed and we were unable to recover it. 00:30:03.851 [2024-07-14 04:02:22.531527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.851 [2024-07-14 04:02:22.531706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.851 [2024-07-14 04:02:22.531731] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.851 qpair failed and we were unable to recover it. 00:30:03.851 [2024-07-14 04:02:22.531885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.851 [2024-07-14 04:02:22.532062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.851 [2024-07-14 04:02:22.532087] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.851 qpair failed and we were unable to recover it. 00:30:03.851 [2024-07-14 04:02:22.532233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.851 [2024-07-14 04:02:22.532424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.851 [2024-07-14 04:02:22.532449] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.851 qpair failed and we were unable to recover it. 00:30:03.851 [2024-07-14 04:02:22.532628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.851 [2024-07-14 04:02:22.532811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.851 [2024-07-14 04:02:22.532836] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.851 qpair failed and we were unable to recover it. 
00:30:03.851 [2024-07-14 04:02:22.533020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.851 [2024-07-14 04:02:22.533200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.851 [2024-07-14 04:02:22.533227] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.851 qpair failed and we were unable to recover it. 00:30:03.851 [2024-07-14 04:02:22.533438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.851 [2024-07-14 04:02:22.533589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.851 [2024-07-14 04:02:22.533614] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.851 qpair failed and we were unable to recover it. 00:30:03.851 [2024-07-14 04:02:22.533769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.851 [2024-07-14 04:02:22.533919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.851 [2024-07-14 04:02:22.533944] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.851 qpair failed and we were unable to recover it. 00:30:03.851 [2024-07-14 04:02:22.534096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.851 [2024-07-14 04:02:22.534246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.851 [2024-07-14 04:02:22.534271] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.851 qpair failed and we were unable to recover it. 00:30:03.851 [2024-07-14 04:02:22.534450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.851 [2024-07-14 04:02:22.534605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.851 [2024-07-14 04:02:22.534630] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.851 qpair failed and we were unable to recover it. 00:30:03.851 [2024-07-14 04:02:22.534813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.851 [2024-07-14 04:02:22.534997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.851 [2024-07-14 04:02:22.535023] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.851 qpair failed and we were unable to recover it. 00:30:03.851 [2024-07-14 04:02:22.535196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.851 [2024-07-14 04:02:22.535343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.851 [2024-07-14 04:02:22.535371] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.851 qpair failed and we were unable to recover it. 
00:30:03.851 [2024-07-14 04:02:22.535550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.851 [2024-07-14 04:02:22.535720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.851 [2024-07-14 04:02:22.535744] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.851 qpair failed and we were unable to recover it. 00:30:03.851 [2024-07-14 04:02:22.535925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.851 [2024-07-14 04:02:22.536084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.851 [2024-07-14 04:02:22.536109] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.851 qpair failed and we were unable to recover it. 00:30:03.851 [2024-07-14 04:02:22.536280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.851 [2024-07-14 04:02:22.536431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.851 [2024-07-14 04:02:22.536457] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.851 qpair failed and we were unable to recover it. 00:30:03.851 [2024-07-14 04:02:22.536610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.851 [2024-07-14 04:02:22.536815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.851 [2024-07-14 04:02:22.536840] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.851 qpair failed and we were unable to recover it. 00:30:03.851 [2024-07-14 04:02:22.536993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.851 [2024-07-14 04:02:22.537150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.851 [2024-07-14 04:02:22.537174] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.851 qpair failed and we were unable to recover it. 00:30:03.851 [2024-07-14 04:02:22.537357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.851 [2024-07-14 04:02:22.537529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.851 [2024-07-14 04:02:22.537553] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.851 qpair failed and we were unable to recover it. 00:30:03.851 [2024-07-14 04:02:22.537716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.851 [2024-07-14 04:02:22.537900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.851 [2024-07-14 04:02:22.537925] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.852 qpair failed and we were unable to recover it. 
00:30:03.852 [2024-07-14 04:02:22.538080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.852 [2024-07-14 04:02:22.538292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.852 [2024-07-14 04:02:22.538316] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.852 qpair failed and we were unable to recover it. 00:30:03.852 [2024-07-14 04:02:22.538490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.852 [2024-07-14 04:02:22.538806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.852 [2024-07-14 04:02:22.538858] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.852 qpair failed and we were unable to recover it. 00:30:03.852 [2024-07-14 04:02:22.539080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.852 [2024-07-14 04:02:22.539228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.852 [2024-07-14 04:02:22.539256] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.852 qpair failed and we were unable to recover it. 00:30:03.852 [2024-07-14 04:02:22.539435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.852 [2024-07-14 04:02:22.539606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.852 [2024-07-14 04:02:22.539633] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.852 qpair failed and we were unable to recover it. 00:30:03.852 [2024-07-14 04:02:22.539845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.852 [2024-07-14 04:02:22.540068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.852 [2024-07-14 04:02:22.540093] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.852 qpair failed and we were unable to recover it. 00:30:03.852 [2024-07-14 04:02:22.540284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.852 [2024-07-14 04:02:22.540492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.852 [2024-07-14 04:02:22.540517] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.852 qpair failed and we were unable to recover it. 00:30:03.852 [2024-07-14 04:02:22.540700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.852 [2024-07-14 04:02:22.540880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.852 [2024-07-14 04:02:22.540907] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.852 qpair failed and we were unable to recover it. 
00:30:03.852 [2024-07-14 04:02:22.541081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.852 [2024-07-14 04:02:22.541305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.852 [2024-07-14 04:02:22.541352] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.852 qpair failed and we were unable to recover it. 00:30:03.852 [2024-07-14 04:02:22.541550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.852 [2024-07-14 04:02:22.541751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.852 [2024-07-14 04:02:22.541776] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.852 qpair failed and we were unable to recover it. 00:30:03.852 [2024-07-14 04:02:22.541968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.852 [2024-07-14 04:02:22.542169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.852 [2024-07-14 04:02:22.542196] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.852 qpair failed and we were unable to recover it. 00:30:03.852 [2024-07-14 04:02:22.542470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.852 [2024-07-14 04:02:22.542654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.852 [2024-07-14 04:02:22.542678] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.852 qpair failed and we were unable to recover it. 00:30:03.852 [2024-07-14 04:02:22.542846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.852 [2024-07-14 04:02:22.543025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.852 [2024-07-14 04:02:22.543053] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.852 qpair failed and we were unable to recover it. 00:30:03.852 [2024-07-14 04:02:22.543225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.852 [2024-07-14 04:02:22.543445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.852 [2024-07-14 04:02:22.543477] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.852 qpair failed and we were unable to recover it. 00:30:03.852 [2024-07-14 04:02:22.543658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.852 [2024-07-14 04:02:22.543830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.852 [2024-07-14 04:02:22.543854] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.852 qpair failed and we were unable to recover it. 
00:30:03.852 [2024-07-14 04:02:22.544041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.852 [2024-07-14 04:02:22.544243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.852 [2024-07-14 04:02:22.544268] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.852 qpair failed and we were unable to recover it. 00:30:03.852 [2024-07-14 04:02:22.544419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.852 [2024-07-14 04:02:22.544567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.852 [2024-07-14 04:02:22.544593] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.852 qpair failed and we were unable to recover it. 00:30:03.852 [2024-07-14 04:02:22.544795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.852 [2024-07-14 04:02:22.544996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.852 [2024-07-14 04:02:22.545024] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.852 qpair failed and we were unable to recover it. 00:30:03.852 [2024-07-14 04:02:22.545197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.852 [2024-07-14 04:02:22.545366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.852 [2024-07-14 04:02:22.545395] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.852 qpair failed and we were unable to recover it. 00:30:03.852 [2024-07-14 04:02:22.545680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.852 [2024-07-14 04:02:22.545925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.852 [2024-07-14 04:02:22.545953] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.852 qpair failed and we were unable to recover it. 00:30:03.852 [2024-07-14 04:02:22.546150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.852 [2024-07-14 04:02:22.546339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.852 [2024-07-14 04:02:22.546366] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.852 qpair failed and we were unable to recover it. 00:30:03.852 [2024-07-14 04:02:22.546562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.852 [2024-07-14 04:02:22.546741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.852 [2024-07-14 04:02:22.546766] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.852 qpair failed and we were unable to recover it. 
00:30:03.852 [2024-07-14 04:02:22.546917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.852 [2024-07-14 04:02:22.547098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.852 [2024-07-14 04:02:22.547123] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.852 qpair failed and we were unable to recover it. 00:30:03.852 [2024-07-14 04:02:22.547272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.852 [2024-07-14 04:02:22.547449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.852 [2024-07-14 04:02:22.547478] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.852 qpair failed and we were unable to recover it. 00:30:03.852 [2024-07-14 04:02:22.547690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.852 [2024-07-14 04:02:22.547833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.852 [2024-07-14 04:02:22.547858] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.852 qpair failed and we were unable to recover it. 00:30:03.852 [2024-07-14 04:02:22.548024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.852 [2024-07-14 04:02:22.548202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.852 [2024-07-14 04:02:22.548249] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.852 qpair failed and we were unable to recover it. 00:30:03.852 [2024-07-14 04:02:22.548451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.852 [2024-07-14 04:02:22.548659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.852 [2024-07-14 04:02:22.548700] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.852 qpair failed and we were unable to recover it. 00:30:03.852 [2024-07-14 04:02:22.548896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.852 [2024-07-14 04:02:22.549071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.852 [2024-07-14 04:02:22.549099] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.852 qpair failed and we were unable to recover it. 00:30:03.852 [2024-07-14 04:02:22.549275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.852 [2024-07-14 04:02:22.549453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.852 [2024-07-14 04:02:22.549477] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.852 qpair failed and we were unable to recover it. 
00:30:03.852 [2024-07-14 04:02:22.549653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.852 [2024-07-14 04:02:22.549827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.852 [2024-07-14 04:02:22.549851] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.852 qpair failed and we were unable to recover it. 00:30:03.852 [2024-07-14 04:02:22.550102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.852 [2024-07-14 04:02:22.550259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.852 [2024-07-14 04:02:22.550283] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.852 qpair failed and we were unable to recover it. 00:30:03.852 [2024-07-14 04:02:22.550464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.853 [2024-07-14 04:02:22.550716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.853 [2024-07-14 04:02:22.550765] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.853 qpair failed and we were unable to recover it. 00:30:03.853 [2024-07-14 04:02:22.550969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.853 [2024-07-14 04:02:22.551144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.853 [2024-07-14 04:02:22.551170] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.853 qpair failed and we were unable to recover it. 00:30:03.853 [2024-07-14 04:02:22.551355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.853 [2024-07-14 04:02:22.551554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.853 [2024-07-14 04:02:22.551581] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.853 qpair failed and we were unable to recover it. 00:30:03.853 [2024-07-14 04:02:22.551786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.853 [2024-07-14 04:02:22.551939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.853 [2024-07-14 04:02:22.551965] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.853 qpair failed and we were unable to recover it. 00:30:03.853 [2024-07-14 04:02:22.552120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.853 [2024-07-14 04:02:22.552331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.853 [2024-07-14 04:02:22.552358] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.853 qpair failed and we were unable to recover it. 
00:30:03.853 [2024-07-14 04:02:22.552579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.853 [2024-07-14 04:02:22.552733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.853 [2024-07-14 04:02:22.552757] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.853 qpair failed and we were unable to recover it. 00:30:03.853 [2024-07-14 04:02:22.552941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.853 [2024-07-14 04:02:22.553117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.853 [2024-07-14 04:02:22.553142] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.853 qpair failed and we were unable to recover it. 00:30:03.853 [2024-07-14 04:02:22.553344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.853 [2024-07-14 04:02:22.553635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.853 [2024-07-14 04:02:22.553684] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.853 qpair failed and we were unable to recover it. 00:30:03.853 [2024-07-14 04:02:22.553881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.853 [2024-07-14 04:02:22.554052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.853 [2024-07-14 04:02:22.554080] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.853 qpair failed and we were unable to recover it. 00:30:03.853 [2024-07-14 04:02:22.554295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.853 [2024-07-14 04:02:22.554474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.853 [2024-07-14 04:02:22.554498] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.853 qpair failed and we were unable to recover it. 00:30:03.853 [2024-07-14 04:02:22.554681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.853 [2024-07-14 04:02:22.554874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.853 [2024-07-14 04:02:22.554899] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.853 qpair failed and we were unable to recover it. 00:30:03.853 [2024-07-14 04:02:22.555062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.853 [2024-07-14 04:02:22.555243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.853 [2024-07-14 04:02:22.555271] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.853 qpair failed and we were unable to recover it. 
00:30:03.853 [2024-07-14 04:02:22.555499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.853 [2024-07-14 04:02:22.555651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.853 [2024-07-14 04:02:22.555677] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.853 qpair failed and we were unable to recover it. 00:30:03.853 [2024-07-14 04:02:22.555891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.853 [2024-07-14 04:02:22.556071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.853 [2024-07-14 04:02:22.556099] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.853 qpair failed and we were unable to recover it. 00:30:03.853 [2024-07-14 04:02:22.556267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.853 [2024-07-14 04:02:22.556442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.853 [2024-07-14 04:02:22.556467] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.853 qpair failed and we were unable to recover it. 00:30:03.853 [2024-07-14 04:02:22.556641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.853 [2024-07-14 04:02:22.556814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.853 [2024-07-14 04:02:22.556839] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.853 qpair failed and we were unable to recover it. 00:30:03.853 [2024-07-14 04:02:22.557002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.853 [2024-07-14 04:02:22.557181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.853 [2024-07-14 04:02:22.557223] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.853 qpair failed and we were unable to recover it. 00:30:03.853 [2024-07-14 04:02:22.557427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.853 [2024-07-14 04:02:22.557650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.853 [2024-07-14 04:02:22.557715] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.853 qpair failed and we were unable to recover it. 00:30:03.853 [2024-07-14 04:02:22.557931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.853 [2024-07-14 04:02:22.558112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.853 [2024-07-14 04:02:22.558137] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.853 qpair failed and we were unable to recover it. 
00:30:03.853 [2024-07-14 04:02:22.558334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.853 [2024-07-14 04:02:22.558510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.853 [2024-07-14 04:02:22.558534] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.853 qpair failed and we were unable to recover it. 00:30:03.853 [2024-07-14 04:02:22.558718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.853 [2024-07-14 04:02:22.558913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.853 [2024-07-14 04:02:22.558941] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.853 qpair failed and we were unable to recover it. 00:30:03.853 [2024-07-14 04:02:22.559106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.853 [2024-07-14 04:02:22.559361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.853 [2024-07-14 04:02:22.559411] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.853 qpair failed and we were unable to recover it. 00:30:03.853 [2024-07-14 04:02:22.559607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.853 [2024-07-14 04:02:22.559805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.853 [2024-07-14 04:02:22.559832] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.853 qpair failed and we were unable to recover it. 00:30:03.853 [2024-07-14 04:02:22.560010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.853 [2024-07-14 04:02:22.560181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.853 [2024-07-14 04:02:22.560208] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.853 qpair failed and we were unable to recover it. 00:30:03.853 [2024-07-14 04:02:22.560446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.853 [2024-07-14 04:02:22.560719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.853 [2024-07-14 04:02:22.560766] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.853 qpair failed and we were unable to recover it. 00:30:03.853 [2024-07-14 04:02:22.560960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.853 [2024-07-14 04:02:22.561183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.853 [2024-07-14 04:02:22.561228] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.853 qpair failed and we were unable to recover it. 
00:30:03.853 [2024-07-14 04:02:22.561441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.853 [2024-07-14 04:02:22.561585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.853 [2024-07-14 04:02:22.561609] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.853 qpair failed and we were unable to recover it. 00:30:03.853 [2024-07-14 04:02:22.561768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.853 [2024-07-14 04:02:22.561984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.853 [2024-07-14 04:02:22.562009] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.853 qpair failed and we were unable to recover it. 00:30:03.853 [2024-07-14 04:02:22.562179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.853 [2024-07-14 04:02:22.562380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.853 [2024-07-14 04:02:22.562404] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.853 qpair failed and we were unable to recover it. 00:30:03.853 [2024-07-14 04:02:22.562610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.853 [2024-07-14 04:02:22.562804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.853 [2024-07-14 04:02:22.562831] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.853 qpair failed and we were unable to recover it. 00:30:03.853 [2024-07-14 04:02:22.563035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.853 [2024-07-14 04:02:22.563179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.854 [2024-07-14 04:02:22.563204] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.854 qpair failed and we were unable to recover it. 00:30:03.854 [2024-07-14 04:02:22.563376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.854 [2024-07-14 04:02:22.563527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.854 [2024-07-14 04:02:22.563552] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.854 qpair failed and we were unable to recover it. 00:30:03.854 [2024-07-14 04:02:22.563736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.854 [2024-07-14 04:02:22.563886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.854 [2024-07-14 04:02:22.563911] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.854 qpair failed and we were unable to recover it. 
00:30:03.854 [2024-07-14 04:02:22.564067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.854 [2024-07-14 04:02:22.564216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.854 [2024-07-14 04:02:22.564242] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.854 qpair failed and we were unable to recover it. 00:30:03.854 [2024-07-14 04:02:22.564416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.854 [2024-07-14 04:02:22.564590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.854 [2024-07-14 04:02:22.564615] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.854 qpair failed and we were unable to recover it. 00:30:03.854 [2024-07-14 04:02:22.564803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.854 [2024-07-14 04:02:22.565049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.854 [2024-07-14 04:02:22.565074] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.854 qpair failed and we were unable to recover it. 00:30:03.854 [2024-07-14 04:02:22.565274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.854 [2024-07-14 04:02:22.565553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.854 [2024-07-14 04:02:22.565598] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.854 qpair failed and we were unable to recover it. 00:30:03.854 [2024-07-14 04:02:22.565829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.854 [2024-07-14 04:02:22.565996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.854 [2024-07-14 04:02:22.566022] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.854 qpair failed and we were unable to recover it. 00:30:03.854 [2024-07-14 04:02:22.566173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.854 [2024-07-14 04:02:22.566378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.854 [2024-07-14 04:02:22.566423] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.854 qpair failed and we were unable to recover it. 00:30:03.854 [2024-07-14 04:02:22.566625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.854 [2024-07-14 04:02:22.566839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.854 [2024-07-14 04:02:22.566876] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.854 qpair failed and we were unable to recover it. 
00:30:03.854 [2024-07-14 04:02:22.567105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.854 [2024-07-14 04:02:22.567302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.854 [2024-07-14 04:02:22.567327] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.854 qpair failed and we were unable to recover it. 00:30:03.854 [2024-07-14 04:02:22.567506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.854 [2024-07-14 04:02:22.567707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.854 [2024-07-14 04:02:22.567731] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.854 qpair failed and we were unable to recover it. 00:30:03.854 [2024-07-14 04:02:22.567935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.854 [2024-07-14 04:02:22.568083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.854 [2024-07-14 04:02:22.568107] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.854 qpair failed and we were unable to recover it. 00:30:03.854 [2024-07-14 04:02:22.568263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.854 [2024-07-14 04:02:22.568417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.854 [2024-07-14 04:02:22.568441] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.854 qpair failed and we were unable to recover it. 00:30:03.854 [2024-07-14 04:02:22.568618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.854 [2024-07-14 04:02:22.568766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.854 [2024-07-14 04:02:22.568807] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.854 qpair failed and we were unable to recover it. 00:30:03.854 [2024-07-14 04:02:22.568981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.854 [2024-07-14 04:02:22.569205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.854 [2024-07-14 04:02:22.569233] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.854 qpair failed and we were unable to recover it. 00:30:03.854 [2024-07-14 04:02:22.569442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.854 [2024-07-14 04:02:22.569590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.854 [2024-07-14 04:02:22.569615] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.854 qpair failed and we were unable to recover it. 
00:30:03.854 [2024-07-14 04:02:22.569767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.854 [2024-07-14 04:02:22.569921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.854 [2024-07-14 04:02:22.569963] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.854 qpair failed and we were unable to recover it. 00:30:03.854 [2024-07-14 04:02:22.570172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.854 [2024-07-14 04:02:22.570348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.854 [2024-07-14 04:02:22.570376] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.854 qpair failed and we were unable to recover it. 00:30:03.854 [2024-07-14 04:02:22.570602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.854 [2024-07-14 04:02:22.570792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.854 [2024-07-14 04:02:22.570818] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.854 qpair failed and we were unable to recover it. 00:30:03.854 [2024-07-14 04:02:22.570988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.854 [2024-07-14 04:02:22.571133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.854 [2024-07-14 04:02:22.571158] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.854 qpair failed and we were unable to recover it. 00:30:03.854 [2024-07-14 04:02:22.571357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.854 [2024-07-14 04:02:22.571567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.854 [2024-07-14 04:02:22.571612] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.854 qpair failed and we were unable to recover it. 00:30:03.854 [2024-07-14 04:02:22.571838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.854 [2024-07-14 04:02:22.572043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.854 [2024-07-14 04:02:22.572072] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.854 qpair failed and we were unable to recover it. 00:30:03.854 [2024-07-14 04:02:22.572280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.854 [2024-07-14 04:02:22.572450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.854 [2024-07-14 04:02:22.572479] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.854 qpair failed and we were unable to recover it. 
00:30:03.854 [2024-07-14 04:02:22.572678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.854 [2024-07-14 04:02:22.572856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.854 [2024-07-14 04:02:22.572922] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.854 qpair failed and we were unable to recover it. 00:30:03.854 [2024-07-14 04:02:22.573110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.854 [2024-07-14 04:02:22.573273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.854 [2024-07-14 04:02:22.573298] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.854 qpair failed and we were unable to recover it. 00:30:03.854 [2024-07-14 04:02:22.573475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.854 [2024-07-14 04:02:22.573653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.854 [2024-07-14 04:02:22.573678] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.854 qpair failed and we were unable to recover it. 00:30:03.854 [2024-07-14 04:02:22.573881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.854 [2024-07-14 04:02:22.574106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.854 [2024-07-14 04:02:22.574133] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.854 qpair failed and we were unable to recover it. 00:30:03.854 [2024-07-14 04:02:22.574309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.854 [2024-07-14 04:02:22.574489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.854 [2024-07-14 04:02:22.574513] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.854 qpair failed and we were unable to recover it. 00:30:03.854 [2024-07-14 04:02:22.574716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.854 [2024-07-14 04:02:22.574877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.854 [2024-07-14 04:02:22.574902] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.854 qpair failed and we were unable to recover it. 00:30:03.854 [2024-07-14 04:02:22.575109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.854 [2024-07-14 04:02:22.575312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.854 [2024-07-14 04:02:22.575344] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.855 qpair failed and we were unable to recover it. 
00:30:03.855 [2024-07-14 04:02:22.575581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.855 [2024-07-14 04:02:22.575738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.855 [2024-07-14 04:02:22.575765] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.855 qpair failed and we were unable to recover it. 00:30:03.855 [2024-07-14 04:02:22.575977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.855 [2024-07-14 04:02:22.576133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.855 [2024-07-14 04:02:22.576157] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.855 qpair failed and we were unable to recover it. 00:30:03.855 [2024-07-14 04:02:22.576342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.855 [2024-07-14 04:02:22.576493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.855 [2024-07-14 04:02:22.576533] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.855 qpair failed and we were unable to recover it. 00:30:03.855 [2024-07-14 04:02:22.576735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.855 [2024-07-14 04:02:22.576907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.855 [2024-07-14 04:02:22.576935] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.855 qpair failed and we were unable to recover it. 00:30:03.855 [2024-07-14 04:02:22.577155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.855 [2024-07-14 04:02:22.577326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.855 [2024-07-14 04:02:22.577352] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.855 qpair failed and we were unable to recover it. 00:30:03.855 [2024-07-14 04:02:22.577568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.855 [2024-07-14 04:02:22.577737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.855 [2024-07-14 04:02:22.577765] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.855 qpair failed and we were unable to recover it. 00:30:03.855 [2024-07-14 04:02:22.577924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.855 [2024-07-14 04:02:22.578094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.855 [2024-07-14 04:02:22.578121] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.855 qpair failed and we were unable to recover it. 
00:30:03.855 [2024-07-14 04:02:22.578354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.855 [2024-07-14 04:02:22.578551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.855 [2024-07-14 04:02:22.578596] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.855 qpair failed and we were unable to recover it. 00:30:03.855 [2024-07-14 04:02:22.578813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.855 [2024-07-14 04:02:22.578979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.855 [2024-07-14 04:02:22.579007] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.855 qpair failed and we were unable to recover it. 00:30:03.855 [2024-07-14 04:02:22.579204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.855 [2024-07-14 04:02:22.579387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.855 [2024-07-14 04:02:22.579412] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.855 qpair failed and we were unable to recover it. 00:30:03.855 [2024-07-14 04:02:22.579563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.855 [2024-07-14 04:02:22.579711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.855 [2024-07-14 04:02:22.579736] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.855 qpair failed and we were unable to recover it. 00:30:03.855 [2024-07-14 04:02:22.579909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.855 [2024-07-14 04:02:22.580086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.855 [2024-07-14 04:02:22.580110] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.855 qpair failed and we were unable to recover it. 00:30:03.855 [2024-07-14 04:02:22.580319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.855 [2024-07-14 04:02:22.580618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.855 [2024-07-14 04:02:22.580675] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.855 qpair failed and we were unable to recover it. 00:30:03.855 [2024-07-14 04:02:22.580889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.855 [2024-07-14 04:02:22.581086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.855 [2024-07-14 04:02:22.581111] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.855 qpair failed and we were unable to recover it. 
00:30:03.855 [2024-07-14 04:02:22.581289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.855 [2024-07-14 04:02:22.581468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.855 [2024-07-14 04:02:22.581495] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.855 qpair failed and we were unable to recover it. 00:30:03.855 [2024-07-14 04:02:22.581673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.855 [2024-07-14 04:02:22.581879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.855 [2024-07-14 04:02:22.581904] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.855 qpair failed and we were unable to recover it. 00:30:03.855 [2024-07-14 04:02:22.582114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.855 [2024-07-14 04:02:22.582349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.855 [2024-07-14 04:02:22.582394] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.855 qpair failed and we were unable to recover it. 00:30:03.855 [2024-07-14 04:02:22.582626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.855 [2024-07-14 04:02:22.582831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.855 [2024-07-14 04:02:22.582858] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.855 qpair failed and we were unable to recover it. 00:30:03.855 [2024-07-14 04:02:22.583032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.855 [2024-07-14 04:02:22.583225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.855 [2024-07-14 04:02:22.583252] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.855 qpair failed and we were unable to recover it. 00:30:03.855 [2024-07-14 04:02:22.583553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.855 [2024-07-14 04:02:22.583773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.855 [2024-07-14 04:02:22.583800] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.855 qpair failed and we were unable to recover it. 00:30:03.855 [2024-07-14 04:02:22.584026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.855 [2024-07-14 04:02:22.584218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.855 [2024-07-14 04:02:22.584247] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.855 qpair failed and we were unable to recover it. 
00:30:03.855 [2024-07-14 04:02:22.584436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.855 [2024-07-14 04:02:22.584611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.855 [2024-07-14 04:02:22.584635] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.855 qpair failed and we were unable to recover it. 00:30:03.855 [2024-07-14 04:02:22.584813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.855 [2024-07-14 04:02:22.585028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.855 [2024-07-14 04:02:22.585057] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.855 qpair failed and we were unable to recover it. 00:30:03.855 [2024-07-14 04:02:22.585230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.855 [2024-07-14 04:02:22.585415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.855 [2024-07-14 04:02:22.585439] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.855 qpair failed and we were unable to recover it. 00:30:03.855 [2024-07-14 04:02:22.585624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.855 [2024-07-14 04:02:22.585802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.855 [2024-07-14 04:02:22.585826] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.855 qpair failed and we were unable to recover it. 00:30:03.856 [2024-07-14 04:02:22.585998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.856 [2024-07-14 04:02:22.586176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.856 [2024-07-14 04:02:22.586213] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.856 qpair failed and we were unable to recover it. 00:30:03.856 [2024-07-14 04:02:22.586417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.856 [2024-07-14 04:02:22.586747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.856 [2024-07-14 04:02:22.586799] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.856 qpair failed and we were unable to recover it. 00:30:03.856 [2024-07-14 04:02:22.586997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.856 [2024-07-14 04:02:22.587192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.856 [2024-07-14 04:02:22.587220] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.856 qpair failed and we were unable to recover it. 
00:30:03.856 [2024-07-14 04:02:22.587411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.856 [2024-07-14 04:02:22.587594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.856 [2024-07-14 04:02:22.587619] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.856 qpair failed and we were unable to recover it. 00:30:03.856 [2024-07-14 04:02:22.587802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.856 [2024-07-14 04:02:22.587958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.856 [2024-07-14 04:02:22.587984] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.856 qpair failed and we were unable to recover it. 00:30:03.856 [2024-07-14 04:02:22.588161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.856 [2024-07-14 04:02:22.588338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.856 [2024-07-14 04:02:22.588363] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.856 qpair failed and we were unable to recover it. 00:30:03.856 [2024-07-14 04:02:22.588515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.856 [2024-07-14 04:02:22.588688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.856 [2024-07-14 04:02:22.588715] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.856 qpair failed and we were unable to recover it. 00:30:03.856 [2024-07-14 04:02:22.588934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.856 [2024-07-14 04:02:22.589108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.856 [2024-07-14 04:02:22.589135] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.856 qpair failed and we were unable to recover it. 00:30:03.856 [2024-07-14 04:02:22.589329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.856 [2024-07-14 04:02:22.589480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.856 [2024-07-14 04:02:22.589505] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.856 qpair failed and we were unable to recover it. 00:30:03.856 [2024-07-14 04:02:22.589708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.856 [2024-07-14 04:02:22.589905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.856 [2024-07-14 04:02:22.589935] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.856 qpair failed and we were unable to recover it. 
00:30:03.856 [2024-07-14 04:02:22.590134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.856 [2024-07-14 04:02:22.590390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.856 [2024-07-14 04:02:22.590443] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.856 qpair failed and we were unable to recover it. 00:30:03.856 [2024-07-14 04:02:22.590641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.856 [2024-07-14 04:02:22.590834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.856 [2024-07-14 04:02:22.590862] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.856 qpair failed and we were unable to recover it. 00:30:03.856 [2024-07-14 04:02:22.591076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.856 [2024-07-14 04:02:22.591305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.856 [2024-07-14 04:02:22.591333] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.856 qpair failed and we were unable to recover it. 00:30:03.856 [2024-07-14 04:02:22.591555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.856 [2024-07-14 04:02:22.591748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.856 [2024-07-14 04:02:22.591777] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.856 qpair failed and we were unable to recover it. 00:30:03.856 [2024-07-14 04:02:22.592050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.856 [2024-07-14 04:02:22.592231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.856 [2024-07-14 04:02:22.592257] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.856 qpair failed and we were unable to recover it. 00:30:03.856 [2024-07-14 04:02:22.592436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.856 [2024-07-14 04:02:22.592675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.856 [2024-07-14 04:02:22.592703] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.856 qpair failed and we were unable to recover it. 00:30:03.856 [2024-07-14 04:02:22.592916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.856 [2024-07-14 04:02:22.593142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.856 [2024-07-14 04:02:22.593170] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.856 qpair failed and we were unable to recover it. 
00:30:03.856 [2024-07-14 04:02:22.593399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.856 [2024-07-14 04:02:22.593697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.856 [2024-07-14 04:02:22.593746] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.856 qpair failed and we were unable to recover it. 00:30:03.856 [2024-07-14 04:02:22.593950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.856 [2024-07-14 04:02:22.594153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.856 [2024-07-14 04:02:22.594178] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.856 qpair failed and we were unable to recover it. 00:30:03.856 [2024-07-14 04:02:22.594411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.856 [2024-07-14 04:02:22.594616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.856 [2024-07-14 04:02:22.594643] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.856 qpair failed and we were unable to recover it. 00:30:03.856 [2024-07-14 04:02:22.594810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.856 [2024-07-14 04:02:22.594973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.856 [2024-07-14 04:02:22.594998] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.856 qpair failed and we were unable to recover it. 00:30:03.856 [2024-07-14 04:02:22.595153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.856 [2024-07-14 04:02:22.595361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.856 [2024-07-14 04:02:22.595386] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.856 qpair failed and we were unable to recover it. 00:30:03.856 [2024-07-14 04:02:22.595587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.856 [2024-07-14 04:02:22.595775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.856 [2024-07-14 04:02:22.595802] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.856 qpair failed and we were unable to recover it. 00:30:03.856 [2024-07-14 04:02:22.595988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.856 [2024-07-14 04:02:22.596187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.856 [2024-07-14 04:02:22.596214] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.856 qpair failed and we were unable to recover it. 
00:30:03.856 [2024-07-14 04:02:22.596389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.856 [2024-07-14 04:02:22.596570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.856 [2024-07-14 04:02:22.596594] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.856 qpair failed and we were unable to recover it. 00:30:03.856 [2024-07-14 04:02:22.596773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.856 [2024-07-14 04:02:22.596949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.856 [2024-07-14 04:02:22.596974] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.856 qpair failed and we were unable to recover it. 00:30:03.856 [2024-07-14 04:02:22.597148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.856 [2024-07-14 04:02:22.597334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.856 [2024-07-14 04:02:22.597359] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.856 qpair failed and we were unable to recover it. 00:30:03.856 [2024-07-14 04:02:22.597551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.856 [2024-07-14 04:02:22.597727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.856 [2024-07-14 04:02:22.597756] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.856 qpair failed and we were unable to recover it. 00:30:03.856 [2024-07-14 04:02:22.597957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.856 [2024-07-14 04:02:22.598104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.856 [2024-07-14 04:02:22.598128] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.856 qpair failed and we were unable to recover it. 00:30:03.856 [2024-07-14 04:02:22.598369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.856 [2024-07-14 04:02:22.598544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.856 [2024-07-14 04:02:22.598569] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.857 qpair failed and we were unable to recover it. 00:30:03.857 [2024-07-14 04:02:22.598749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.857 [2024-07-14 04:02:22.598941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.857 [2024-07-14 04:02:22.598969] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.857 qpair failed and we were unable to recover it. 
00:30:03.857 [2024-07-14 04:02:22.599136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.857 [2024-07-14 04:02:22.599310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.857 [2024-07-14 04:02:22.599336] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.857 qpair failed and we were unable to recover it. 00:30:03.857 [2024-07-14 04:02:22.599515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.857 [2024-07-14 04:02:22.599717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.857 [2024-07-14 04:02:22.599741] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.857 qpair failed and we were unable to recover it. 00:30:03.857 [2024-07-14 04:02:22.599922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.857 [2024-07-14 04:02:22.600114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.857 [2024-07-14 04:02:22.600144] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.857 qpair failed and we were unable to recover it. 00:30:03.857 [2024-07-14 04:02:22.600359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.857 [2024-07-14 04:02:22.600533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.857 [2024-07-14 04:02:22.600558] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.857 qpair failed and we were unable to recover it. 00:30:03.857 [2024-07-14 04:02:22.600761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.857 [2024-07-14 04:02:22.600950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.857 [2024-07-14 04:02:22.600978] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.857 qpair failed and we were unable to recover it. 00:30:03.857 [2024-07-14 04:02:22.601152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.857 [2024-07-14 04:02:22.601328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.857 [2024-07-14 04:02:22.601352] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.857 qpair failed and we were unable to recover it. 00:30:03.857 [2024-07-14 04:02:22.601531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.857 [2024-07-14 04:02:22.601706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.857 [2024-07-14 04:02:22.601734] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.857 qpair failed and we were unable to recover it. 
00:30:03.857 [2024-07-14 04:02:22.601906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.857 [2024-07-14 04:02:22.602162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.857 [2024-07-14 04:02:22.602187] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.857 qpair failed and we were unable to recover it. 00:30:03.857 [2024-07-14 04:02:22.602336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.857 [2024-07-14 04:02:22.602514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.857 [2024-07-14 04:02:22.602540] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.857 qpair failed and we were unable to recover it. 00:30:03.857 [2024-07-14 04:02:22.602712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.857 [2024-07-14 04:02:22.602914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.857 [2024-07-14 04:02:22.602940] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.857 qpair failed and we were unable to recover it. 00:30:03.857 [2024-07-14 04:02:22.603137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.857 [2024-07-14 04:02:22.603383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.857 [2024-07-14 04:02:22.603438] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.857 qpair failed and we were unable to recover it. 00:30:03.857 [2024-07-14 04:02:22.603615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.857 [2024-07-14 04:02:22.603818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.857 [2024-07-14 04:02:22.603843] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.857 qpair failed and we were unable to recover it. 00:30:03.857 [2024-07-14 04:02:22.604038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.857 [2024-07-14 04:02:22.604220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.857 [2024-07-14 04:02:22.604244] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.857 qpair failed and we were unable to recover it. 00:30:03.857 [2024-07-14 04:02:22.604400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.857 [2024-07-14 04:02:22.604602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.857 [2024-07-14 04:02:22.604630] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.857 qpair failed and we were unable to recover it. 
00:30:03.857 [2024-07-14 04:02:22.604875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.857 [2024-07-14 04:02:22.605069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.857 [2024-07-14 04:02:22.605097] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.857 qpair failed and we were unable to recover it. 00:30:03.857 [2024-07-14 04:02:22.605338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.857 [2024-07-14 04:02:22.605611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.857 [2024-07-14 04:02:22.605657] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.857 qpair failed and we were unable to recover it. 00:30:03.857 [2024-07-14 04:02:22.605897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.857 [2024-07-14 04:02:22.606101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.857 [2024-07-14 04:02:22.606129] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.857 qpair failed and we were unable to recover it. 00:30:03.857 [2024-07-14 04:02:22.606327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.857 [2024-07-14 04:02:22.606504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.857 [2024-07-14 04:02:22.606549] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.857 qpair failed and we were unable to recover it. 00:30:03.857 [2024-07-14 04:02:22.606721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.857 [2024-07-14 04:02:22.606937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.857 [2024-07-14 04:02:22.606965] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.857 qpair failed and we were unable to recover it. 00:30:03.857 [2024-07-14 04:02:22.607131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.857 [2024-07-14 04:02:22.607356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.857 [2024-07-14 04:02:22.607380] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.857 qpair failed and we were unable to recover it. 00:30:03.857 [2024-07-14 04:02:22.607589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.857 [2024-07-14 04:02:22.607736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.857 [2024-07-14 04:02:22.607761] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.857 qpair failed and we were unable to recover it. 
00:30:03.857 [2024-07-14 04:02:22.607965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.857 [2024-07-14 04:02:22.608167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.857 [2024-07-14 04:02:22.608209] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.857 qpair failed and we were unable to recover it. 00:30:03.857 [2024-07-14 04:02:22.608399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.857 [2024-07-14 04:02:22.608596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.857 [2024-07-14 04:02:22.608622] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.857 qpair failed and we were unable to recover it. 00:30:03.857 [2024-07-14 04:02:22.608801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.857 [2024-07-14 04:02:22.608984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.857 [2024-07-14 04:02:22.609009] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.857 qpair failed and we were unable to recover it. 00:30:03.857 [2024-07-14 04:02:22.609161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.857 [2024-07-14 04:02:22.609337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.857 [2024-07-14 04:02:22.609378] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.857 qpair failed and we were unable to recover it. 00:30:03.857 [2024-07-14 04:02:22.609609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.857 [2024-07-14 04:02:22.609814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.857 [2024-07-14 04:02:22.609841] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.857 qpair failed and we were unable to recover it. 00:30:03.857 [2024-07-14 04:02:22.610084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.857 [2024-07-14 04:02:22.610412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.857 [2024-07-14 04:02:22.610464] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.857 qpair failed and we were unable to recover it. 00:30:03.857 [2024-07-14 04:02:22.610810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.857 [2024-07-14 04:02:22.611016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.857 [2024-07-14 04:02:22.611046] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.857 qpair failed and we were unable to recover it. 
00:30:03.857 [2024-07-14 04:02:22.611210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.857 [2024-07-14 04:02:22.611420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.857 [2024-07-14 04:02:22.611445] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.857 qpair failed and we were unable to recover it. 00:30:03.858 [2024-07-14 04:02:22.611637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.858 [2024-07-14 04:02:22.611812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.858 [2024-07-14 04:02:22.611837] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.858 qpair failed and we were unable to recover it. 00:30:03.858 [2024-07-14 04:02:22.612002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.858 [2024-07-14 04:02:22.612161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.858 [2024-07-14 04:02:22.612201] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.858 qpair failed and we were unable to recover it. 00:30:03.858 [2024-07-14 04:02:22.612511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.858 [2024-07-14 04:02:22.612736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.858 [2024-07-14 04:02:22.612762] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.858 qpair failed and we were unable to recover it. 00:30:03.858 [2024-07-14 04:02:22.612959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.858 [2024-07-14 04:02:22.613131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.858 [2024-07-14 04:02:22.613158] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.858 qpair failed and we were unable to recover it. 00:30:03.858 [2024-07-14 04:02:22.613363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.858 [2024-07-14 04:02:22.613565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.858 [2024-07-14 04:02:22.613592] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.858 qpair failed and we were unable to recover it. 00:30:03.858 [2024-07-14 04:02:22.613816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.858 [2024-07-14 04:02:22.614015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.858 [2024-07-14 04:02:22.614043] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.858 qpair failed and we were unable to recover it. 
00:30:03.858 [2024-07-14 04:02:22.614247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.858 [2024-07-14 04:02:22.614492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.858 [2024-07-14 04:02:22.614523] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.858 qpair failed and we were unable to recover it. 00:30:03.858 [2024-07-14 04:02:22.614713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.858 [2024-07-14 04:02:22.614924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.858 [2024-07-14 04:02:22.614950] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.858 qpair failed and we were unable to recover it. 00:30:03.858 [2024-07-14 04:02:22.615136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.858 [2024-07-14 04:02:22.615306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.858 [2024-07-14 04:02:22.615372] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.858 qpair failed and we were unable to recover it. 00:30:03.858 [2024-07-14 04:02:22.615643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.858 [2024-07-14 04:02:22.615840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.858 [2024-07-14 04:02:22.615873] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.858 qpair failed and we were unable to recover it. 00:30:03.858 [2024-07-14 04:02:22.616073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.858 [2024-07-14 04:02:22.616293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.858 [2024-07-14 04:02:22.616320] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.858 qpair failed and we were unable to recover it. 00:30:03.858 [2024-07-14 04:02:22.616490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.858 [2024-07-14 04:02:22.616710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.858 [2024-07-14 04:02:22.616737] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.858 qpair failed and we were unable to recover it. 00:30:03.858 [2024-07-14 04:02:22.616943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.858 [2024-07-14 04:02:22.617212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.858 [2024-07-14 04:02:22.617264] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.858 qpair failed and we were unable to recover it. 
00:30:03.858 [2024-07-14 04:02:22.617466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.858 [2024-07-14 04:02:22.617657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.858 [2024-07-14 04:02:22.617686] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.858 qpair failed and we were unable to recover it. 00:30:03.858 [2024-07-14 04:02:22.617880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.858 [2024-07-14 04:02:22.618077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.858 [2024-07-14 04:02:22.618105] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.858 qpair failed and we were unable to recover it. 00:30:03.858 [2024-07-14 04:02:22.618302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.858 [2024-07-14 04:02:22.618471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.858 [2024-07-14 04:02:22.618500] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.858 qpair failed and we were unable to recover it. 00:30:03.858 [2024-07-14 04:02:22.618698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.858 [2024-07-14 04:02:22.618894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.858 [2024-07-14 04:02:22.618928] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.858 qpair failed and we were unable to recover it. 00:30:03.858 [2024-07-14 04:02:22.619163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.858 [2024-07-14 04:02:22.619362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.858 [2024-07-14 04:02:22.619423] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.858 qpair failed and we were unable to recover it. 00:30:03.858 [2024-07-14 04:02:22.619700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.858 [2024-07-14 04:02:22.619955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.858 [2024-07-14 04:02:22.619983] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.858 qpair failed and we were unable to recover it. 00:30:03.858 [2024-07-14 04:02:22.620206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.858 [2024-07-14 04:02:22.620471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.858 [2024-07-14 04:02:22.620520] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.858 qpair failed and we were unable to recover it. 
00:30:03.858 [2024-07-14 04:02:22.620713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.858 [2024-07-14 04:02:22.620899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.858 [2024-07-14 04:02:22.620924] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.858 qpair failed and we were unable to recover it. 00:30:03.858 [2024-07-14 04:02:22.621119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.858 [2024-07-14 04:02:22.621333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.858 [2024-07-14 04:02:22.621383] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.858 qpair failed and we were unable to recover it. 00:30:03.858 [2024-07-14 04:02:22.621678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.858 [2024-07-14 04:02:22.621892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.858 [2024-07-14 04:02:22.621928] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.858 qpair failed and we were unable to recover it. 00:30:03.858 [2024-07-14 04:02:22.622161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.858 [2024-07-14 04:02:22.622383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.858 [2024-07-14 04:02:22.622428] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.858 qpair failed and we were unable to recover it. 00:30:03.858 [2024-07-14 04:02:22.622628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.858 [2024-07-14 04:02:22.622824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.858 [2024-07-14 04:02:22.622851] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.858 qpair failed and we were unable to recover it. 00:30:03.858 [2024-07-14 04:02:22.623094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.858 [2024-07-14 04:02:22.623417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.858 [2024-07-14 04:02:22.623465] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.858 qpair failed and we were unable to recover it. 00:30:03.858 [2024-07-14 04:02:22.623820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.858 [2024-07-14 04:02:22.624022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.858 [2024-07-14 04:02:22.624049] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.858 qpair failed and we were unable to recover it. 
00:30:03.858 [2024-07-14 04:02:22.624269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.858 [2024-07-14 04:02:22.624457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.858 [2024-07-14 04:02:22.624484] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.858 qpair failed and we were unable to recover it. 00:30:03.858 [2024-07-14 04:02:22.624686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.858 [2024-07-14 04:02:22.624896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.858 [2024-07-14 04:02:22.624924] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.858 qpair failed and we were unable to recover it. 00:30:03.858 [2024-07-14 04:02:22.625092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.858 [2024-07-14 04:02:22.625307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.858 [2024-07-14 04:02:22.625331] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.858 qpair failed and we were unable to recover it. 00:30:03.858 [2024-07-14 04:02:22.625638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.858 [2024-07-14 04:02:22.625888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.859 [2024-07-14 04:02:22.625919] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.859 qpair failed and we were unable to recover it. 00:30:03.859 [2024-07-14 04:02:22.626120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.859 [2024-07-14 04:02:22.626509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.859 [2024-07-14 04:02:22.626567] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.859 qpair failed and we were unable to recover it. 00:30:03.859 [2024-07-14 04:02:22.626801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.859 [2024-07-14 04:02:22.626984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.859 [2024-07-14 04:02:22.627012] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.859 qpair failed and we were unable to recover it. 00:30:03.859 [2024-07-14 04:02:22.627206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.859 [2024-07-14 04:02:22.627460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.859 [2024-07-14 04:02:22.627508] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.859 qpair failed and we were unable to recover it. 
00:30:03.859 [2024-07-14 04:02:22.627775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.859 [2024-07-14 04:02:22.628019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.859 [2024-07-14 04:02:22.628044] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.859 qpair failed and we were unable to recover it. 00:30:03.859 [2024-07-14 04:02:22.628251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.859 [2024-07-14 04:02:22.628485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.859 [2024-07-14 04:02:22.628531] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.859 qpair failed and we were unable to recover it. 00:30:03.859 [2024-07-14 04:02:22.628755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.859 [2024-07-14 04:02:22.629020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.859 [2024-07-14 04:02:22.629048] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.859 qpair failed and we were unable to recover it. 00:30:03.859 [2024-07-14 04:02:22.629252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.859 [2024-07-14 04:02:22.629451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.859 [2024-07-14 04:02:22.629475] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.859 qpair failed and we were unable to recover it. 00:30:03.859 [2024-07-14 04:02:22.629664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.859 [2024-07-14 04:02:22.629870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.859 [2024-07-14 04:02:22.629898] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.859 qpair failed and we were unable to recover it. 00:30:03.859 [2024-07-14 04:02:22.630116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.859 [2024-07-14 04:02:22.630401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.859 [2024-07-14 04:02:22.630474] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.859 qpair failed and we were unable to recover it. 00:30:03.859 [2024-07-14 04:02:22.630692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.859 [2024-07-14 04:02:22.630864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.859 [2024-07-14 04:02:22.630898] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.859 qpair failed and we were unable to recover it. 
00:30:03.859 [2024-07-14 04:02:22.631120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.859 [2024-07-14 04:02:22.631315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.859 [2024-07-14 04:02:22.631341] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.859 qpair failed and we were unable to recover it. 00:30:03.859 [2024-07-14 04:02:22.631593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.859 [2024-07-14 04:02:22.631808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.859 [2024-07-14 04:02:22.631835] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.859 qpair failed and we were unable to recover it. 00:30:03.859 [2024-07-14 04:02:22.632046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.859 [2024-07-14 04:02:22.632246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.859 [2024-07-14 04:02:22.632291] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.859 qpair failed and we were unable to recover it. 00:30:03.859 [2024-07-14 04:02:22.632478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.859 [2024-07-14 04:02:22.632674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.859 [2024-07-14 04:02:22.632699] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.859 qpair failed and we were unable to recover it. 00:30:03.859 [2024-07-14 04:02:22.632895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.859 [2024-07-14 04:02:22.633094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.859 [2024-07-14 04:02:22.633121] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.859 qpair failed and we were unable to recover it. 00:30:03.859 [2024-07-14 04:02:22.633434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.859 [2024-07-14 04:02:22.633820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.859 [2024-07-14 04:02:22.633899] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.859 qpair failed and we were unable to recover it. 00:30:03.859 [2024-07-14 04:02:22.634127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.859 [2024-07-14 04:02:22.634422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.859 [2024-07-14 04:02:22.634454] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.859 qpair failed and we were unable to recover it. 
00:30:03.859 [2024-07-14 04:02:22.634691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.859 [2024-07-14 04:02:22.634861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.859 [2024-07-14 04:02:22.634897] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.859 qpair failed and we were unable to recover it. 00:30:03.859 [2024-07-14 04:02:22.635087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.859 [2024-07-14 04:02:22.635286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.859 [2024-07-14 04:02:22.635313] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.859 qpair failed and we were unable to recover it. 00:30:03.859 [2024-07-14 04:02:22.635485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.859 [2024-07-14 04:02:22.635739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.859 [2024-07-14 04:02:22.635768] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.859 qpair failed and we were unable to recover it. 00:30:03.859 [2024-07-14 04:02:22.636003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.859 [2024-07-14 04:02:22.636283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.859 [2024-07-14 04:02:22.636334] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.859 qpair failed and we were unable to recover it. 00:30:03.859 [2024-07-14 04:02:22.636531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.859 [2024-07-14 04:02:22.636852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.859 [2024-07-14 04:02:22.636923] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.859 qpair failed and we were unable to recover it. 00:30:03.859 [2024-07-14 04:02:22.637150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.859 [2024-07-14 04:02:22.637322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.859 [2024-07-14 04:02:22.637349] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.859 qpair failed and we were unable to recover it. 00:30:03.859 [2024-07-14 04:02:22.637670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.859 [2024-07-14 04:02:22.637926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.859 [2024-07-14 04:02:22.637955] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.859 qpair failed and we were unable to recover it. 
00:30:03.859 [2024-07-14 04:02:22.638155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.859 [2024-07-14 04:02:22.638352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.859 [2024-07-14 04:02:22.638379] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.859 qpair failed and we were unable to recover it. 00:30:03.859 [2024-07-14 04:02:22.638582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.859 [2024-07-14 04:02:22.638808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.859 [2024-07-14 04:02:22.638836] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.859 qpair failed and we were unable to recover it. 00:30:03.859 [2024-07-14 04:02:22.639041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.859 [2024-07-14 04:02:22.639383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.859 [2024-07-14 04:02:22.639445] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.859 qpair failed and we were unable to recover it. 00:30:03.859 [2024-07-14 04:02:22.639681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.859 [2024-07-14 04:02:22.639881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.859 [2024-07-14 04:02:22.639907] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.860 qpair failed and we were unable to recover it. 00:30:03.860 [2024-07-14 04:02:22.640086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.860 [2024-07-14 04:02:22.640319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.860 [2024-07-14 04:02:22.640347] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.860 qpair failed and we were unable to recover it. 00:30:03.860 [2024-07-14 04:02:22.640527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.860 [2024-07-14 04:02:22.640729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.860 [2024-07-14 04:02:22.640754] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.860 qpair failed and we were unable to recover it. 00:30:03.860 [2024-07-14 04:02:22.640985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.860 [2024-07-14 04:02:22.641193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.860 [2024-07-14 04:02:22.641218] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.860 qpair failed and we were unable to recover it. 
00:30:03.860 [2024-07-14 04:02:22.641395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.860 [2024-07-14 04:02:22.641584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.860 [2024-07-14 04:02:22.641608] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.860 qpair failed and we were unable to recover it. 00:30:03.860 [2024-07-14 04:02:22.641780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.860 [2024-07-14 04:02:22.641971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.860 [2024-07-14 04:02:22.642026] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.860 qpair failed and we were unable to recover it. 00:30:03.860 [2024-07-14 04:02:22.642202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.860 [2024-07-14 04:02:22.642418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.860 [2024-07-14 04:02:22.642466] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.860 qpair failed and we were unable to recover it. 00:30:03.860 [2024-07-14 04:02:22.642696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.860 [2024-07-14 04:02:22.642884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.860 [2024-07-14 04:02:22.642910] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.860 qpair failed and we were unable to recover it. 00:30:03.860 [2024-07-14 04:02:22.643126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.860 [2024-07-14 04:02:22.643274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.860 [2024-07-14 04:02:22.643299] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.860 qpair failed and we were unable to recover it. 00:30:03.860 [2024-07-14 04:02:22.643479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.860 [2024-07-14 04:02:22.643659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.860 [2024-07-14 04:02:22.643686] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.860 qpair failed and we were unable to recover it. 00:30:03.860 [2024-07-14 04:02:22.643871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.860 [2024-07-14 04:02:22.644055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.860 [2024-07-14 04:02:22.644098] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.860 qpair failed and we were unable to recover it. 
00:30:03.860 - 00:30:03.865 [2024-07-14 04:02:22.644310 - 2024-07-14 04:02:22.713473] Every remaining connection attempt in this interval fails with the same sequence: posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 (reported twice per attempt), followed by nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420, and qpair failed and we were unable to recover it.
00:30:03.865 [2024-07-14 04:02:22.713619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.865 [2024-07-14 04:02:22.713769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.865 [2024-07-14 04:02:22.713794] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.865 qpair failed and we were unable to recover it. 00:30:03.865 [2024-07-14 04:02:22.713977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.865 [2024-07-14 04:02:22.714214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.865 [2024-07-14 04:02:22.714275] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.865 qpair failed and we were unable to recover it. 00:30:03.865 [2024-07-14 04:02:22.714665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.865 [2024-07-14 04:02:22.714909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.865 [2024-07-14 04:02:22.714939] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.865 qpair failed and we were unable to recover it. 00:30:03.865 [2024-07-14 04:02:22.715168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.865 [2024-07-14 04:02:22.715473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.865 [2024-07-14 04:02:22.715534] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.865 qpair failed and we were unable to recover it. 00:30:03.865 [2024-07-14 04:02:22.715741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.865 [2024-07-14 04:02:22.715898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.865 [2024-07-14 04:02:22.715923] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.865 qpair failed and we were unable to recover it. 00:30:03.865 [2024-07-14 04:02:22.716132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.865 [2024-07-14 04:02:22.716393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.865 [2024-07-14 04:02:22.716446] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.865 qpair failed and we were unable to recover it. 00:30:03.865 [2024-07-14 04:02:22.716708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.865 [2024-07-14 04:02:22.716878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.865 [2024-07-14 04:02:22.716903] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.865 qpair failed and we were unable to recover it. 
00:30:03.865 [2024-07-14 04:02:22.717087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.865 [2024-07-14 04:02:22.717288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.865 [2024-07-14 04:02:22.717314] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.865 qpair failed and we were unable to recover it. 00:30:03.865 [2024-07-14 04:02:22.717491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.865 [2024-07-14 04:02:22.717793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.865 [2024-07-14 04:02:22.717857] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.865 qpair failed and we were unable to recover it. 00:30:03.865 [2024-07-14 04:02:22.718097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.865 [2024-07-14 04:02:22.718398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.865 [2024-07-14 04:02:22.718462] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.865 qpair failed and we were unable to recover it. 00:30:03.865 [2024-07-14 04:02:22.718726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.865 [2024-07-14 04:02:22.718928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.865 [2024-07-14 04:02:22.718957] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.865 qpair failed and we were unable to recover it. 00:30:03.865 [2024-07-14 04:02:22.719118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.865 [2024-07-14 04:02:22.719286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.865 [2024-07-14 04:02:22.719313] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.865 qpair failed and we were unable to recover it. 00:30:03.865 [2024-07-14 04:02:22.719539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.865 [2024-07-14 04:02:22.719714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.865 [2024-07-14 04:02:22.719741] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.865 qpair failed and we were unable to recover it. 00:30:03.865 [2024-07-14 04:02:22.719943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.865 [2024-07-14 04:02:22.720276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.865 [2024-07-14 04:02:22.720351] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.865 qpair failed and we were unable to recover it. 
00:30:03.865 [2024-07-14 04:02:22.720748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.865 [2024-07-14 04:02:22.720978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.865 [2024-07-14 04:02:22.721004] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.865 qpair failed and we were unable to recover it. 00:30:03.865 [2024-07-14 04:02:22.721204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.865 [2024-07-14 04:02:22.721441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.865 [2024-07-14 04:02:22.721501] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.865 qpair failed and we were unable to recover it. 00:30:03.865 [2024-07-14 04:02:22.721697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.865 [2024-07-14 04:02:22.721912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.865 [2024-07-14 04:02:22.721940] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.865 qpair failed and we were unable to recover it. 00:30:03.865 [2024-07-14 04:02:22.722175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.866 [2024-07-14 04:02:22.722415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.866 [2024-07-14 04:02:22.722439] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.866 qpair failed and we were unable to recover it. 00:30:03.866 [2024-07-14 04:02:22.722637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.866 [2024-07-14 04:02:22.722902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.866 [2024-07-14 04:02:22.722930] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.866 qpair failed and we were unable to recover it. 00:30:03.866 [2024-07-14 04:02:22.723124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.866 [2024-07-14 04:02:22.723398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.866 [2024-07-14 04:02:22.723448] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.866 qpair failed and we were unable to recover it. 00:30:03.866 [2024-07-14 04:02:22.723650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.866 [2024-07-14 04:02:22.723878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.866 [2024-07-14 04:02:22.723905] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.866 qpair failed and we were unable to recover it. 
00:30:03.866 [2024-07-14 04:02:22.724104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.866 [2024-07-14 04:02:22.724315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.866 [2024-07-14 04:02:22.724340] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.866 qpair failed and we were unable to recover it. 00:30:03.866 [2024-07-14 04:02:22.724524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.866 [2024-07-14 04:02:22.724694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.866 [2024-07-14 04:02:22.724718] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.866 qpair failed and we were unable to recover it. 00:30:03.866 [2024-07-14 04:02:22.724971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.866 [2024-07-14 04:02:22.725263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.866 [2024-07-14 04:02:22.725311] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.866 qpair failed and we were unable to recover it. 00:30:03.866 [2024-07-14 04:02:22.725511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.866 [2024-07-14 04:02:22.725737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.866 [2024-07-14 04:02:22.725764] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.866 qpair failed and we were unable to recover it. 00:30:03.866 [2024-07-14 04:02:22.725966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.866 [2024-07-14 04:02:22.726137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.866 [2024-07-14 04:02:22.726178] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.866 qpair failed and we were unable to recover it. 00:30:03.866 [2024-07-14 04:02:22.726485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.866 [2024-07-14 04:02:22.726717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.866 [2024-07-14 04:02:22.726742] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.866 qpair failed and we were unable to recover it. 00:30:03.866 [2024-07-14 04:02:22.726969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.866 [2024-07-14 04:02:22.727228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.866 [2024-07-14 04:02:22.727280] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.866 qpair failed and we were unable to recover it. 
00:30:03.866 [2024-07-14 04:02:22.727483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.866 [2024-07-14 04:02:22.727701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.866 [2024-07-14 04:02:22.727728] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.866 qpair failed and we were unable to recover it. 00:30:03.866 [2024-07-14 04:02:22.727921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.866 [2024-07-14 04:02:22.728092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.866 [2024-07-14 04:02:22.728119] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.866 qpair failed and we were unable to recover it. 00:30:03.866 [2024-07-14 04:02:22.728329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.866 [2024-07-14 04:02:22.728641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.866 [2024-07-14 04:02:22.728693] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.866 qpair failed and we were unable to recover it. 00:30:03.866 [2024-07-14 04:02:22.728918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.866 [2024-07-14 04:02:22.729239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.866 [2024-07-14 04:02:22.729301] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.866 qpair failed and we were unable to recover it. 00:30:03.866 [2024-07-14 04:02:22.729530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.866 [2024-07-14 04:02:22.729877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.866 [2024-07-14 04:02:22.729929] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.866 qpair failed and we were unable to recover it. 00:30:03.866 [2024-07-14 04:02:22.730152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.866 [2024-07-14 04:02:22.730328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.866 [2024-07-14 04:02:22.730356] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.866 qpair failed and we were unable to recover it. 00:30:03.866 [2024-07-14 04:02:22.730532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.866 [2024-07-14 04:02:22.730861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.866 [2024-07-14 04:02:22.730950] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.866 qpair failed and we were unable to recover it. 
00:30:03.866 [2024-07-14 04:02:22.731184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.866 [2024-07-14 04:02:22.731519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.866 [2024-07-14 04:02:22.731579] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.866 qpair failed and we were unable to recover it. 00:30:03.866 [2024-07-14 04:02:22.731818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.866 [2024-07-14 04:02:22.732020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.866 [2024-07-14 04:02:22.732048] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.866 qpair failed and we were unable to recover it. 00:30:03.866 [2024-07-14 04:02:22.732269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.866 [2024-07-14 04:02:22.732465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.866 [2024-07-14 04:02:22.732491] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.866 qpair failed and we were unable to recover it. 00:30:03.866 [2024-07-14 04:02:22.732833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.866 [2024-07-14 04:02:22.733104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.866 [2024-07-14 04:02:22.733132] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.866 qpair failed and we were unable to recover it. 00:30:03.866 [2024-07-14 04:02:22.733306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.866 [2024-07-14 04:02:22.733598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.866 [2024-07-14 04:02:22.733658] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.866 qpair failed and we were unable to recover it. 00:30:03.866 [2024-07-14 04:02:22.733884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.866 [2024-07-14 04:02:22.734092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.866 [2024-07-14 04:02:22.734117] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.866 qpair failed and we were unable to recover it. 00:30:03.866 [2024-07-14 04:02:22.734298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.866 [2024-07-14 04:02:22.734566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.866 [2024-07-14 04:02:22.734618] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.866 qpair failed and we were unable to recover it. 
00:30:03.866 [2024-07-14 04:02:22.734814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.866 [2024-07-14 04:02:22.735018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.866 [2024-07-14 04:02:22.735045] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.866 qpair failed and we were unable to recover it. 00:30:03.866 [2024-07-14 04:02:22.735220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.866 [2024-07-14 04:02:22.735464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.866 [2024-07-14 04:02:22.735515] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.866 qpair failed and we were unable to recover it. 00:30:03.866 [2024-07-14 04:02:22.735688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.866 [2024-07-14 04:02:22.735870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.866 [2024-07-14 04:02:22.735896] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.866 qpair failed and we were unable to recover it. 00:30:03.866 [2024-07-14 04:02:22.736077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.866 [2024-07-14 04:02:22.736243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.866 [2024-07-14 04:02:22.736270] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.866 qpair failed and we were unable to recover it. 00:30:03.866 [2024-07-14 04:02:22.736441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.866 [2024-07-14 04:02:22.736610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.866 [2024-07-14 04:02:22.736637] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.866 qpair failed and we were unable to recover it. 00:30:03.866 [2024-07-14 04:02:22.736825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.866 [2024-07-14 04:02:22.737026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.867 [2024-07-14 04:02:22.737053] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.867 qpair failed and we were unable to recover it. 00:30:03.867 [2024-07-14 04:02:22.737219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.867 [2024-07-14 04:02:22.737434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.867 [2024-07-14 04:02:22.737462] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.867 qpair failed and we were unable to recover it. 
00:30:03.867 [2024-07-14 04:02:22.737659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.867 [2024-07-14 04:02:22.737886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.867 [2024-07-14 04:02:22.737914] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.867 qpair failed and we were unable to recover it. 00:30:03.867 [2024-07-14 04:02:22.738169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.867 [2024-07-14 04:02:22.738585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.867 [2024-07-14 04:02:22.738640] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.867 qpair failed and we were unable to recover it. 00:30:03.867 [2024-07-14 04:02:22.738864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.867 [2024-07-14 04:02:22.739090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.867 [2024-07-14 04:02:22.739117] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.867 qpair failed and we were unable to recover it. 00:30:03.867 [2024-07-14 04:02:22.739318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.867 [2024-07-14 04:02:22.739576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.867 [2024-07-14 04:02:22.739625] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.867 qpair failed and we were unable to recover it. 00:30:03.867 [2024-07-14 04:02:22.739829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.867 [2024-07-14 04:02:22.740017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.867 [2024-07-14 04:02:22.740045] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.867 qpair failed and we were unable to recover it. 00:30:03.867 [2024-07-14 04:02:22.740248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.867 [2024-07-14 04:02:22.740406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.867 [2024-07-14 04:02:22.740447] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.867 qpair failed and we were unable to recover it. 00:30:03.867 [2024-07-14 04:02:22.740662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.867 [2024-07-14 04:02:22.740889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.867 [2024-07-14 04:02:22.740918] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.867 qpair failed and we were unable to recover it. 
00:30:03.867 [2024-07-14 04:02:22.741153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.867 [2024-07-14 04:02:22.741306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.867 [2024-07-14 04:02:22.741331] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.867 qpair failed and we were unable to recover it. 00:30:03.867 [2024-07-14 04:02:22.741507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.867 [2024-07-14 04:02:22.741759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.867 [2024-07-14 04:02:22.741807] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.867 qpair failed and we were unable to recover it. 00:30:03.867 [2024-07-14 04:02:22.741973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.867 [2024-07-14 04:02:22.742189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.867 [2024-07-14 04:02:22.742214] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.867 qpair failed and we were unable to recover it. 00:30:03.867 [2024-07-14 04:02:22.742371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.867 [2024-07-14 04:02:22.742580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.867 [2024-07-14 04:02:22.742608] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.867 qpair failed and we were unable to recover it. 00:30:03.867 [2024-07-14 04:02:22.742835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.867 [2024-07-14 04:02:22.743036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.867 [2024-07-14 04:02:22.743061] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.867 qpair failed and we were unable to recover it. 00:30:03.867 [2024-07-14 04:02:22.743258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.867 [2024-07-14 04:02:22.743430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.867 [2024-07-14 04:02:22.743457] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.867 qpair failed and we were unable to recover it. 00:30:03.867 [2024-07-14 04:02:22.743673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.867 [2024-07-14 04:02:22.743880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.867 [2024-07-14 04:02:22.743908] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.867 qpair failed and we were unable to recover it. 
00:30:03.867 [2024-07-14 04:02:22.744080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.867 [2024-07-14 04:02:22.744315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.867 [2024-07-14 04:02:22.744340] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.867 qpair failed and we were unable to recover it. 00:30:03.867 [2024-07-14 04:02:22.744492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.867 [2024-07-14 04:02:22.744645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.867 [2024-07-14 04:02:22.744671] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.867 qpair failed and we were unable to recover it. 00:30:03.867 [2024-07-14 04:02:22.744846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.867 [2024-07-14 04:02:22.745024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.867 [2024-07-14 04:02:22.745049] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.867 qpair failed and we were unable to recover it. 00:30:03.867 [2024-07-14 04:02:22.745395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.867 [2024-07-14 04:02:22.745806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.867 [2024-07-14 04:02:22.745864] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.867 qpair failed and we were unable to recover it. 00:30:03.867 [2024-07-14 04:02:22.746068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.867 [2024-07-14 04:02:22.746276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.867 [2024-07-14 04:02:22.746301] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.867 qpair failed and we were unable to recover it. 00:30:03.867 [2024-07-14 04:02:22.746507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.867 [2024-07-14 04:02:22.746698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.867 [2024-07-14 04:02:22.746724] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.867 qpair failed and we were unable to recover it. 00:30:03.867 [2024-07-14 04:02:22.746979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.867 [2024-07-14 04:02:22.747191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.867 [2024-07-14 04:02:22.747218] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.867 qpair failed and we were unable to recover it. 
00:30:03.867 [2024-07-14 04:02:22.747417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.867 [2024-07-14 04:02:22.747596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.867 [2024-07-14 04:02:22.747622] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.867 qpair failed and we were unable to recover it. 00:30:03.867 [2024-07-14 04:02:22.747775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.867 [2024-07-14 04:02:22.747970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.867 [2024-07-14 04:02:22.747996] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.867 qpair failed and we were unable to recover it. 00:30:03.867 [2024-07-14 04:02:22.748173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.867 [2024-07-14 04:02:22.748346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.867 [2024-07-14 04:02:22.748370] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.867 qpair failed and we were unable to recover it. 00:30:03.867 [2024-07-14 04:02:22.748598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.867 [2024-07-14 04:02:22.748803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.867 [2024-07-14 04:02:22.748835] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.867 qpair failed and we were unable to recover it. 00:30:03.867 [2024-07-14 04:02:22.749039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.867 [2024-07-14 04:02:22.749229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.867 [2024-07-14 04:02:22.749258] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.867 qpair failed and we were unable to recover it. 00:30:03.867 [2024-07-14 04:02:22.749477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.867 [2024-07-14 04:02:22.749710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.867 [2024-07-14 04:02:22.749734] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.867 qpair failed and we were unable to recover it. 00:30:03.867 [2024-07-14 04:02:22.749943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.867 [2024-07-14 04:02:22.750290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.867 [2024-07-14 04:02:22.750346] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.867 qpair failed and we were unable to recover it. 
00:30:03.867 [2024-07-14 04:02:22.750546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.867 [2024-07-14 04:02:22.750738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.867 [2024-07-14 04:02:22.750765] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.867 qpair failed and we were unable to recover it. 00:30:03.868 [2024-07-14 04:02:22.751009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.868 [2024-07-14 04:02:22.751352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.868 [2024-07-14 04:02:22.751401] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.868 qpair failed and we were unable to recover it. 00:30:03.868 [2024-07-14 04:02:22.751626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.868 [2024-07-14 04:02:22.751817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.868 [2024-07-14 04:02:22.751844] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.868 qpair failed and we were unable to recover it. 00:30:03.868 [2024-07-14 04:02:22.752079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.868 [2024-07-14 04:02:22.752439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.868 [2024-07-14 04:02:22.752496] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.868 qpair failed and we were unable to recover it. 00:30:03.868 [2024-07-14 04:02:22.752735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.868 [2024-07-14 04:02:22.752920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.868 [2024-07-14 04:02:22.752945] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.868 qpair failed and we were unable to recover it. 00:30:03.868 [2024-07-14 04:02:22.753279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.868 [2024-07-14 04:02:22.753683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.868 [2024-07-14 04:02:22.753740] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.868 qpair failed and we were unable to recover it. 00:30:03.868 [2024-07-14 04:02:22.753916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.868 [2024-07-14 04:02:22.754090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.868 [2024-07-14 04:02:22.754121] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.868 qpair failed and we were unable to recover it. 
00:30:03.868 [2024-07-14 04:02:22.754296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.868 [2024-07-14 04:02:22.754483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.868 [2024-07-14 04:02:22.754527] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.868 qpair failed and we were unable to recover it. 00:30:03.868 [2024-07-14 04:02:22.754748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.868 [2024-07-14 04:02:22.754940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.868 [2024-07-14 04:02:22.754968] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.868 qpair failed and we were unable to recover it. 00:30:03.868 [2024-07-14 04:02:22.755170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.868 [2024-07-14 04:02:22.755321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.868 [2024-07-14 04:02:22.755346] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.868 qpair failed and we were unable to recover it. 00:30:03.868 [2024-07-14 04:02:22.755525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.868 [2024-07-14 04:02:22.755766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.868 [2024-07-14 04:02:22.755791] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.868 qpair failed and we were unable to recover it. 00:30:03.868 [2024-07-14 04:02:22.755946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.868 [2024-07-14 04:02:22.756210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.868 [2024-07-14 04:02:22.756260] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.868 qpair failed and we were unable to recover it. 00:30:03.868 [2024-07-14 04:02:22.756484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.868 [2024-07-14 04:02:22.756793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.868 [2024-07-14 04:02:22.756845] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.868 qpair failed and we were unable to recover it. 00:30:03.868 [2024-07-14 04:02:22.757028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.868 [2024-07-14 04:02:22.757227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.868 [2024-07-14 04:02:22.757254] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.868 qpair failed and we were unable to recover it. 
00:30:03.868 [2024-07-14 04:02:22.757483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.868 [2024-07-14 04:02:22.757753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.868 [2024-07-14 04:02:22.757801] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.868 qpair failed and we were unable to recover it. 00:30:03.868 [2024-07-14 04:02:22.758009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.868 [2024-07-14 04:02:22.758191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.868 [2024-07-14 04:02:22.758218] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.868 qpair failed and we were unable to recover it. 00:30:03.868 [2024-07-14 04:02:22.758445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.868 [2024-07-14 04:02:22.758788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.868 [2024-07-14 04:02:22.758853] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.868 qpair failed and we were unable to recover it. 00:30:03.868 [2024-07-14 04:02:22.759063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.868 [2024-07-14 04:02:22.759338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.868 [2024-07-14 04:02:22.759385] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.868 qpair failed and we were unable to recover it. 00:30:03.868 [2024-07-14 04:02:22.759609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.868 [2024-07-14 04:02:22.759805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.868 [2024-07-14 04:02:22.759834] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.868 qpair failed and we were unable to recover it. 00:30:03.868 [2024-07-14 04:02:22.760037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.868 [2024-07-14 04:02:22.760358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.868 [2024-07-14 04:02:22.760413] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.868 qpair failed and we were unable to recover it. 00:30:03.868 [2024-07-14 04:02:22.760607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.868 [2024-07-14 04:02:22.760841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.868 [2024-07-14 04:02:22.760872] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.868 qpair failed and we were unable to recover it. 
00:30:03.868 [2024-07-14 04:02:22.761074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.868 [2024-07-14 04:02:22.761430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.868 [2024-07-14 04:02:22.761473] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.868 qpair failed and we were unable to recover it. 00:30:03.868 [2024-07-14 04:02:22.761712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.868 [2024-07-14 04:02:22.761914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.868 [2024-07-14 04:02:22.761941] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.868 qpair failed and we were unable to recover it. 00:30:03.868 [2024-07-14 04:02:22.762145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.868 [2024-07-14 04:02:22.762327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.868 [2024-07-14 04:02:22.762353] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.868 qpair failed and we were unable to recover it. 00:30:03.868 [2024-07-14 04:02:22.762582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.868 [2024-07-14 04:02:22.762773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.868 [2024-07-14 04:02:22.762801] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.868 qpair failed and we were unable to recover it. 00:30:03.868 [2024-07-14 04:02:22.762966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.868 [2024-07-14 04:02:22.763164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.868 [2024-07-14 04:02:22.763192] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.868 qpair failed and we were unable to recover it. 00:30:03.868 [2024-07-14 04:02:22.763389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.868 [2024-07-14 04:02:22.763563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.869 [2024-07-14 04:02:22.763591] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.869 qpair failed and we were unable to recover it. 00:30:03.869 [2024-07-14 04:02:22.763745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.869 [2024-07-14 04:02:22.763932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.869 [2024-07-14 04:02:22.763957] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.869 qpair failed and we were unable to recover it. 
00:30:03.869 [2024-07-14 04:02:22.764164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.869 [2024-07-14 04:02:22.764384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.869 [2024-07-14 04:02:22.764436] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.869 qpair failed and we were unable to recover it. 00:30:03.869 [2024-07-14 04:02:22.764681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.869 [2024-07-14 04:02:22.764958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.869 [2024-07-14 04:02:22.765015] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.869 qpair failed and we were unable to recover it. 00:30:03.869 [2024-07-14 04:02:22.765210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.869 [2024-07-14 04:02:22.765429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.869 [2024-07-14 04:02:22.765457] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.869 qpair failed and we were unable to recover it. 00:30:03.869 [2024-07-14 04:02:22.765634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.869 [2024-07-14 04:02:22.765804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.869 [2024-07-14 04:02:22.765829] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.869 qpair failed and we were unable to recover it. 00:30:03.869 [2024-07-14 04:02:22.766040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.869 [2024-07-14 04:02:22.766236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.869 [2024-07-14 04:02:22.766263] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.869 qpair failed and we were unable to recover it. 00:30:03.869 [2024-07-14 04:02:22.766486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.869 [2024-07-14 04:02:22.766731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.869 [2024-07-14 04:02:22.766758] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.869 qpair failed and we were unable to recover it. 00:30:03.869 [2024-07-14 04:02:22.766958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.869 [2024-07-14 04:02:22.767187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.869 [2024-07-14 04:02:22.767215] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.869 qpair failed and we were unable to recover it. 
00:30:03.869 [2024-07-14 04:02:22.767419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.869 [2024-07-14 04:02:22.767645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.869 [2024-07-14 04:02:22.767672] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.869 qpair failed and we were unable to recover it. 00:30:03.869 [2024-07-14 04:02:22.767916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.869 [2024-07-14 04:02:22.768089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.869 [2024-07-14 04:02:22.768117] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.869 qpair failed and we were unable to recover it. 00:30:03.869 [2024-07-14 04:02:22.768322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.869 [2024-07-14 04:02:22.768463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.869 [2024-07-14 04:02:22.768488] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.869 qpair failed and we were unable to recover it. 00:30:03.869 [2024-07-14 04:02:22.768640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.869 [2024-07-14 04:02:22.768817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.869 [2024-07-14 04:02:22.768844] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.869 qpair failed and we were unable to recover it. 00:30:03.869 [2024-07-14 04:02:22.769040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.869 [2024-07-14 04:02:22.769198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.869 [2024-07-14 04:02:22.769222] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.869 qpair failed and we were unable to recover it. 00:30:03.869 [2024-07-14 04:02:22.769382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.869 [2024-07-14 04:02:22.769541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.869 [2024-07-14 04:02:22.769567] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.869 qpair failed and we were unable to recover it. 00:30:03.869 [2024-07-14 04:02:22.769793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.869 [2024-07-14 04:02:22.769964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.869 [2024-07-14 04:02:22.769992] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.869 qpair failed and we were unable to recover it. 
00:30:03.869 [2024-07-14 04:02:22.770183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.869 [2024-07-14 04:02:22.770406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.869 [2024-07-14 04:02:22.770433] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.869 qpair failed and we were unable to recover it. 00:30:03.869 [2024-07-14 04:02:22.770654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.869 [2024-07-14 04:02:22.770880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.869 [2024-07-14 04:02:22.770909] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.869 qpair failed and we were unable to recover it. 00:30:03.869 [2024-07-14 04:02:22.771114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.869 [2024-07-14 04:02:22.771365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.869 [2024-07-14 04:02:22.771390] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.869 qpair failed and we were unable to recover it. 00:30:03.869 [2024-07-14 04:02:22.771640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.869 [2024-07-14 04:02:22.771838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.869 [2024-07-14 04:02:22.771880] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.869 qpair failed and we were unable to recover it. 00:30:03.869 [2024-07-14 04:02:22.772098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.869 [2024-07-14 04:02:22.772320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.869 [2024-07-14 04:02:22.772347] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.869 qpair failed and we were unable to recover it. 00:30:03.869 [2024-07-14 04:02:22.772545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.869 [2024-07-14 04:02:22.772789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.869 [2024-07-14 04:02:22.772840] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:03.869 qpair failed and we were unable to recover it. 00:30:03.869 [2024-07-14 04:02:22.773071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.142 [2024-07-14 04:02:22.773297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.142 [2024-07-14 04:02:22.773323] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:04.142 qpair failed and we were unable to recover it. 
00:30:04.142 [2024-07-14 04:02:22.773575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.142 [2024-07-14 04:02:22.773819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.142 [2024-07-14 04:02:22.773846] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:04.142 qpair failed and we were unable to recover it. 00:30:04.142 [2024-07-14 04:02:22.774020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.142 [2024-07-14 04:02:22.774213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.142 [2024-07-14 04:02:22.774241] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:04.142 qpair failed and we were unable to recover it. 00:30:04.142 [2024-07-14 04:02:22.774413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.142 [2024-07-14 04:02:22.774587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.142 [2024-07-14 04:02:22.774611] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:04.142 qpair failed and we were unable to recover it. 00:30:04.142 [2024-07-14 04:02:22.774792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.142 [2024-07-14 04:02:22.774940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.142 [2024-07-14 04:02:22.774966] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:04.142 qpair failed and we were unable to recover it. 00:30:04.142 [2024-07-14 04:02:22.775202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.142 [2024-07-14 04:02:22.775447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.142 [2024-07-14 04:02:22.775512] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:04.142 qpair failed and we were unable to recover it. 00:30:04.142 [2024-07-14 04:02:22.775673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.142 [2024-07-14 04:02:22.775877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.143 [2024-07-14 04:02:22.775902] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:04.143 qpair failed and we were unable to recover it. 00:30:04.143 [2024-07-14 04:02:22.776104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.143 [2024-07-14 04:02:22.776280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.143 [2024-07-14 04:02:22.776307] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:04.143 qpair failed and we were unable to recover it. 
00:30:04.143 [2024-07-14 04:02:22.776502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.143 [2024-07-14 04:02:22.776806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.143 [2024-07-14 04:02:22.776861] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:04.143 qpair failed and we were unable to recover it. 00:30:04.143 [2024-07-14 04:02:22.777099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.143 [2024-07-14 04:02:22.777287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.143 [2024-07-14 04:02:22.777315] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:04.143 qpair failed and we were unable to recover it. 00:30:04.143 [2024-07-14 04:02:22.777520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.143 [2024-07-14 04:02:22.777744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.143 [2024-07-14 04:02:22.777768] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:04.143 qpair failed and we were unable to recover it. 00:30:04.143 [2024-07-14 04:02:22.777946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.143 [2024-07-14 04:02:22.778095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.143 [2024-07-14 04:02:22.778120] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:04.143 qpair failed and we were unable to recover it. 00:30:04.143 [2024-07-14 04:02:22.778331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.143 [2024-07-14 04:02:22.778538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.143 [2024-07-14 04:02:22.778562] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:04.143 qpair failed and we were unable to recover it. 00:30:04.143 [2024-07-14 04:02:22.778759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.143 [2024-07-14 04:02:22.778986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.143 [2024-07-14 04:02:22.779014] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:04.143 qpair failed and we were unable to recover it. 00:30:04.143 [2024-07-14 04:02:22.779229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.143 [2024-07-14 04:02:22.779440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.143 [2024-07-14 04:02:22.779465] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:04.143 qpair failed and we were unable to recover it. 
00:30:04.143 [2024-07-14 04:02:22.779642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.143 [2024-07-14 04:02:22.779850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.143 [2024-07-14 04:02:22.779887] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:04.143 qpair failed and we were unable to recover it. 00:30:04.143 [2024-07-14 04:02:22.780083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.143 [2024-07-14 04:02:22.780245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.143 [2024-07-14 04:02:22.780271] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:04.143 qpair failed and we were unable to recover it. 00:30:04.143 [2024-07-14 04:02:22.780422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.143 [2024-07-14 04:02:22.780574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.143 [2024-07-14 04:02:22.780617] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:04.143 qpair failed and we were unable to recover it. 00:30:04.143 [2024-07-14 04:02:22.780842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.143 [2024-07-14 04:02:22.781057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.143 [2024-07-14 04:02:22.781087] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:04.143 qpair failed and we were unable to recover it. 00:30:04.143 [2024-07-14 04:02:22.781274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.143 [2024-07-14 04:02:22.781480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.143 [2024-07-14 04:02:22.781506] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:04.143 qpair failed and we were unable to recover it. 00:30:04.143 [2024-07-14 04:02:22.781712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.143 [2024-07-14 04:02:22.781894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.143 [2024-07-14 04:02:22.781921] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:04.143 qpair failed and we were unable to recover it. 00:30:04.143 [2024-07-14 04:02:22.782136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.143 [2024-07-14 04:02:22.782482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.143 [2024-07-14 04:02:22.782530] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:04.143 qpair failed and we were unable to recover it. 
00:30:04.143 [2024-07-14 04:02:22.782754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.143 [2024-07-14 04:02:22.782980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.143 [2024-07-14 04:02:22.783009] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:04.143 qpair failed and we were unable to recover it. 00:30:04.143 [2024-07-14 04:02:22.783207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.143 [2024-07-14 04:02:22.783462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.143 [2024-07-14 04:02:22.783517] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:04.143 qpair failed and we were unable to recover it. 00:30:04.143 [2024-07-14 04:02:22.783717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.143 [2024-07-14 04:02:22.783875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.143 [2024-07-14 04:02:22.783915] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:04.143 qpair failed and we were unable to recover it. 00:30:04.143 [2024-07-14 04:02:22.784109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.143 [2024-07-14 04:02:22.784488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.143 [2024-07-14 04:02:22.784540] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:04.143 qpair failed and we were unable to recover it. 00:30:04.143 [2024-07-14 04:02:22.784741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.143 [2024-07-14 04:02:22.784969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.143 [2024-07-14 04:02:22.784999] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:04.143 qpair failed and we were unable to recover it. 00:30:04.143 [2024-07-14 04:02:22.785212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.143 [2024-07-14 04:02:22.785393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.143 [2024-07-14 04:02:22.785419] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:04.143 qpair failed and we were unable to recover it. 00:30:04.143 [2024-07-14 04:02:22.785623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.143 [2024-07-14 04:02:22.785828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.143 [2024-07-14 04:02:22.785857] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:04.143 qpair failed and we were unable to recover it. 
00:30:04.143 [2024-07-14 04:02:22.786083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.143 [2024-07-14 04:02:22.786304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.143 [2024-07-14 04:02:22.786330] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:04.143 qpair failed and we were unable to recover it. 00:30:04.143 [2024-07-14 04:02:22.786510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.143 [2024-07-14 04:02:22.786757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.143 [2024-07-14 04:02:22.786785] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:04.143 qpair failed and we were unable to recover it. 00:30:04.143 [2024-07-14 04:02:22.787018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.143 [2024-07-14 04:02:22.787196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.143 [2024-07-14 04:02:22.787222] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:04.143 qpair failed and we were unable to recover it. 00:30:04.143 [2024-07-14 04:02:22.787399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.143 [2024-07-14 04:02:22.787554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.143 [2024-07-14 04:02:22.787580] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:04.143 qpair failed and we were unable to recover it. 00:30:04.143 [2024-07-14 04:02:22.787737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.143 [2024-07-14 04:02:22.787941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.143 [2024-07-14 04:02:22.787981] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:04.143 qpair failed and we were unable to recover it. 00:30:04.143 [2024-07-14 04:02:22.788201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.143 [2024-07-14 04:02:22.788466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.143 [2024-07-14 04:02:22.788496] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:04.143 qpair failed and we were unable to recover it. 00:30:04.143 [2024-07-14 04:02:22.788714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.143 [2024-07-14 04:02:22.788919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.143 [2024-07-14 04:02:22.788946] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:04.143 qpair failed and we were unable to recover it. 
00:30:04.143 [2024-07-14 04:02:22.789136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.143 [2024-07-14 04:02:22.789367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.144 [2024-07-14 04:02:22.789396] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:04.144 qpair failed and we were unable to recover it. 00:30:04.144 [2024-07-14 04:02:22.789618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.144 [2024-07-14 04:02:22.789791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.144 [2024-07-14 04:02:22.789817] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:04.144 qpair failed and we were unable to recover it. 00:30:04.144 [2024-07-14 04:02:22.789997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.144 [2024-07-14 04:02:22.790223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.144 [2024-07-14 04:02:22.790252] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:04.144 qpair failed and we were unable to recover it. 00:30:04.144 [2024-07-14 04:02:22.790466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.144 [2024-07-14 04:02:22.790745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.144 [2024-07-14 04:02:22.790804] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:04.144 qpair failed and we were unable to recover it. 00:30:04.144 [2024-07-14 04:02:22.790978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.144 [2024-07-14 04:02:22.791192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.144 [2024-07-14 04:02:22.791219] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:04.144 qpair failed and we were unable to recover it. 00:30:04.144 [2024-07-14 04:02:22.791402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.144 [2024-07-14 04:02:22.791583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.144 [2024-07-14 04:02:22.791626] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:04.144 qpair failed and we were unable to recover it. 00:30:04.144 [2024-07-14 04:02:22.791850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.144 [2024-07-14 04:02:22.792052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.144 [2024-07-14 04:02:22.792082] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:04.144 qpair failed and we were unable to recover it. 
00:30:04.144 [2024-07-14 04:02:22.792252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.144 [2024-07-14 04:02:22.792459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.144 [2024-07-14 04:02:22.792485] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:04.144 qpair failed and we were unable to recover it. 00:30:04.144 [2024-07-14 04:02:22.792688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.144 [2024-07-14 04:02:22.792872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.144 [2024-07-14 04:02:22.792899] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:04.144 qpair failed and we were unable to recover it. 00:30:04.144 [2024-07-14 04:02:22.793089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.144 [2024-07-14 04:02:22.793337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.144 [2024-07-14 04:02:22.793399] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:04.144 qpair failed and we were unable to recover it. 00:30:04.144 [2024-07-14 04:02:22.793630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.144 [2024-07-14 04:02:22.793823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.144 [2024-07-14 04:02:22.793851] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:04.144 qpair failed and we were unable to recover it. 00:30:04.144 [2024-07-14 04:02:22.794098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.144 [2024-07-14 04:02:22.794253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.144 [2024-07-14 04:02:22.794279] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:04.144 qpair failed and we were unable to recover it. 00:30:04.144 [2024-07-14 04:02:22.794482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.144 [2024-07-14 04:02:22.794637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.144 [2024-07-14 04:02:22.794663] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:04.144 qpair failed and we were unable to recover it. 00:30:04.144 [2024-07-14 04:02:22.794900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.144 [2024-07-14 04:02:22.795109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.144 [2024-07-14 04:02:22.795138] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:04.144 qpair failed and we were unable to recover it. 
00:30:04.144 [2024-07-14 04:02:22.795361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.144 [2024-07-14 04:02:22.795585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.144 [2024-07-14 04:02:22.795614] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:04.144 qpair failed and we were unable to recover it. 00:30:04.144 [2024-07-14 04:02:22.795795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.144 [2024-07-14 04:02:22.796019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.144 [2024-07-14 04:02:22.796047] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:04.144 qpair failed and we were unable to recover it. 00:30:04.144 [2024-07-14 04:02:22.796227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.144 [2024-07-14 04:02:22.796426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.144 [2024-07-14 04:02:22.796453] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:04.144 qpair failed and we were unable to recover it. 00:30:04.144 [2024-07-14 04:02:22.796607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.144 [2024-07-14 04:02:22.796784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.144 [2024-07-14 04:02:22.796811] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:04.144 qpair failed and we were unable to recover it. 00:30:04.144 [2024-07-14 04:02:22.796990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.144 [2024-07-14 04:02:22.797145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.144 [2024-07-14 04:02:22.797171] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:04.144 qpair failed and we were unable to recover it. 00:30:04.144 [2024-07-14 04:02:22.797374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.144 [2024-07-14 04:02:22.797662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.144 [2024-07-14 04:02:22.797726] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:04.144 qpair failed and we were unable to recover it. 00:30:04.144 [2024-07-14 04:02:22.797938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.144 [2024-07-14 04:02:22.798131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.144 [2024-07-14 04:02:22.798159] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:04.144 qpair failed and we were unable to recover it. 
00:30:04.144 [2024-07-14 04:02:22.798548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.144 [2024-07-14 04:02:22.798802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.144 [2024-07-14 04:02:22.798831] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:04.144 qpair failed and we were unable to recover it. 00:30:04.144 [2024-07-14 04:02:22.799067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.144 [2024-07-14 04:02:22.799241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.144 [2024-07-14 04:02:22.799268] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:04.144 qpair failed and we were unable to recover it. 00:30:04.144 [2024-07-14 04:02:22.799450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.144 [2024-07-14 04:02:22.799716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.144 [2024-07-14 04:02:22.799768] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:04.144 qpair failed and we were unable to recover it. 00:30:04.144 [2024-07-14 04:02:22.799963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.144 [2024-07-14 04:02:22.800215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.144 [2024-07-14 04:02:22.800273] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:04.144 qpair failed and we were unable to recover it. 00:30:04.144 [2024-07-14 04:02:22.800585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.144 [2024-07-14 04:02:22.800812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.144 [2024-07-14 04:02:22.800840] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:04.144 qpair failed and we were unable to recover it. 00:30:04.144 [2024-07-14 04:02:22.801075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.144 [2024-07-14 04:02:22.801261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.144 [2024-07-14 04:02:22.801289] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:04.144 qpair failed and we were unable to recover it. 00:30:04.144 [2024-07-14 04:02:22.801491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.144 [2024-07-14 04:02:22.801643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.144 [2024-07-14 04:02:22.801670] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:04.144 qpair failed and we were unable to recover it. 
00:30:04.144 [2024-07-14 04:02:22.801847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.144 [2024-07-14 04:02:22.802042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.144 [2024-07-14 04:02:22.802069] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:04.144 qpair failed and we were unable to recover it. 00:30:04.144 [2024-07-14 04:02:22.802255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.144 [2024-07-14 04:02:22.802410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.144 [2024-07-14 04:02:22.802437] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:04.144 qpair failed and we were unable to recover it. 00:30:04.144 [2024-07-14 04:02:22.802656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.144 [2024-07-14 04:02:22.802883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.145 [2024-07-14 04:02:22.802912] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:04.145 qpair failed and we were unable to recover it. 00:30:04.145 [2024-07-14 04:02:22.803121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.145 [2024-07-14 04:02:22.803299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.145 [2024-07-14 04:02:22.803325] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:04.145 qpair failed and we were unable to recover it. 00:30:04.145 [2024-07-14 04:02:22.803559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.145 [2024-07-14 04:02:22.803765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.145 [2024-07-14 04:02:22.803794] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:04.145 qpair failed and we were unable to recover it. 00:30:04.145 [2024-07-14 04:02:22.803994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.145 [2024-07-14 04:02:22.804205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.145 [2024-07-14 04:02:22.804233] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:04.145 qpair failed and we were unable to recover it. 00:30:04.145 [2024-07-14 04:02:22.804434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.145 [2024-07-14 04:02:22.804680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.145 [2024-07-14 04:02:22.804734] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:04.145 qpair failed and we were unable to recover it. 
00:30:04.145 [2024-07-14 04:02:22.804964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.145 [2024-07-14 04:02:22.805191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.145 [2024-07-14 04:02:22.805220] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:04.145 qpair failed and we were unable to recover it. 00:30:04.145 [2024-07-14 04:02:22.805442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.145 [2024-07-14 04:02:22.805641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.145 [2024-07-14 04:02:22.805667] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:04.145 qpair failed and we were unable to recover it. 00:30:04.145 [2024-07-14 04:02:22.805863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.145 [2024-07-14 04:02:22.806095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.145 [2024-07-14 04:02:22.806125] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:04.145 qpair failed and we were unable to recover it. 00:30:04.145 [2024-07-14 04:02:22.806303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.145 [2024-07-14 04:02:22.806508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.145 [2024-07-14 04:02:22.806535] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:04.145 qpair failed and we were unable to recover it. 00:30:04.145 [2024-07-14 04:02:22.806776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.145 [2024-07-14 04:02:22.807007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.145 [2024-07-14 04:02:22.807037] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:04.145 qpair failed and we were unable to recover it. 00:30:04.145 [2024-07-14 04:02:22.807237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.145 [2024-07-14 04:02:22.807464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.145 [2024-07-14 04:02:22.807490] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:04.145 qpair failed and we were unable to recover it. 00:30:04.145 [2024-07-14 04:02:22.807645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.145 [2024-07-14 04:02:22.807825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.145 [2024-07-14 04:02:22.807852] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:04.145 qpair failed and we were unable to recover it. 
00:30:04.145 [2024-07-14 04:02:22.808020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.145 [2024-07-14 04:02:22.808200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.145 [2024-07-14 04:02:22.808227] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:04.145 qpair failed and we were unable to recover it. 00:30:04.145 [2024-07-14 04:02:22.808472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.145 [2024-07-14 04:02:22.808680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.145 [2024-07-14 04:02:22.808707] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:04.145 qpair failed and we were unable to recover it. 00:30:04.145 [2024-07-14 04:02:22.808872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.145 [2024-07-14 04:02:22.809079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.145 [2024-07-14 04:02:22.809105] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:04.145 qpair failed and we were unable to recover it. 00:30:04.145 [2024-07-14 04:02:22.809285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.145 [2024-07-14 04:02:22.809692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.145 [2024-07-14 04:02:22.809751] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:04.145 qpair failed and we were unable to recover it. 00:30:04.145 [2024-07-14 04:02:22.809919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.145 [2024-07-14 04:02:22.810112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.145 [2024-07-14 04:02:22.810139] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:04.145 qpair failed and we were unable to recover it. 00:30:04.145 [2024-07-14 04:02:22.810340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.145 [2024-07-14 04:02:22.810520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.145 [2024-07-14 04:02:22.810547] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:04.145 qpair failed and we were unable to recover it. 00:30:04.145 [2024-07-14 04:02:22.810742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.145 [2024-07-14 04:02:22.810956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.145 [2024-07-14 04:02:22.810986] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:04.145 qpair failed and we were unable to recover it. 
00:30:04.145 [2024-07-14 04:02:22.811181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.145 [2024-07-14 04:02:22.811407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.145 [2024-07-14 04:02:22.811436] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:04.145 qpair failed and we were unable to recover it. 00:30:04.145 [2024-07-14 04:02:22.811609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.145 [2024-07-14 04:02:22.811771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.145 [2024-07-14 04:02:22.811800] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:04.145 qpair failed and we were unable to recover it. 00:30:04.145 [2024-07-14 04:02:22.812006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.145 [2024-07-14 04:02:22.812237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.145 [2024-07-14 04:02:22.812296] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:04.145 qpair failed and we were unable to recover it. 00:30:04.145 [2024-07-14 04:02:22.812471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.145 [2024-07-14 04:02:22.812660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.145 [2024-07-14 04:02:22.812689] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:04.145 qpair failed and we were unable to recover it. 00:30:04.145 [2024-07-14 04:02:22.812916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.145 [2024-07-14 04:02:22.813151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.145 [2024-07-14 04:02:22.813181] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:04.145 qpair failed and we were unable to recover it. 00:30:04.145 [2024-07-14 04:02:22.813412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.145 [2024-07-14 04:02:22.813619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.145 [2024-07-14 04:02:22.813645] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:04.145 qpair failed and we were unable to recover it. 00:30:04.145 [2024-07-14 04:02:22.813827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.145 [2024-07-14 04:02:22.814035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.145 [2024-07-14 04:02:22.814066] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:04.145 qpair failed and we were unable to recover it. 
00:30:04.145 [2024-07-14 04:02:22.814250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.145 [2024-07-14 04:02:22.814566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.145 [2024-07-14 04:02:22.814617] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:04.145 qpair failed and we were unable to recover it. 00:30:04.145 [2024-07-14 04:02:22.814851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.145 [2024-07-14 04:02:22.815078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.145 [2024-07-14 04:02:22.815105] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:04.145 qpair failed and we were unable to recover it. 00:30:04.145 [2024-07-14 04:02:22.815296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.145 [2024-07-14 04:02:22.815518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.145 [2024-07-14 04:02:22.815548] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:04.145 qpair failed and we were unable to recover it. 00:30:04.145 [2024-07-14 04:02:22.815759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.145 [2024-07-14 04:02:22.815937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.145 [2024-07-14 04:02:22.815965] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:04.145 qpair failed and we were unable to recover it. 00:30:04.145 [2024-07-14 04:02:22.816118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.145 [2024-07-14 04:02:22.816300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.145 [2024-07-14 04:02:22.816326] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:04.145 qpair failed and we were unable to recover it. 00:30:04.146 [2024-07-14 04:02:22.816550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.146 [2024-07-14 04:02:22.816726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.146 [2024-07-14 04:02:22.816755] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:04.146 qpair failed and we were unable to recover it. 00:30:04.146 [2024-07-14 04:02:22.816977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.146 [2024-07-14 04:02:22.817173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.146 [2024-07-14 04:02:22.817200] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:04.146 qpair failed and we were unable to recover it. 
00:30:04.146 [2024-07-14 04:02:22.817379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.146 [2024-07-14 04:02:22.817562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.146 [2024-07-14 04:02:22.817589] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:04.146 qpair failed and we were unable to recover it. 00:30:04.146 [2024-07-14 04:02:22.817771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.146 [2024-07-14 04:02:22.817953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.146 [2024-07-14 04:02:22.817980] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:04.146 qpair failed and we were unable to recover it. 00:30:04.146 [2024-07-14 04:02:22.818185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.146 [2024-07-14 04:02:22.818548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.146 [2024-07-14 04:02:22.818600] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:04.146 qpair failed and we were unable to recover it. 00:30:04.146 [2024-07-14 04:02:22.818795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.146 [2024-07-14 04:02:22.818980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.146 [2024-07-14 04:02:22.819009] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:04.146 qpair failed and we were unable to recover it. 00:30:04.146 [2024-07-14 04:02:22.819208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.146 [2024-07-14 04:02:22.819470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.146 [2024-07-14 04:02:22.819522] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:04.146 qpair failed and we were unable to recover it. 00:30:04.146 [2024-07-14 04:02:22.819716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.146 [2024-07-14 04:02:22.819890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.146 [2024-07-14 04:02:22.819918] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:04.146 qpair failed and we were unable to recover it. 00:30:04.146 [2024-07-14 04:02:22.820124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.146 [2024-07-14 04:02:22.820306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.146 [2024-07-14 04:02:22.820350] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:04.146 qpair failed and we were unable to recover it. 
00:30:04.146 [2024-07-14 04:02:22.820573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.146 [2024-07-14 04:02:22.820741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.146 [2024-07-14 04:02:22.820770] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:04.146 qpair failed and we were unable to recover it. 00:30:04.146 [2024-07-14 04:02:22.820935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.146 [2024-07-14 04:02:22.821104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.146 [2024-07-14 04:02:22.821130] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:04.146 qpair failed and we were unable to recover it. 00:30:04.146 [2024-07-14 04:02:22.821277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.146 [2024-07-14 04:02:22.821433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.146 [2024-07-14 04:02:22.821460] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:04.146 qpair failed and we were unable to recover it. 00:30:04.146 [2024-07-14 04:02:22.821617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.146 [2024-07-14 04:02:22.821802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.146 [2024-07-14 04:02:22.821833] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:04.146 qpair failed and we were unable to recover it. 00:30:04.146 [2024-07-14 04:02:22.822000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.146 [2024-07-14 04:02:22.822177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.146 [2024-07-14 04:02:22.822203] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:04.146 qpair failed and we were unable to recover it. 00:30:04.146 [2024-07-14 04:02:22.822380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.146 [2024-07-14 04:02:22.822527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.146 [2024-07-14 04:02:22.822554] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:04.146 qpair failed and we were unable to recover it. 00:30:04.146 [2024-07-14 04:02:22.822726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.146 [2024-07-14 04:02:22.822892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.146 [2024-07-14 04:02:22.822920] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:04.146 qpair failed and we were unable to recover it. 
00:30:04.146 [2024-07-14 04:02:22.823067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.146 [2024-07-14 04:02:22.823249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.146 [2024-07-14 04:02:22.823275] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:04.146 qpair failed and we were unable to recover it. 00:30:04.146 [2024-07-14 04:02:22.823421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.146 [2024-07-14 04:02:22.823627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.146 [2024-07-14 04:02:22.823654] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:04.146 qpair failed and we were unable to recover it. 00:30:04.146 [2024-07-14 04:02:22.823835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.146 [2024-07-14 04:02:22.824038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.146 [2024-07-14 04:02:22.824068] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:04.146 qpair failed and we were unable to recover it. 00:30:04.146 [2024-07-14 04:02:22.824240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.146 [2024-07-14 04:02:22.824431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.146 [2024-07-14 04:02:22.824460] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:04.146 qpair failed and we were unable to recover it. 00:30:04.146 [2024-07-14 04:02:22.824657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.146 [2024-07-14 04:02:22.824847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.146 [2024-07-14 04:02:22.824885] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:04.146 qpair failed and we were unable to recover it. 00:30:04.146 [2024-07-14 04:02:22.825086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.146 [2024-07-14 04:02:22.825287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.146 [2024-07-14 04:02:22.825317] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:04.146 qpair failed and we were unable to recover it. 00:30:04.146 [2024-07-14 04:02:22.825521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.146 [2024-07-14 04:02:22.825680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.146 [2024-07-14 04:02:22.825710] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:04.146 qpair failed and we were unable to recover it. 
00:30:04.146 [2024-07-14 04:02:22.825891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.146 [2024-07-14 04:02:22.826048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.146 [2024-07-14 04:02:22.826075] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:04.146 qpair failed and we were unable to recover it. 00:30:04.146 [2024-07-14 04:02:22.826226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.146 [2024-07-14 04:02:22.826381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.146 [2024-07-14 04:02:22.826407] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:04.146 qpair failed and we were unable to recover it. 00:30:04.146 [2024-07-14 04:02:22.826580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.146 [2024-07-14 04:02:22.826769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.146 [2024-07-14 04:02:22.826798] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:04.146 qpair failed and we were unable to recover it. 00:30:04.146 [2024-07-14 04:02:22.827023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.146 [2024-07-14 04:02:22.827171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.146 [2024-07-14 04:02:22.827198] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:04.146 qpair failed and we were unable to recover it. 00:30:04.146 [2024-07-14 04:02:22.827356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.146 [2024-07-14 04:02:22.827540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.146 [2024-07-14 04:02:22.827566] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:04.146 qpair failed and we were unable to recover it. 00:30:04.146 [2024-07-14 04:02:22.827752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.146 [2024-07-14 04:02:22.827909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.146 [2024-07-14 04:02:22.827937] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:04.146 qpair failed and we were unable to recover it. 00:30:04.146 [2024-07-14 04:02:22.828098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.146 [2024-07-14 04:02:22.828304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.146 [2024-07-14 04:02:22.828330] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:04.146 qpair failed and we were unable to recover it. 
00:30:04.147 [2024-07-14 04:02:22.828510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.147 [2024-07-14 04:02:22.828710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.147 [2024-07-14 04:02:22.828739] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:04.147 qpair failed and we were unable to recover it. 00:30:04.147 [2024-07-14 04:02:22.828927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.147 [2024-07-14 04:02:22.829088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.147 [2024-07-14 04:02:22.829115] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:04.147 qpair failed and we were unable to recover it. 00:30:04.147 [2024-07-14 04:02:22.829333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.147 [2024-07-14 04:02:22.829549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.147 [2024-07-14 04:02:22.829600] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:04.147 qpair failed and we were unable to recover it. 00:30:04.147 [2024-07-14 04:02:22.829795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.147 [2024-07-14 04:02:22.830022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.147 [2024-07-14 04:02:22.830051] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:04.147 qpair failed and we were unable to recover it. 00:30:04.147 [2024-07-14 04:02:22.830233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.147 [2024-07-14 04:02:22.830411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.147 [2024-07-14 04:02:22.830438] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:04.147 qpair failed and we were unable to recover it. 00:30:04.147 [2024-07-14 04:02:22.830620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.147 [2024-07-14 04:02:22.830825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.147 [2024-07-14 04:02:22.830851] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:04.147 qpair failed and we were unable to recover it. 00:30:04.147 [2024-07-14 04:02:22.831042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.147 [2024-07-14 04:02:22.831192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.147 [2024-07-14 04:02:22.831219] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:04.147 qpair failed and we were unable to recover it. 
00:30:04.147 [2024-07-14 04:02:22.831398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.147 [2024-07-14 04:02:22.831586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.147 [2024-07-14 04:02:22.831615] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:04.147 qpair failed and we were unable to recover it. 00:30:04.147 [2024-07-14 04:02:22.831812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.147 [2024-07-14 04:02:22.832006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.147 [2024-07-14 04:02:22.832033] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:04.147 qpair failed and we were unable to recover it. 00:30:04.147 [2024-07-14 04:02:22.832182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.147 [2024-07-14 04:02:22.832395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.147 [2024-07-14 04:02:22.832446] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:04.147 qpair failed and we were unable to recover it. 00:30:04.147 [2024-07-14 04:02:22.832643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.147 [2024-07-14 04:02:22.832838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.147 [2024-07-14 04:02:22.832874] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:04.147 qpair failed and we were unable to recover it. 00:30:04.147 [2024-07-14 04:02:22.833051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.147 [2024-07-14 04:02:22.833239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.147 [2024-07-14 04:02:22.833265] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:04.147 qpair failed and we were unable to recover it. 00:30:04.147 [2024-07-14 04:02:22.833448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.147 [2024-07-14 04:02:22.833626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.147 [2024-07-14 04:02:22.833653] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:04.147 qpair failed and we were unable to recover it. 00:30:04.147 [2024-07-14 04:02:22.833855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.147 [2024-07-14 04:02:22.834045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.147 [2024-07-14 04:02:22.834075] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:04.147 qpair failed and we were unable to recover it. 
00:30:04.147 [2024-07-14 04:02:22.834313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.147 [2024-07-14 04:02:22.834561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.147 [2024-07-14 04:02:22.834607] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:04.147 qpair failed and we were unable to recover it. 00:30:04.147 [2024-07-14 04:02:22.834839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.147 [2024-07-14 04:02:22.835058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.147 [2024-07-14 04:02:22.835085] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:04.147 qpair failed and we were unable to recover it. 00:30:04.147 [2024-07-14 04:02:22.835284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.147 [2024-07-14 04:02:22.835486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.147 [2024-07-14 04:02:22.835512] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:04.147 qpair failed and we were unable to recover it. 00:30:04.147 [2024-07-14 04:02:22.835657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.147 [2024-07-14 04:02:22.835834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.147 [2024-07-14 04:02:22.835859] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:04.147 qpair failed and we were unable to recover it. 00:30:04.147 [2024-07-14 04:02:22.836049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.147 [2024-07-14 04:02:22.836233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.147 [2024-07-14 04:02:22.836261] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:04.147 qpair failed and we were unable to recover it. 00:30:04.147 [2024-07-14 04:02:22.836483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.147 [2024-07-14 04:02:22.836662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.147 [2024-07-14 04:02:22.836689] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:04.147 qpair failed and we were unable to recover it. 00:30:04.147 [2024-07-14 04:02:22.836839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.147 [2024-07-14 04:02:22.837020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.147 [2024-07-14 04:02:22.837047] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:04.147 qpair failed and we were unable to recover it. 
00:30:04.147 [2024-07-14 04:02:22.837207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.147 [2024-07-14 04:02:22.837388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.147 [2024-07-14 04:02:22.837414] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:04.147 qpair failed and we were unable to recover it. 00:30:04.147 [2024-07-14 04:02:22.837620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.147 [2024-07-14 04:02:22.837792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.147 [2024-07-14 04:02:22.837818] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:04.147 qpair failed and we were unable to recover it. 00:30:04.148 [2024-07-14 04:02:22.837991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.148 [2024-07-14 04:02:22.838206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.148 [2024-07-14 04:02:22.838236] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:04.148 qpair failed and we were unable to recover it. 00:30:04.148 [2024-07-14 04:02:22.838459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.148 [2024-07-14 04:02:22.838640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.148 [2024-07-14 04:02:22.838667] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:04.148 qpair failed and we were unable to recover it. 00:30:04.148 [2024-07-14 04:02:22.838833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.148 [2024-07-14 04:02:22.839007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.148 [2024-07-14 04:02:22.839035] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:04.148 qpair failed and we were unable to recover it. 00:30:04.148 [2024-07-14 04:02:22.839196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.148 [2024-07-14 04:02:22.839370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.148 [2024-07-14 04:02:22.839396] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:04.148 qpair failed and we were unable to recover it. 00:30:04.148 [2024-07-14 04:02:22.839562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.148 [2024-07-14 04:02:22.839784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.148 [2024-07-14 04:02:22.839813] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:04.148 qpair failed and we were unable to recover it. 
00:30:04.148 [2024-07-14 04:02:22.839986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.148 [2024-07-14 04:02:22.840166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.148 [2024-07-14 04:02:22.840193] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:04.148 qpair failed and we were unable to recover it. 00:30:04.148 [2024-07-14 04:02:22.840402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.148 [2024-07-14 04:02:22.840610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.148 [2024-07-14 04:02:22.840657] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:04.148 qpair failed and we were unable to recover it. 00:30:04.148 [2024-07-14 04:02:22.840827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.148 [2024-07-14 04:02:22.841044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.148 [2024-07-14 04:02:22.841071] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:04.148 qpair failed and we were unable to recover it. 00:30:04.148 [2024-07-14 04:02:22.841257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.148 [2024-07-14 04:02:22.841435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.148 [2024-07-14 04:02:22.841461] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:04.148 qpair failed and we were unable to recover it. 00:30:04.148 [2024-07-14 04:02:22.841610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.148 [2024-07-14 04:02:22.841796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.148 [2024-07-14 04:02:22.841822] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:04.148 qpair failed and we were unable to recover it. 00:30:04.148 [2024-07-14 04:02:22.841984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.148 [2024-07-14 04:02:22.842132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.148 [2024-07-14 04:02:22.842179] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:04.148 qpair failed and we were unable to recover it. 00:30:04.148 [2024-07-14 04:02:22.842362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.148 [2024-07-14 04:02:22.842539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.148 [2024-07-14 04:02:22.842566] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:04.148 qpair failed and we were unable to recover it. 
00:30:04.148 [2024-07-14 04:02:22.842766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.148 [2024-07-14 04:02:22.842916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.148 [2024-07-14 04:02:22.842943] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:04.148 qpair failed and we were unable to recover it. 00:30:04.148 [2024-07-14 04:02:22.843121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.148 [2024-07-14 04:02:22.843348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.148 [2024-07-14 04:02:22.843378] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:04.148 qpair failed and we were unable to recover it. 00:30:04.148 [2024-07-14 04:02:22.843576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.148 [2024-07-14 04:02:22.843776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.148 [2024-07-14 04:02:22.843806] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:04.148 qpair failed and we were unable to recover it. 00:30:04.148 [2024-07-14 04:02:22.843977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.148 [2024-07-14 04:02:22.844131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.148 [2024-07-14 04:02:22.844158] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:04.148 qpair failed and we were unable to recover it. 00:30:04.148 [2024-07-14 04:02:22.844374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.148 [2024-07-14 04:02:22.844650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.148 [2024-07-14 04:02:22.844703] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:04.148 qpair failed and we were unable to recover it. 00:30:04.148 [2024-07-14 04:02:22.844906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.148 [2024-07-14 04:02:22.845088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.148 [2024-07-14 04:02:22.845115] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:04.148 qpair failed and we were unable to recover it. 00:30:04.148 [2024-07-14 04:02:22.845285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.148 [2024-07-14 04:02:22.845505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.148 [2024-07-14 04:02:22.845532] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:04.148 qpair failed and we were unable to recover it. 
00:30:04.148 [2024-07-14 04:02:22.845764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.148 [2024-07-14 04:02:22.845940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.148 [2024-07-14 04:02:22.845970] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:04.148 qpair failed and we were unable to recover it. 00:30:04.148 [2024-07-14 04:02:22.846151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.148 [2024-07-14 04:02:22.846400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.148 [2024-07-14 04:02:22.846448] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:04.148 qpair failed and we were unable to recover it. 00:30:04.148 [2024-07-14 04:02:22.846677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.148 [2024-07-14 04:02:22.846857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.148 [2024-07-14 04:02:22.846891] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:04.148 qpair failed and we were unable to recover it. 00:30:04.148 [2024-07-14 04:02:22.847115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.148 [2024-07-14 04:02:22.847343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.148 [2024-07-14 04:02:22.847392] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:04.148 qpair failed and we were unable to recover it. 00:30:04.148 [2024-07-14 04:02:22.847615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.148 [2024-07-14 04:02:22.847820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.148 [2024-07-14 04:02:22.847846] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:04.148 qpair failed and we were unable to recover it. 00:30:04.148 [2024-07-14 04:02:22.848062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.148 [2024-07-14 04:02:22.848256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.148 [2024-07-14 04:02:22.848303] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:04.148 qpair failed and we were unable to recover it. 00:30:04.148 [2024-07-14 04:02:22.848510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.148 [2024-07-14 04:02:22.848686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.148 [2024-07-14 04:02:22.848712] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:04.148 qpair failed and we were unable to recover it. 
00:30:04.148 [2024-07-14 04:02:22.848889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.148 [2024-07-14 04:02:22.849071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.148 [2024-07-14 04:02:22.849098] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:04.148 qpair failed and we were unable to recover it. 00:30:04.148 [2024-07-14 04:02:22.849325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.148 [2024-07-14 04:02:22.849541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.148 [2024-07-14 04:02:22.849587] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:04.148 qpair failed and we were unable to recover it. 00:30:04.148 [2024-07-14 04:02:22.849776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.148 [2024-07-14 04:02:22.849929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.148 [2024-07-14 04:02:22.849957] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:04.148 qpair failed and we were unable to recover it. 00:30:04.148 [2024-07-14 04:02:22.850140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.148 [2024-07-14 04:02:22.850338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.148 [2024-07-14 04:02:22.850371] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:04.148 qpair failed and we were unable to recover it. 00:30:04.148 [2024-07-14 04:02:22.850607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.149 [2024-07-14 04:02:22.850756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.149 [2024-07-14 04:02:22.850799] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:04.149 qpair failed and we were unable to recover it. 00:30:04.149 [2024-07-14 04:02:22.850994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.149 [2024-07-14 04:02:22.851188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.149 [2024-07-14 04:02:22.851218] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:04.149 qpair failed and we were unable to recover it. 00:30:04.149 [2024-07-14 04:02:22.851425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.149 [2024-07-14 04:02:22.851681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.149 [2024-07-14 04:02:22.851727] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:04.149 qpair failed and we were unable to recover it. 
00:30:04.149 [2024-07-14 04:02:22.851973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.149 [2024-07-14 04:02:22.852177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.149 [2024-07-14 04:02:22.852203] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:04.149 qpair failed and we were unable to recover it. 00:30:04.149 [2024-07-14 04:02:22.852414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.149 [2024-07-14 04:02:22.852590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.149 [2024-07-14 04:02:22.852617] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:04.149 qpair failed and we were unable to recover it. 00:30:04.149 [2024-07-14 04:02:22.852826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.149 [2024-07-14 04:02:22.852998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.149 [2024-07-14 04:02:22.853025] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:04.149 qpair failed and we were unable to recover it. 00:30:04.149 [2024-07-14 04:02:22.853229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.149 [2024-07-14 04:02:22.853493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.149 [2024-07-14 04:02:22.853540] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:04.149 qpair failed and we were unable to recover it. 00:30:04.149 [2024-07-14 04:02:22.853782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.149 [2024-07-14 04:02:22.853995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.149 [2024-07-14 04:02:22.854022] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:04.149 qpair failed and we were unable to recover it. 00:30:04.149 [2024-07-14 04:02:22.854246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.149 [2024-07-14 04:02:22.854458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.149 [2024-07-14 04:02:22.854484] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:04.149 qpair failed and we were unable to recover it. 00:30:04.149 [2024-07-14 04:02:22.854690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.149 [2024-07-14 04:02:22.854875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.149 [2024-07-14 04:02:22.854902] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:04.149 qpair failed and we were unable to recover it. 
00:30:04.149 [2024-07-14 04:02:22.855063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.149 [2024-07-14 04:02:22.855299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.149 [2024-07-14 04:02:22.855345] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:04.149 qpair failed and we were unable to recover it. 00:30:04.149 [2024-07-14 04:02:22.855547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.149 [2024-07-14 04:02:22.855779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.149 [2024-07-14 04:02:22.855828] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:04.149 qpair failed and we were unable to recover it. 00:30:04.149 [2024-07-14 04:02:22.856037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.149 [2024-07-14 04:02:22.856212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.149 [2024-07-14 04:02:22.856241] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:04.149 qpair failed and we were unable to recover it. 00:30:04.149 [2024-07-14 04:02:22.856465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.149 [2024-07-14 04:02:22.856684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.149 [2024-07-14 04:02:22.856713] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:04.149 qpair failed and we were unable to recover it. 00:30:04.149 [2024-07-14 04:02:22.856939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.149 [2024-07-14 04:02:22.857129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.149 [2024-07-14 04:02:22.857158] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:04.149 qpair failed and we were unable to recover it. 00:30:04.149 [2024-07-14 04:02:22.857351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.149 [2024-07-14 04:02:22.857509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.149 [2024-07-14 04:02:22.857536] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:04.149 qpair failed and we were unable to recover it. 00:30:04.149 [2024-07-14 04:02:22.857733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.149 [2024-07-14 04:02:22.857932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.149 [2024-07-14 04:02:22.857962] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:04.149 qpair failed and we were unable to recover it. 
00:30:04.149 [2024-07-14 04:02:22.858163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.149 [2024-07-14 04:02:22.858355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.149 [2024-07-14 04:02:22.858385] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:04.149 qpair failed and we were unable to recover it. 00:30:04.149 [2024-07-14 04:02:22.858586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.149 [2024-07-14 04:02:22.858773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.149 [2024-07-14 04:02:22.858802] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:04.149 qpair failed and we were unable to recover it. 00:30:04.149 [2024-07-14 04:02:22.859027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.149 [2024-07-14 04:02:22.859262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.149 [2024-07-14 04:02:22.859292] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:04.149 qpair failed and we were unable to recover it. 00:30:04.149 [2024-07-14 04:02:22.859523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.149 [2024-07-14 04:02:22.859749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.149 [2024-07-14 04:02:22.859778] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:04.149 qpair failed and we were unable to recover it. 00:30:04.149 [2024-07-14 04:02:22.859973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.149 [2024-07-14 04:02:22.860191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.149 [2024-07-14 04:02:22.860237] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:04.149 qpair failed and we were unable to recover it. 00:30:04.149 [2024-07-14 04:02:22.860438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.149 [2024-07-14 04:02:22.860633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.149 [2024-07-14 04:02:22.860663] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:04.149 qpair failed and we were unable to recover it. 00:30:04.149 [2024-07-14 04:02:22.860883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.149 [2024-07-14 04:02:22.861070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.149 [2024-07-14 04:02:22.861097] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:04.149 qpair failed and we were unable to recover it. 
00:30:04.149 [2024-07-14 04:02:22.861276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.149 [2024-07-14 04:02:22.861448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.149 [2024-07-14 04:02:22.861474] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:04.149 qpair failed and we were unable to recover it. 00:30:04.149 [2024-07-14 04:02:22.861628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.149 [2024-07-14 04:02:22.861775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.149 [2024-07-14 04:02:22.861801] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:04.149 qpair failed and we were unable to recover it. 00:30:04.149 [2024-07-14 04:02:22.862004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.149 [2024-07-14 04:02:22.862158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.149 [2024-07-14 04:02:22.862185] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:04.149 qpair failed and we were unable to recover it. 00:30:04.149 [2024-07-14 04:02:22.862361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.149 [2024-07-14 04:02:22.862613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.149 [2024-07-14 04:02:22.862660] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:04.149 qpair failed and we were unable to recover it. 00:30:04.149 [2024-07-14 04:02:22.862896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.149 [2024-07-14 04:02:22.863099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.149 [2024-07-14 04:02:22.863126] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:04.149 qpair failed and we were unable to recover it. 00:30:04.149 [2024-07-14 04:02:22.863306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.149 [2024-07-14 04:02:22.863485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.149 [2024-07-14 04:02:22.863528] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:04.149 qpair failed and we were unable to recover it. 00:30:04.149 [2024-07-14 04:02:22.863709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.149 [2024-07-14 04:02:22.863886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.150 [2024-07-14 04:02:22.863913] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:04.150 qpair failed and we were unable to recover it. 
00:30:04.150 [2024-07-14 04:02:22.864091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.150 [2024-07-14 04:02:22.864294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.150 [2024-07-14 04:02:22.864320] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:04.150 qpair failed and we were unable to recover it. 00:30:04.150 [2024-07-14 04:02:22.864501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.150 [2024-07-14 04:02:22.864675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.150 [2024-07-14 04:02:22.864702] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:04.150 qpair failed and we were unable to recover it. 00:30:04.150 [2024-07-14 04:02:22.864890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.150 [2024-07-14 04:02:22.865089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.150 [2024-07-14 04:02:22.865118] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:04.150 qpair failed and we were unable to recover it. 00:30:04.150 [2024-07-14 04:02:22.865343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.150 [2024-07-14 04:02:22.865493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.150 [2024-07-14 04:02:22.865520] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:04.150 qpair failed and we were unable to recover it. 00:30:04.150 [2024-07-14 04:02:22.865701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.150 [2024-07-14 04:02:22.865905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.150 [2024-07-14 04:02:22.865932] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:04.150 qpair failed and we were unable to recover it. 00:30:04.150 [2024-07-14 04:02:22.866106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.150 [2024-07-14 04:02:22.866313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.150 [2024-07-14 04:02:22.866339] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:04.150 qpair failed and we were unable to recover it. 00:30:04.150 [2024-07-14 04:02:22.866517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.150 [2024-07-14 04:02:22.866675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.150 [2024-07-14 04:02:22.866703] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:04.150 qpair failed and we were unable to recover it. 
00:30:04.150 [2024-07-14 04:02:22.866884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.150 [2024-07-14 04:02:22.867073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.150 [2024-07-14 04:02:22.867099] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:04.150 qpair failed and we were unable to recover it. 00:30:04.150 [2024-07-14 04:02:22.867300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.150 [2024-07-14 04:02:22.867506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.150 [2024-07-14 04:02:22.867553] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:04.150 qpair failed and we were unable to recover it. 00:30:04.150 [2024-07-14 04:02:22.867754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.150 [2024-07-14 04:02:22.867954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.150 [2024-07-14 04:02:22.867984] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:04.150 qpair failed and we were unable to recover it. 00:30:04.150 [2024-07-14 04:02:22.868184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.150 [2024-07-14 04:02:22.868367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.150 [2024-07-14 04:02:22.868394] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:04.150 qpair failed and we were unable to recover it. 00:30:04.150 [2024-07-14 04:02:22.868597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.150 [2024-07-14 04:02:22.868795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.150 [2024-07-14 04:02:22.868824] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:04.150 qpair failed and we were unable to recover it. 00:30:04.150 [2024-07-14 04:02:22.869037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.150 [2024-07-14 04:02:22.869222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.150 [2024-07-14 04:02:22.869248] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:04.150 qpair failed and we were unable to recover it. 00:30:04.150 [2024-07-14 04:02:22.869439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.150 [2024-07-14 04:02:22.869611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.150 [2024-07-14 04:02:22.869641] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:04.150 qpair failed and we were unable to recover it. 
00:30:04.150 [2024-07-14 04:02:22.869863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.150 [2024-07-14 04:02:22.870061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.150 [2024-07-14 04:02:22.870091] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:04.150 qpair failed and we were unable to recover it. 00:30:04.150 [2024-07-14 04:02:22.870300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.150 [2024-07-14 04:02:22.870489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.150 [2024-07-14 04:02:22.870515] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:04.150 qpair failed and we were unable to recover it. 00:30:04.150 [2024-07-14 04:02:22.870681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.150 [2024-07-14 04:02:22.870886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.150 [2024-07-14 04:02:22.870913] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:04.150 qpair failed and we were unable to recover it. 00:30:04.150 [2024-07-14 04:02:22.871124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.150 [2024-07-14 04:02:22.871323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.150 [2024-07-14 04:02:22.871351] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:04.150 qpair failed and we were unable to recover it. 00:30:04.150 [2024-07-14 04:02:22.871523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.150 [2024-07-14 04:02:22.871720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.150 [2024-07-14 04:02:22.871748] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:04.150 qpair failed and we were unable to recover it. 00:30:04.150 [2024-07-14 04:02:22.871974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.150 [2024-07-14 04:02:22.872194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.150 [2024-07-14 04:02:22.872245] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:04.150 qpair failed and we were unable to recover it. 00:30:04.150 [2024-07-14 04:02:22.872475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.150 [2024-07-14 04:02:22.872630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.150 [2024-07-14 04:02:22.872657] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:04.150 qpair failed and we were unable to recover it. 
00:30:04.150 [2024-07-14 04:02:22.872801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.150 [2024-07-14 04:02:22.872985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.150 [2024-07-14 04:02:22.873012] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:04.150 qpair failed and we were unable to recover it. 00:30:04.150 [2024-07-14 04:02:22.873217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.150 [2024-07-14 04:02:22.873410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.150 [2024-07-14 04:02:22.873439] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:04.150 qpair failed and we were unable to recover it. 00:30:04.150 [2024-07-14 04:02:22.873635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.150 [2024-07-14 04:02:22.873816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.150 [2024-07-14 04:02:22.873842] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:04.150 qpair failed and we were unable to recover it. 00:30:04.150 [2024-07-14 04:02:22.874057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.150 [2024-07-14 04:02:22.874256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.150 [2024-07-14 04:02:22.874287] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9544000b90 with addr=10.0.0.2, port=4420 00:30:04.150 qpair failed and we were unable to recover it. 00:30:04.150 [2024-07-14 04:02:22.874469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.150 [2024-07-14 04:02:22.874721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.150 [2024-07-14 04:02:22.874765] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9544000b90 with addr=10.0.0.2, port=4420 00:30:04.150 qpair failed and we were unable to recover it. 00:30:04.150 [2024-07-14 04:02:22.874926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.150 [2024-07-14 04:02:22.875139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.150 [2024-07-14 04:02:22.875176] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9544000b90 with addr=10.0.0.2, port=4420 00:30:04.150 qpair failed and we were unable to recover it. 00:30:04.150 [2024-07-14 04:02:22.875362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.150 [2024-07-14 04:02:22.875581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.150 [2024-07-14 04:02:22.875614] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.150 qpair failed and we were unable to recover it. 
00:30:04.150 [2024-07-14 04:02:22.875883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.150 [2024-07-14 04:02:22.876088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.150 [2024-07-14 04:02:22.876117] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.150 qpair failed and we were unable to recover it. 00:30:04.150 [2024-07-14 04:02:22.876314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.150 [2024-07-14 04:02:22.876495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.150 [2024-07-14 04:02:22.876524] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.150 qpair failed and we were unable to recover it. 00:30:04.151 [2024-07-14 04:02:22.876714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.151 [2024-07-14 04:02:22.876976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.151 [2024-07-14 04:02:22.877003] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.151 qpair failed and we were unable to recover it. 00:30:04.151 [2024-07-14 04:02:22.877233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.151 [2024-07-14 04:02:22.877457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.151 [2024-07-14 04:02:22.877504] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.151 qpair failed and we were unable to recover it. 00:30:04.151 [2024-07-14 04:02:22.877764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.151 [2024-07-14 04:02:22.877983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.151 [2024-07-14 04:02:22.878010] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.151 qpair failed and we were unable to recover it. 00:30:04.151 [2024-07-14 04:02:22.878162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.151 [2024-07-14 04:02:22.878344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.151 [2024-07-14 04:02:22.878373] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.151 qpair failed and we were unable to recover it. 00:30:04.151 [2024-07-14 04:02:22.878599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.151 [2024-07-14 04:02:22.878800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.151 [2024-07-14 04:02:22.878829] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.151 qpair failed and we were unable to recover it. 
00:30:04.151 [2024-07-14 04:02:22.879008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.151 [2024-07-14 04:02:22.879216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.151 [2024-07-14 04:02:22.879257] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.151 qpair failed and we were unable to recover it. 00:30:04.151 [2024-07-14 04:02:22.879454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.151 [2024-07-14 04:02:22.879670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.151 [2024-07-14 04:02:22.879700] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.151 qpair failed and we were unable to recover it. 00:30:04.151 [2024-07-14 04:02:22.879883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.151 [2024-07-14 04:02:22.880111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.151 [2024-07-14 04:02:22.880138] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.151 qpair failed and we were unable to recover it. 00:30:04.151 [2024-07-14 04:02:22.880318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.151 [2024-07-14 04:02:22.880484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.151 [2024-07-14 04:02:22.880532] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.151 qpair failed and we were unable to recover it. 00:30:04.151 [2024-07-14 04:02:22.880760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.151 [2024-07-14 04:02:22.881013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.151 [2024-07-14 04:02:22.881044] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.151 qpair failed and we were unable to recover it. 00:30:04.151 [2024-07-14 04:02:22.881196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.151 [2024-07-14 04:02:22.881403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.151 [2024-07-14 04:02:22.881432] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.151 qpair failed and we were unable to recover it. 00:30:04.151 [2024-07-14 04:02:22.881656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.151 [2024-07-14 04:02:22.881854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.151 [2024-07-14 04:02:22.881892] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.151 qpair failed and we were unable to recover it. 
00:30:04.151 [2024-07-14 04:02:22.882092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.151 [2024-07-14 04:02:22.882275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.151 [2024-07-14 04:02:22.882302] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.151 qpair failed and we were unable to recover it. 00:30:04.151 [2024-07-14 04:02:22.882557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.151 [2024-07-14 04:02:22.882797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.151 [2024-07-14 04:02:22.882826] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.151 qpair failed and we were unable to recover it. 00:30:04.151 [2024-07-14 04:02:22.883037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.151 [2024-07-14 04:02:22.883186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.151 [2024-07-14 04:02:22.883212] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.151 qpair failed and we were unable to recover it. 00:30:04.151 [2024-07-14 04:02:22.883364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.151 [2024-07-14 04:02:22.883546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.151 [2024-07-14 04:02:22.883572] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.151 qpair failed and we were unable to recover it. 00:30:04.151 [2024-07-14 04:02:22.883746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.151 [2024-07-14 04:02:22.883921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.151 [2024-07-14 04:02:22.883948] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.151 qpair failed and we were unable to recover it. 00:30:04.151 [2024-07-14 04:02:22.884128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.151 [2024-07-14 04:02:22.884308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.151 [2024-07-14 04:02:22.884334] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.151 qpair failed and we were unable to recover it. 00:30:04.151 [2024-07-14 04:02:22.884566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.151 [2024-07-14 04:02:22.884762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.151 [2024-07-14 04:02:22.884792] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.151 qpair failed and we were unable to recover it. 
00:30:04.151 [2024-07-14 04:02:22.884977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.151 [2024-07-14 04:02:22.885165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.151 [2024-07-14 04:02:22.885191] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.151 qpair failed and we were unable to recover it. 00:30:04.151 [2024-07-14 04:02:22.885448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.151 [2024-07-14 04:02:22.885862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.151 [2024-07-14 04:02:22.885946] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.151 qpair failed and we were unable to recover it. 00:30:04.151 [2024-07-14 04:02:22.886134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.151 [2024-07-14 04:02:22.886343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.151 [2024-07-14 04:02:22.886389] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.151 qpair failed and we were unable to recover it. 00:30:04.151 [2024-07-14 04:02:22.886647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.151 [2024-07-14 04:02:22.886878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.151 [2024-07-14 04:02:22.886905] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.151 qpair failed and we were unable to recover it. 00:30:04.151 [2024-07-14 04:02:22.887114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.151 [2024-07-14 04:02:22.887288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.151 [2024-07-14 04:02:22.887315] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.151 qpair failed and we were unable to recover it. 00:30:04.151 [2024-07-14 04:02:22.887495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.151 [2024-07-14 04:02:22.887820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.151 [2024-07-14 04:02:22.887876] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.151 qpair failed and we were unable to recover it. 00:30:04.151 [2024-07-14 04:02:22.888112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.151 [2024-07-14 04:02:22.888318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.151 [2024-07-14 04:02:22.888347] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.151 qpair failed and we were unable to recover it. 
00:30:04.152 [2024-07-14 04:02:22.888571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.152 [2024-07-14 04:02:22.888799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.152 [2024-07-14 04:02:22.888828] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.152 qpair failed and we were unable to recover it. 00:30:04.152 [2024-07-14 04:02:22.889057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.152 [2024-07-14 04:02:22.889260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.152 [2024-07-14 04:02:22.889286] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.152 qpair failed and we were unable to recover it. 00:30:04.152 [2024-07-14 04:02:22.889673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.152 [2024-07-14 04:02:22.889937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.152 [2024-07-14 04:02:22.889963] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.152 qpair failed and we were unable to recover it. 00:30:04.152 [2024-07-14 04:02:22.890159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.152 [2024-07-14 04:02:22.890350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.152 [2024-07-14 04:02:22.890377] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.152 qpair failed and we were unable to recover it. 00:30:04.152 [2024-07-14 04:02:22.890566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.152 [2024-07-14 04:02:22.890745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.152 [2024-07-14 04:02:22.890771] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.152 qpair failed and we were unable to recover it. 00:30:04.152 [2024-07-14 04:02:22.891025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.152 [2024-07-14 04:02:22.891210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.152 [2024-07-14 04:02:22.891236] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.152 qpair failed and we were unable to recover it. 00:30:04.152 [2024-07-14 04:02:22.891409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.152 [2024-07-14 04:02:22.891587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.152 [2024-07-14 04:02:22.891613] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.152 qpair failed and we were unable to recover it. 
00:30:04.152 [2024-07-14 04:02:22.891799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.152 [2024-07-14 04:02:22.891950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.152 [2024-07-14 04:02:22.891985] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.152 qpair failed and we were unable to recover it. 00:30:04.152 [2024-07-14 04:02:22.892135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.152 [2024-07-14 04:02:22.892354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.152 [2024-07-14 04:02:22.892382] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.152 qpair failed and we were unable to recover it. 00:30:04.152 [2024-07-14 04:02:22.892581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.152 [2024-07-14 04:02:22.892783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.152 [2024-07-14 04:02:22.892809] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.152 qpair failed and we were unable to recover it. 00:30:04.152 [2024-07-14 04:02:22.892993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.152 [2024-07-14 04:02:22.893178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.152 [2024-07-14 04:02:22.893204] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.152 qpair failed and we were unable to recover it. 00:30:04.152 [2024-07-14 04:02:22.893378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.152 [2024-07-14 04:02:22.893565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.152 [2024-07-14 04:02:22.893591] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.152 qpair failed and we were unable to recover it. 00:30:04.152 [2024-07-14 04:02:22.893792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.152 [2024-07-14 04:02:22.894019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.152 [2024-07-14 04:02:22.894049] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.152 qpair failed and we were unable to recover it. 00:30:04.152 [2024-07-14 04:02:22.894271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.152 [2024-07-14 04:02:22.894446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.152 [2024-07-14 04:02:22.894475] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.152 qpair failed and we were unable to recover it. 
00:30:04.152 [2024-07-14 04:02:22.894722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.152 [2024-07-14 04:02:22.894947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.152 [2024-07-14 04:02:22.894976] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.152 qpair failed and we were unable to recover it. 00:30:04.152 [2024-07-14 04:02:22.895153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.152 [2024-07-14 04:02:22.895354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.152 [2024-07-14 04:02:22.895381] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.152 qpair failed and we were unable to recover it. 00:30:04.152 [2024-07-14 04:02:22.895519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.152 [2024-07-14 04:02:22.895688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.152 [2024-07-14 04:02:22.895717] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.152 qpair failed and we were unable to recover it. 00:30:04.152 [2024-07-14 04:02:22.895947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.152 [2024-07-14 04:02:22.896093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.152 [2024-07-14 04:02:22.896120] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.152 qpair failed and we were unable to recover it. 00:30:04.152 [2024-07-14 04:02:22.896301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.152 [2024-07-14 04:02:22.896507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.152 [2024-07-14 04:02:22.896533] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.152 qpair failed and we were unable to recover it. 00:30:04.152 [2024-07-14 04:02:22.896707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.152 [2024-07-14 04:02:22.896911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.152 [2024-07-14 04:02:22.896938] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.152 qpair failed and we were unable to recover it. 00:30:04.152 [2024-07-14 04:02:22.897115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.152 [2024-07-14 04:02:22.897330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.152 [2024-07-14 04:02:22.897358] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.152 qpair failed and we were unable to recover it. 
00:30:04.152 [2024-07-14 04:02:22.897526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.152 [2024-07-14 04:02:22.897748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.152 [2024-07-14 04:02:22.897777] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.152 qpair failed and we were unable to recover it. 00:30:04.152 [2024-07-14 04:02:22.897956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.152 [2024-07-14 04:02:22.898112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.152 [2024-07-14 04:02:22.898159] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.152 qpair failed and we were unable to recover it. 00:30:04.152 [2024-07-14 04:02:22.898390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.152 [2024-07-14 04:02:22.898637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.152 [2024-07-14 04:02:22.898666] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.152 qpair failed and we were unable to recover it. 00:30:04.152 [2024-07-14 04:02:22.898881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.152 [2024-07-14 04:02:22.899084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.152 [2024-07-14 04:02:22.899113] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.152 qpair failed and we were unable to recover it. 00:30:04.152 [2024-07-14 04:02:22.899282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.152 [2024-07-14 04:02:22.899549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.152 [2024-07-14 04:02:22.899614] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.152 qpair failed and we were unable to recover it. 00:30:04.152 [2024-07-14 04:02:22.899827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.152 [2024-07-14 04:02:22.900006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.152 [2024-07-14 04:02:22.900036] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.152 qpair failed and we were unable to recover it. 00:30:04.152 [2024-07-14 04:02:22.900258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.152 [2024-07-14 04:02:22.900624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.152 [2024-07-14 04:02:22.900675] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.152 qpair failed and we were unable to recover it. 
00:30:04.152 [2024-07-14 04:02:22.900902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.152 [2024-07-14 04:02:22.901124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.152 [2024-07-14 04:02:22.901153] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.152 qpair failed and we were unable to recover it. 00:30:04.152 [2024-07-14 04:02:22.901358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.152 [2024-07-14 04:02:22.901562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.152 [2024-07-14 04:02:22.901588] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.152 qpair failed and we were unable to recover it. 00:30:04.152 [2024-07-14 04:02:22.901735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.152 [2024-07-14 04:02:22.901935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.153 [2024-07-14 04:02:22.901965] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.153 qpair failed and we were unable to recover it. 00:30:04.153 [2024-07-14 04:02:22.902163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.153 [2024-07-14 04:02:22.902346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.153 [2024-07-14 04:02:22.902372] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.153 qpair failed and we were unable to recover it. 00:30:04.153 [2024-07-14 04:02:22.902575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.153 [2024-07-14 04:02:22.902802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.153 [2024-07-14 04:02:22.902831] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.153 qpair failed and we were unable to recover it. 00:30:04.153 [2024-07-14 04:02:22.903009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.153 [2024-07-14 04:02:22.903188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.153 [2024-07-14 04:02:22.903217] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.153 qpair failed and we were unable to recover it. 00:30:04.153 [2024-07-14 04:02:22.903442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.153 [2024-07-14 04:02:22.903685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.153 [2024-07-14 04:02:22.903718] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.153 qpair failed and we were unable to recover it. 
00:30:04.153 [2024-07-14 04:02:22.903917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.153 [2024-07-14 04:02:22.904096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.153 [2024-07-14 04:02:22.904123] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.153 qpair failed and we were unable to recover it. 00:30:04.153 [2024-07-14 04:02:22.904326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.153 [2024-07-14 04:02:22.904527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.153 [2024-07-14 04:02:22.904585] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.153 qpair failed and we were unable to recover it. 00:30:04.153 [2024-07-14 04:02:22.904765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.153 [2024-07-14 04:02:22.904970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.153 [2024-07-14 04:02:22.904997] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.153 qpair failed and we were unable to recover it. 00:30:04.153 [2024-07-14 04:02:22.905176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.153 [2024-07-14 04:02:22.905465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.153 [2024-07-14 04:02:22.905524] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.153 qpair failed and we were unable to recover it. 00:30:04.153 [2024-07-14 04:02:22.905747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.153 [2024-07-14 04:02:22.905903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.153 [2024-07-14 04:02:22.905929] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.153 qpair failed and we were unable to recover it. 00:30:04.153 [2024-07-14 04:02:22.906086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.153 [2024-07-14 04:02:22.906242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.153 [2024-07-14 04:02:22.906269] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.153 qpair failed and we were unable to recover it. 00:30:04.153 [2024-07-14 04:02:22.906448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.153 [2024-07-14 04:02:22.906647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.153 [2024-07-14 04:02:22.906676] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.153 qpair failed and we were unable to recover it. 
00:30:04.153 [2024-07-14 04:02:22.906907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.153 [2024-07-14 04:02:22.907108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.153 [2024-07-14 04:02:22.907137] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.153 qpair failed and we were unable to recover it. 00:30:04.153 [2024-07-14 04:02:22.907318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.153 [2024-07-14 04:02:22.907523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.153 [2024-07-14 04:02:22.907549] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.153 qpair failed and we were unable to recover it. 00:30:04.153 [2024-07-14 04:02:22.907744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.153 [2024-07-14 04:02:22.907906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.153 [2024-07-14 04:02:22.907936] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.153 qpair failed and we were unable to recover it. 00:30:04.153 [2024-07-14 04:02:22.908163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.153 [2024-07-14 04:02:22.908334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.153 [2024-07-14 04:02:22.908363] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.153 qpair failed and we were unable to recover it. 00:30:04.153 [2024-07-14 04:02:22.908552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.153 [2024-07-14 04:02:22.908742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.153 [2024-07-14 04:02:22.908771] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.153 qpair failed and we were unable to recover it. 00:30:04.153 [2024-07-14 04:02:22.908972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.153 [2024-07-14 04:02:22.909154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.153 [2024-07-14 04:02:22.909180] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.153 qpair failed and we were unable to recover it. 00:30:04.153 [2024-07-14 04:02:22.909360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.153 [2024-07-14 04:02:22.909535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.153 [2024-07-14 04:02:22.909577] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.153 qpair failed and we were unable to recover it. 
00:30:04.153 [2024-07-14 04:02:22.909798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.153 [2024-07-14 04:02:22.909997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.153 [2024-07-14 04:02:22.910025] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.153 qpair failed and we were unable to recover it. 00:30:04.153 [2024-07-14 04:02:22.910190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.153 [2024-07-14 04:02:22.910340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.153 [2024-07-14 04:02:22.910366] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.153 qpair failed and we were unable to recover it. 00:30:04.153 [2024-07-14 04:02:22.910549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.153 [2024-07-14 04:02:22.910689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.153 [2024-07-14 04:02:22.910715] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.153 qpair failed and we were unable to recover it. 00:30:04.153 [2024-07-14 04:02:22.910885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.153 [2024-07-14 04:02:22.911042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.153 [2024-07-14 04:02:22.911068] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.153 qpair failed and we were unable to recover it. 00:30:04.153 [2024-07-14 04:02:22.911223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.153 [2024-07-14 04:02:22.911426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.153 [2024-07-14 04:02:22.911452] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.153 qpair failed and we were unable to recover it. 00:30:04.153 [2024-07-14 04:02:22.911616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.153 [2024-07-14 04:02:22.911791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.153 [2024-07-14 04:02:22.911817] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.153 qpair failed and we were unable to recover it. 00:30:04.153 [2024-07-14 04:02:22.911973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.153 [2024-07-14 04:02:22.912153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.153 [2024-07-14 04:02:22.912180] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.153 qpair failed and we were unable to recover it. 
00:30:04.153 [2024-07-14 04:02:22.912417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.153 [2024-07-14 04:02:22.912641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.153 [2024-07-14 04:02:22.912667] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.153 qpair failed and we were unable to recover it. 00:30:04.153 [2024-07-14 04:02:22.912896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.153 [2024-07-14 04:02:22.913062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.153 [2024-07-14 04:02:22.913093] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.153 qpair failed and we were unable to recover it. 00:30:04.153 [2024-07-14 04:02:22.913349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.153 [2024-07-14 04:02:22.913589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.153 [2024-07-14 04:02:22.913615] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.153 qpair failed and we were unable to recover it. 00:30:04.153 [2024-07-14 04:02:22.913800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.153 [2024-07-14 04:02:22.913981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.153 [2024-07-14 04:02:22.914009] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.153 qpair failed and we were unable to recover it. 00:30:04.153 [2024-07-14 04:02:22.914159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.153 [2024-07-14 04:02:22.914314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.153 [2024-07-14 04:02:22.914341] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.153 qpair failed and we were unable to recover it. 00:30:04.153 [2024-07-14 04:02:22.914519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.154 [2024-07-14 04:02:22.914759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.154 [2024-07-14 04:02:22.914788] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.154 qpair failed and we were unable to recover it. 00:30:04.154 [2024-07-14 04:02:22.914963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.154 [2024-07-14 04:02:22.915187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.154 [2024-07-14 04:02:22.915217] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.154 qpair failed and we were unable to recover it. 
00:30:04.154 [2024-07-14 04:02:22.915451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.154 [2024-07-14 04:02:22.915631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.154 [2024-07-14 04:02:22.915659] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.154 qpair failed and we were unable to recover it. 00:30:04.154 [2024-07-14 04:02:22.915807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.154 [2024-07-14 04:02:22.915967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.154 [2024-07-14 04:02:22.915994] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.154 qpair failed and we were unable to recover it. 00:30:04.154 [2024-07-14 04:02:22.916193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.154 [2024-07-14 04:02:22.916470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.154 [2024-07-14 04:02:22.916517] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.154 qpair failed and we were unable to recover it. 00:30:04.154 [2024-07-14 04:02:22.916742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.154 [2024-07-14 04:02:22.916942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.154 [2024-07-14 04:02:22.916972] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.154 qpair failed and we were unable to recover it. 00:30:04.154 [2024-07-14 04:02:22.917156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.154 [2024-07-14 04:02:22.917313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.154 [2024-07-14 04:02:22.917340] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.154 qpair failed and we were unable to recover it. 00:30:04.154 [2024-07-14 04:02:22.917520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.154 [2024-07-14 04:02:22.917686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.154 [2024-07-14 04:02:22.917715] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.154 qpair failed and we were unable to recover it. 00:30:04.154 [2024-07-14 04:02:22.917919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.154 [2024-07-14 04:02:22.918072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.154 [2024-07-14 04:02:22.918099] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.154 qpair failed and we were unable to recover it. 
00:30:04.154 [2024-07-14 04:02:22.918273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.154 [2024-07-14 04:02:22.918481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.154 [2024-07-14 04:02:22.918510] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.154 qpair failed and we were unable to recover it. 00:30:04.154 [2024-07-14 04:02:22.918738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.154 [2024-07-14 04:02:22.918919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.154 [2024-07-14 04:02:22.918948] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.154 qpair failed and we were unable to recover it. 00:30:04.154 [2024-07-14 04:02:22.919135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.154 [2024-07-14 04:02:22.919326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.154 [2024-07-14 04:02:22.919353] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.154 qpair failed and we were unable to recover it. 00:30:04.154 [2024-07-14 04:02:22.919508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.154 [2024-07-14 04:02:22.919687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.154 [2024-07-14 04:02:22.919713] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.154 qpair failed and we were unable to recover it. 00:30:04.154 [2024-07-14 04:02:22.919920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.154 [2024-07-14 04:02:22.920089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.154 [2024-07-14 04:02:22.920118] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.154 qpair failed and we were unable to recover it. 00:30:04.154 [2024-07-14 04:02:22.920285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.154 [2024-07-14 04:02:22.920463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.154 [2024-07-14 04:02:22.920493] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.154 qpair failed and we were unable to recover it. 00:30:04.154 [2024-07-14 04:02:22.920650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.154 [2024-07-14 04:02:22.920840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.154 [2024-07-14 04:02:22.920873] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.154 qpair failed and we were unable to recover it. 
00:30:04.154 [2024-07-14 04:02:22.921102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.154 [2024-07-14 04:02:22.921278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.154 [2024-07-14 04:02:22.921309] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.154 qpair failed and we were unable to recover it. 00:30:04.154 [2024-07-14 04:02:22.921515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.154 [2024-07-14 04:02:22.921691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.154 [2024-07-14 04:02:22.921717] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.154 qpair failed and we were unable to recover it. 00:30:04.154 [2024-07-14 04:02:22.921922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.154 [2024-07-14 04:02:22.922101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.154 [2024-07-14 04:02:22.922127] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.154 qpair failed and we were unable to recover it. 00:30:04.154 [2024-07-14 04:02:22.922274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.154 [2024-07-14 04:02:22.922424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.154 [2024-07-14 04:02:22.922450] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.154 qpair failed and we were unable to recover it. 00:30:04.154 [2024-07-14 04:02:22.922686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.154 [2024-07-14 04:02:22.922876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.154 [2024-07-14 04:02:22.922905] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.154 qpair failed and we were unable to recover it. 00:30:04.154 [2024-07-14 04:02:22.923126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.154 [2024-07-14 04:02:22.923321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.154 [2024-07-14 04:02:22.923350] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.154 qpair failed and we were unable to recover it. 00:30:04.154 [2024-07-14 04:02:22.923551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.154 [2024-07-14 04:02:22.923724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.154 [2024-07-14 04:02:22.923750] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.154 qpair failed and we were unable to recover it. 
00:30:04.154 [2024-07-14 04:02:22.923897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.154 [2024-07-14 04:02:22.924074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.154 [2024-07-14 04:02:22.924100] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.154 qpair failed and we were unable to recover it. 00:30:04.154 [2024-07-14 04:02:22.924319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.154 [2024-07-14 04:02:22.924500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.154 [2024-07-14 04:02:22.924525] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.154 qpair failed and we were unable to recover it. 00:30:04.154 [2024-07-14 04:02:22.924684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.154 [2024-07-14 04:02:22.924844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.154 [2024-07-14 04:02:22.924907] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.154 qpair failed and we were unable to recover it. 00:30:04.154 [2024-07-14 04:02:22.925096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.154 [2024-07-14 04:02:22.926017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.154 [2024-07-14 04:02:22.926048] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.154 qpair failed and we were unable to recover it. 00:30:04.154 [2024-07-14 04:02:22.926266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.154 [2024-07-14 04:02:22.926531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.154 [2024-07-14 04:02:22.926583] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.154 qpair failed and we were unable to recover it. 00:30:04.154 [2024-07-14 04:02:22.926790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.154 [2024-07-14 04:02:22.926989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.154 [2024-07-14 04:02:22.927019] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.154 qpair failed and we were unable to recover it. 00:30:04.154 [2024-07-14 04:02:22.927190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.154 [2024-07-14 04:02:22.927343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.154 [2024-07-14 04:02:22.927370] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.154 qpair failed and we were unable to recover it. 
00:30:04.154 [2024-07-14 04:02:22.927572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.154 [2024-07-14 04:02:22.927752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.154 [2024-07-14 04:02:22.927778] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.154 qpair failed and we were unable to recover it. 00:30:04.155 [2024-07-14 04:02:22.927942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.155 [2024-07-14 04:02:22.928126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.155 [2024-07-14 04:02:22.928150] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.155 qpair failed and we were unable to recover it. 00:30:04.155 [2024-07-14 04:02:22.928366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.155 [2024-07-14 04:02:22.928543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.155 [2024-07-14 04:02:22.928568] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.155 qpair failed and we were unable to recover it. 00:30:04.155 [2024-07-14 04:02:22.928750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.155 [2024-07-14 04:02:22.928929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.155 [2024-07-14 04:02:22.928954] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.155 qpair failed and we were unable to recover it. 00:30:04.155 [2024-07-14 04:02:22.929108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.155 [2024-07-14 04:02:22.929288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.155 [2024-07-14 04:02:22.929313] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.155 qpair failed and we were unable to recover it. 00:30:04.155 [2024-07-14 04:02:22.929472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.155 [2024-07-14 04:02:22.929620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.155 [2024-07-14 04:02:22.929645] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.155 qpair failed and we were unable to recover it. 00:30:04.155 [2024-07-14 04:02:22.929821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.155 [2024-07-14 04:02:22.929967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.155 [2024-07-14 04:02:22.929992] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.155 qpair failed and we were unable to recover it. 
00:30:04.155 [2024-07-14 04:02:22.930189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.155 [2024-07-14 04:02:22.930389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.155 [2024-07-14 04:02:22.930414] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.155 qpair failed and we were unable to recover it. 00:30:04.155 [2024-07-14 04:02:22.930596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.155 [2024-07-14 04:02:22.930807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.155 [2024-07-14 04:02:22.930831] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.155 qpair failed and we were unable to recover it. 00:30:04.155 [2024-07-14 04:02:22.930998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.155 [2024-07-14 04:02:22.931177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.155 [2024-07-14 04:02:22.931201] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.155 qpair failed and we were unable to recover it. 00:30:04.155 [2024-07-14 04:02:22.931378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.155 [2024-07-14 04:02:22.931527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.155 [2024-07-14 04:02:22.931551] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.155 qpair failed and we were unable to recover it. 00:30:04.155 [2024-07-14 04:02:22.931705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.155 [2024-07-14 04:02:22.931885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.155 [2024-07-14 04:02:22.931911] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.155 qpair failed and we were unable to recover it. 00:30:04.155 [2024-07-14 04:02:22.932091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.155 [2024-07-14 04:02:22.932263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.155 [2024-07-14 04:02:22.932287] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.155 qpair failed and we were unable to recover it. 00:30:04.155 [2024-07-14 04:02:22.932459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.155 [2024-07-14 04:02:22.932666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.155 [2024-07-14 04:02:22.932690] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.155 qpair failed and we were unable to recover it. 
00:30:04.155 [2024-07-14 04:02:22.932902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.155 [2024-07-14 04:02:22.933115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.155 [2024-07-14 04:02:22.933140] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.155 qpair failed and we were unable to recover it. 00:30:04.155 [2024-07-14 04:02:22.933345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.155 [2024-07-14 04:02:22.933492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.155 [2024-07-14 04:02:22.933516] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.155 qpair failed and we were unable to recover it. 00:30:04.155 [2024-07-14 04:02:22.933664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.155 [2024-07-14 04:02:22.933844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.155 [2024-07-14 04:02:22.933875] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.155 qpair failed and we were unable to recover it. 00:30:04.155 [2024-07-14 04:02:22.934077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.155 [2024-07-14 04:02:22.934297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.155 [2024-07-14 04:02:22.934325] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.155 qpair failed and we were unable to recover it. 00:30:04.155 [2024-07-14 04:02:22.934550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.155 [2024-07-14 04:02:22.934735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.155 [2024-07-14 04:02:22.934763] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.155 qpair failed and we were unable to recover it. 00:30:04.155 [2024-07-14 04:02:22.934957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.155 [2024-07-14 04:02:22.935130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.155 [2024-07-14 04:02:22.935157] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.155 qpair failed and we were unable to recover it. 00:30:04.155 [2024-07-14 04:02:22.935334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.155 [2024-07-14 04:02:22.935536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.155 [2024-07-14 04:02:22.935560] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.155 qpair failed and we were unable to recover it. 
00:30:04.155 [2024-07-14 04:02:22.935775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.155 [2024-07-14 04:02:22.935962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.155 [2024-07-14 04:02:22.935990] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.155 qpair failed and we were unable to recover it. 00:30:04.155 [2024-07-14 04:02:22.936149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.155 [2024-07-14 04:02:22.936329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.155 [2024-07-14 04:02:22.936361] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.155 qpair failed and we were unable to recover it. 00:30:04.155 [2024-07-14 04:02:22.936711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.155 [2024-07-14 04:02:22.936951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.155 [2024-07-14 04:02:22.936979] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.155 qpair failed and we were unable to recover it. 00:30:04.155 [2024-07-14 04:02:22.937180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.155 [2024-07-14 04:02:22.937357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.155 [2024-07-14 04:02:22.937381] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.155 qpair failed and we were unable to recover it. 00:30:04.155 [2024-07-14 04:02:22.937559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.155 [2024-07-14 04:02:22.937763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.155 [2024-07-14 04:02:22.937787] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.155 qpair failed and we were unable to recover it. 00:30:04.155 [2024-07-14 04:02:22.937983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.155 [2024-07-14 04:02:22.938273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.155 [2024-07-14 04:02:22.938322] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.155 qpair failed and we were unable to recover it. 00:30:04.156 [2024-07-14 04:02:22.938670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.156 [2024-07-14 04:02:22.938885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.156 [2024-07-14 04:02:22.938914] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.156 qpair failed and we were unable to recover it. 
00:30:04.156 [2024-07-14 04:02:22.939141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.156 [2024-07-14 04:02:22.939319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.156 [2024-07-14 04:02:22.939343] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.156 qpair failed and we were unable to recover it. 00:30:04.156 [2024-07-14 04:02:22.939500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.156 [2024-07-14 04:02:22.939684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.156 [2024-07-14 04:02:22.939709] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.156 qpair failed and we were unable to recover it. 00:30:04.156 [2024-07-14 04:02:22.939885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.156 [2024-07-14 04:02:22.940061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.156 [2024-07-14 04:02:22.940086] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.156 qpair failed and we were unable to recover it. 00:30:04.156 [2024-07-14 04:02:22.940268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.156 [2024-07-14 04:02:22.940450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.156 [2024-07-14 04:02:22.940495] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.156 qpair failed and we were unable to recover it. 00:30:04.156 [2024-07-14 04:02:22.940746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.156 [2024-07-14 04:02:22.940920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.156 [2024-07-14 04:02:22.940946] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.156 qpair failed and we were unable to recover it. 00:30:04.156 [2024-07-14 04:02:22.941108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.156 [2024-07-14 04:02:22.941310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.156 [2024-07-14 04:02:22.941334] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.156 qpair failed and we were unable to recover it. 00:30:04.156 [2024-07-14 04:02:22.941544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.156 [2024-07-14 04:02:22.941761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.156 [2024-07-14 04:02:22.941788] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.156 qpair failed and we were unable to recover it. 
00:30:04.156 [2024-07-14 04:02:22.942016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.156 [2024-07-14 04:02:22.942234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.156 [2024-07-14 04:02:22.942266] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.156 qpair failed and we were unable to recover it. 00:30:04.156 [2024-07-14 04:02:22.942494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.156 [2024-07-14 04:02:22.942752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.156 [2024-07-14 04:02:22.942805] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.156 qpair failed and we were unable to recover it. 00:30:04.156 [2024-07-14 04:02:22.943010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.156 [2024-07-14 04:02:22.943163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.156 [2024-07-14 04:02:22.943187] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.156 qpair failed and we were unable to recover it. 00:30:04.156 [2024-07-14 04:02:22.943377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.156 [2024-07-14 04:02:22.943601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.156 [2024-07-14 04:02:22.943646] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.156 qpair failed and we were unable to recover it. 00:30:04.156 [2024-07-14 04:02:22.943806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.156 [2024-07-14 04:02:22.944025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.156 [2024-07-14 04:02:22.944054] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.156 qpair failed and we were unable to recover it. 00:30:04.156 [2024-07-14 04:02:22.944260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.156 [2024-07-14 04:02:22.944415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.156 [2024-07-14 04:02:22.944439] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.156 qpair failed and we were unable to recover it. 00:30:04.156 [2024-07-14 04:02:22.944640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.156 [2024-07-14 04:02:22.944821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.156 [2024-07-14 04:02:22.944848] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.156 qpair failed and we were unable to recover it. 
00:30:04.156 [2024-07-14 04:02:22.945054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.156 [2024-07-14 04:02:22.945333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.156 [2024-07-14 04:02:22.945384] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.156 qpair failed and we were unable to recover it. 00:30:04.156 [2024-07-14 04:02:22.945725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.156 [2024-07-14 04:02:22.945959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.156 [2024-07-14 04:02:22.945989] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.156 qpair failed and we were unable to recover it. 00:30:04.156 [2024-07-14 04:02:22.946197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.156 [2024-07-14 04:02:22.946414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.156 [2024-07-14 04:02:22.946460] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.156 qpair failed and we were unable to recover it. 00:30:04.156 [2024-07-14 04:02:22.946630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.156 [2024-07-14 04:02:22.946822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.156 [2024-07-14 04:02:22.946849] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.156 qpair failed and we were unable to recover it. 00:30:04.156 [2024-07-14 04:02:22.947074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.156 [2024-07-14 04:02:22.947260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.156 [2024-07-14 04:02:22.947305] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.156 qpair failed and we were unable to recover it. 00:30:04.156 [2024-07-14 04:02:22.947508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.156 [2024-07-14 04:02:22.947687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.156 [2024-07-14 04:02:22.947711] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.156 qpair failed and we were unable to recover it. 00:30:04.156 [2024-07-14 04:02:22.947875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.156 [2024-07-14 04:02:22.948100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.156 [2024-07-14 04:02:22.948127] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.156 qpair failed and we were unable to recover it. 
00:30:04.156 [2024-07-14 04:02:22.948335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.156 [2024-07-14 04:02:22.948614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.156 [2024-07-14 04:02:22.948669] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.156 qpair failed and we were unable to recover it. 00:30:04.156 [2024-07-14 04:02:22.948834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.156 [2024-07-14 04:02:22.949030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.156 [2024-07-14 04:02:22.949059] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.156 qpair failed and we were unable to recover it. 00:30:04.156 [2024-07-14 04:02:22.949265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.156 [2024-07-14 04:02:22.949457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.156 [2024-07-14 04:02:22.949482] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.156 qpair failed and we were unable to recover it. 00:30:04.156 [2024-07-14 04:02:22.949711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.156 [2024-07-14 04:02:22.949884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.156 [2024-07-14 04:02:22.949912] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.156 qpair failed and we were unable to recover it. 00:30:04.156 [2024-07-14 04:02:22.950116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.156 [2024-07-14 04:02:22.950406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.156 [2024-07-14 04:02:22.950456] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.156 qpair failed and we were unable to recover it. 00:30:04.156 [2024-07-14 04:02:22.950652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.156 [2024-07-14 04:02:22.950830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.156 [2024-07-14 04:02:22.950858] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.156 qpair failed and we were unable to recover it. 00:30:04.156 [2024-07-14 04:02:22.951067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.156 [2024-07-14 04:02:22.951299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.156 [2024-07-14 04:02:22.951350] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.156 qpair failed and we were unable to recover it. 
00:30:04.156 [2024-07-14 04:02:22.951555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.156 [2024-07-14 04:02:22.951759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.156 [2024-07-14 04:02:22.951784] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.156 qpair failed and we were unable to recover it. 00:30:04.156 [2024-07-14 04:02:22.951984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.156 [2024-07-14 04:02:22.952205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.157 [2024-07-14 04:02:22.952233] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.157 qpair failed and we were unable to recover it. 00:30:04.157 [2024-07-14 04:02:22.952430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.157 [2024-07-14 04:02:22.952587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.157 [2024-07-14 04:02:22.952614] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.157 qpair failed and we were unable to recover it. 00:30:04.157 [2024-07-14 04:02:22.952814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.157 [2024-07-14 04:02:22.952983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.157 [2024-07-14 04:02:22.953009] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.157 qpair failed and we were unable to recover it. 00:30:04.157 [2024-07-14 04:02:22.953171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.157 [2024-07-14 04:02:22.953397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.157 [2024-07-14 04:02:22.953425] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.157 qpair failed and we were unable to recover it. 00:30:04.157 [2024-07-14 04:02:22.953618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.157 [2024-07-14 04:02:22.953810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.157 [2024-07-14 04:02:22.953838] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.157 qpair failed and we were unable to recover it. 00:30:04.157 [2024-07-14 04:02:22.954016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.157 [2024-07-14 04:02:22.954313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.157 [2024-07-14 04:02:22.954364] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.157 qpair failed and we were unable to recover it. 
00:30:04.157 [2024-07-14 04:02:22.954654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.157 [2024-07-14 04:02:22.954837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.157 [2024-07-14 04:02:22.954873] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.157 qpair failed and we were unable to recover it. 00:30:04.157 [2024-07-14 04:02:22.955074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.157 [2024-07-14 04:02:22.955226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.157 [2024-07-14 04:02:22.955251] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.157 qpair failed and we were unable to recover it. 00:30:04.157 [2024-07-14 04:02:22.955429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.157 [2024-07-14 04:02:22.955660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.157 [2024-07-14 04:02:22.955706] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.157 qpair failed and we were unable to recover it. 00:30:04.157 [2024-07-14 04:02:22.955915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.157 [2024-07-14 04:02:22.956090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.157 [2024-07-14 04:02:22.956117] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.157 qpair failed and we were unable to recover it. 00:30:04.157 [2024-07-14 04:02:22.956289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.157 [2024-07-14 04:02:22.956546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.157 [2024-07-14 04:02:22.956573] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.157 qpair failed and we were unable to recover it. 00:30:04.157 [2024-07-14 04:02:22.956767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.157 [2024-07-14 04:02:22.956978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.157 [2024-07-14 04:02:22.957004] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.157 qpair failed and we were unable to recover it. 00:30:04.157 [2024-07-14 04:02:22.957208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.157 [2024-07-14 04:02:22.957483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.157 [2024-07-14 04:02:22.957533] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.157 qpair failed and we were unable to recover it. 
00:30:04.157 [2024-07-14 04:02:22.957727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.157 [2024-07-14 04:02:22.957984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.157 [2024-07-14 04:02:22.958012] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.157 qpair failed and we were unable to recover it. 00:30:04.157 [2024-07-14 04:02:22.958209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.157 [2024-07-14 04:02:22.958534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.157 [2024-07-14 04:02:22.958588] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.157 qpair failed and we were unable to recover it. 00:30:04.157 [2024-07-14 04:02:22.958832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.157 [2024-07-14 04:02:22.959027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.157 [2024-07-14 04:02:22.959052] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.157 qpair failed and we were unable to recover it. 00:30:04.157 [2024-07-14 04:02:22.959283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.157 [2024-07-14 04:02:22.959560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.157 [2024-07-14 04:02:22.959587] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.157 qpair failed and we were unable to recover it. 00:30:04.157 [2024-07-14 04:02:22.959782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.157 [2024-07-14 04:02:22.960005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.157 [2024-07-14 04:02:22.960033] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.157 qpair failed and we were unable to recover it. 00:30:04.157 [2024-07-14 04:02:22.960233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.157 [2024-07-14 04:02:22.960445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.157 [2024-07-14 04:02:22.960473] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.157 qpair failed and we were unable to recover it. 00:30:04.157 [2024-07-14 04:02:22.960744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.157 [2024-07-14 04:02:22.960993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.157 [2024-07-14 04:02:22.961021] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.157 qpair failed and we were unable to recover it. 
00:30:04.157 [2024-07-14 04:02:22.961220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.157 [2024-07-14 04:02:22.961369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.157 [2024-07-14 04:02:22.961394] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.157 qpair failed and we were unable to recover it. 00:30:04.157 [2024-07-14 04:02:22.961547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.157 [2024-07-14 04:02:22.961729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.157 [2024-07-14 04:02:22.961753] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.157 qpair failed and we were unable to recover it. 00:30:04.157 [2024-07-14 04:02:22.961931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.157 [2024-07-14 04:02:22.962129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.157 [2024-07-14 04:02:22.962158] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.157 qpair failed and we were unable to recover it. 00:30:04.157 [2024-07-14 04:02:22.962463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.157 [2024-07-14 04:02:22.962746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.157 [2024-07-14 04:02:22.962798] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.157 qpair failed and we were unable to recover it. 00:30:04.157 [2024-07-14 04:02:22.963025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.157 [2024-07-14 04:02:22.963247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.157 [2024-07-14 04:02:22.963275] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.157 qpair failed and we were unable to recover it. 00:30:04.157 [2024-07-14 04:02:22.963474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.157 [2024-07-14 04:02:22.963731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.157 [2024-07-14 04:02:22.963793] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.157 qpair failed and we were unable to recover it. 00:30:04.157 [2024-07-14 04:02:22.963989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.157 [2024-07-14 04:02:22.964221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.157 [2024-07-14 04:02:22.964283] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.157 qpair failed and we were unable to recover it. 
00:30:04.157 [2024-07-14 04:02:22.964498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.157 [2024-07-14 04:02:22.964719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.157 [2024-07-14 04:02:22.964744] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.157 qpair failed and we were unable to recover it. 00:30:04.157 [2024-07-14 04:02:22.964957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.157 [2024-07-14 04:02:22.965177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.157 [2024-07-14 04:02:22.965223] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.157 qpair failed and we were unable to recover it. 00:30:04.157 [2024-07-14 04:02:22.965430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.157 [2024-07-14 04:02:22.965602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.157 [2024-07-14 04:02:22.965634] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.157 qpair failed and we were unable to recover it. 00:30:04.157 [2024-07-14 04:02:22.965810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.157 [2024-07-14 04:02:22.966034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.157 [2024-07-14 04:02:22.966062] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.157 qpair failed and we were unable to recover it. 00:30:04.157 [2024-07-14 04:02:22.966283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.158 [2024-07-14 04:02:22.966482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.158 [2024-07-14 04:02:22.966509] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.158 qpair failed and we were unable to recover it. 00:30:04.158 [2024-07-14 04:02:22.966740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.158 [2024-07-14 04:02:22.966941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.158 [2024-07-14 04:02:22.966966] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.158 qpair failed and we were unable to recover it. 00:30:04.158 [2024-07-14 04:02:22.967199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.158 [2024-07-14 04:02:22.967519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.158 [2024-07-14 04:02:22.967583] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.158 qpair failed and we were unable to recover it. 
00:30:04.158 [2024-07-14 04:02:22.967807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.158 [2024-07-14 04:02:22.967974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.158 [2024-07-14 04:02:22.968002] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.158 qpair failed and we were unable to recover it. 00:30:04.158 [2024-07-14 04:02:22.968228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.158 [2024-07-14 04:02:22.968489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.158 [2024-07-14 04:02:22.968516] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.158 qpair failed and we were unable to recover it. 00:30:04.158 [2024-07-14 04:02:22.968749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.158 [2024-07-14 04:02:22.968930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.158 [2024-07-14 04:02:22.968959] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.158 qpair failed and we were unable to recover it. 00:30:04.158 [2024-07-14 04:02:22.969179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.158 [2024-07-14 04:02:22.969452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.158 [2024-07-14 04:02:22.969479] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.158 qpair failed and we were unable to recover it. 00:30:04.158 [2024-07-14 04:02:22.969689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.158 [2024-07-14 04:02:22.969845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.158 [2024-07-14 04:02:22.969876] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.158 qpair failed and we were unable to recover it. 00:30:04.158 [2024-07-14 04:02:22.970037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.158 [2024-07-14 04:02:22.970233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.158 [2024-07-14 04:02:22.970258] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.158 qpair failed and we were unable to recover it. 00:30:04.158 [2024-07-14 04:02:22.970458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.158 [2024-07-14 04:02:22.970688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.158 [2024-07-14 04:02:22.970715] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.158 qpair failed and we were unable to recover it. 
00:30:04.158 [2024-07-14 04:02:22.970931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.158 [2024-07-14 04:02:22.971110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.158 [2024-07-14 04:02:22.971134] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.158 qpair failed and we were unable to recover it. 00:30:04.158 [2024-07-14 04:02:22.971315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.158 [2024-07-14 04:02:22.971536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.158 [2024-07-14 04:02:22.971563] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.158 qpair failed and we were unable to recover it. 00:30:04.158 [2024-07-14 04:02:22.971793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.158 [2024-07-14 04:02:22.971990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.158 [2024-07-14 04:02:22.972018] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.158 qpair failed and we were unable to recover it. 00:30:04.158 [2024-07-14 04:02:22.972239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.158 [2024-07-14 04:02:22.972559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.158 [2024-07-14 04:02:22.972607] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.158 qpair failed and we were unable to recover it. 00:30:04.158 [2024-07-14 04:02:22.972826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.158 [2024-07-14 04:02:22.973058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.158 [2024-07-14 04:02:22.973086] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.158 qpair failed and we were unable to recover it. 00:30:04.158 [2024-07-14 04:02:22.973402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.158 [2024-07-14 04:02:22.973804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.158 [2024-07-14 04:02:22.973861] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.158 qpair failed and we were unable to recover it. 00:30:04.158 [2024-07-14 04:02:22.974074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.158 [2024-07-14 04:02:22.974303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.158 [2024-07-14 04:02:22.974330] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.158 qpair failed and we were unable to recover it. 
00:30:04.158 [2024-07-14 04:02:22.974553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.158 [2024-07-14 04:02:22.974744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.158 [2024-07-14 04:02:22.974771] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.158 qpair failed and we were unable to recover it. 00:30:04.158 [2024-07-14 04:02:22.974993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.158 [2024-07-14 04:02:22.975169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.158 [2024-07-14 04:02:22.975196] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.158 qpair failed and we were unable to recover it. 00:30:04.158 [2024-07-14 04:02:22.975424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.158 [2024-07-14 04:02:22.975717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.158 [2024-07-14 04:02:22.975764] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.158 qpair failed and we were unable to recover it. 00:30:04.158 [2024-07-14 04:02:22.976007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.158 [2024-07-14 04:02:22.976244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.158 [2024-07-14 04:02:22.976295] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.158 qpair failed and we were unable to recover it. 00:30:04.158 [2024-07-14 04:02:22.976524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.158 [2024-07-14 04:02:22.976706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.158 [2024-07-14 04:02:22.976731] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.158 qpair failed and we were unable to recover it. 00:30:04.158 [2024-07-14 04:02:22.976986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.158 [2024-07-14 04:02:22.977162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.158 [2024-07-14 04:02:22.977187] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.158 qpair failed and we were unable to recover it. 00:30:04.158 [2024-07-14 04:02:22.977371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.158 [2024-07-14 04:02:22.977593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.158 [2024-07-14 04:02:22.977620] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.158 qpair failed and we were unable to recover it. 
00:30:04.158 [2024-07-14 04:02:22.977849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.158 [2024-07-14 04:02:22.978020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.158 [2024-07-14 04:02:22.978047] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.158 qpair failed and we were unable to recover it. 00:30:04.158 [2024-07-14 04:02:22.978249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.158 [2024-07-14 04:02:22.978479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.158 [2024-07-14 04:02:22.978504] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.158 qpair failed and we were unable to recover it. 00:30:04.158 [2024-07-14 04:02:22.978735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.158 [2024-07-14 04:02:22.978961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.158 [2024-07-14 04:02:22.978989] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.158 qpair failed and we were unable to recover it. 00:30:04.158 [2024-07-14 04:02:22.979159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.158 [2024-07-14 04:02:22.979357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.158 [2024-07-14 04:02:22.979384] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.158 qpair failed and we were unable to recover it. 00:30:04.158 [2024-07-14 04:02:22.979597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.158 [2024-07-14 04:02:22.979792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.158 [2024-07-14 04:02:22.979817] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.158 qpair failed and we were unable to recover it. 00:30:04.158 [2024-07-14 04:02:22.980000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.158 [2024-07-14 04:02:22.980180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.158 [2024-07-14 04:02:22.980227] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.158 qpair failed and we were unable to recover it. 00:30:04.158 [2024-07-14 04:02:22.980458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.158 [2024-07-14 04:02:22.980655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.158 [2024-07-14 04:02:22.980702] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.159 qpair failed and we were unable to recover it. 
00:30:04.159 [2024-07-14 04:02:22.980912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.159 [2024-07-14 04:02:22.981134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.159 [2024-07-14 04:02:22.981161] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.159 qpair failed and we were unable to recover it. 00:30:04.159 [2024-07-14 04:02:22.981348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.159 [2024-07-14 04:02:22.981509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.159 [2024-07-14 04:02:22.981578] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.159 qpair failed and we were unable to recover it. 00:30:04.159 [2024-07-14 04:02:22.981765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.159 [2024-07-14 04:02:22.981960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.159 [2024-07-14 04:02:22.981988] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.159 qpair failed and we were unable to recover it. 00:30:04.159 [2024-07-14 04:02:22.982183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.159 [2024-07-14 04:02:22.982406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.159 [2024-07-14 04:02:22.982431] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.159 qpair failed and we were unable to recover it. 00:30:04.159 [2024-07-14 04:02:22.982654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.159 [2024-07-14 04:02:22.982824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.159 [2024-07-14 04:02:22.982851] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.159 qpair failed and we were unable to recover it. 00:30:04.159 [2024-07-14 04:02:22.983061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.159 [2024-07-14 04:02:22.983215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.159 [2024-07-14 04:02:22.983240] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.159 qpair failed and we were unable to recover it. 00:30:04.159 [2024-07-14 04:02:22.983445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.159 [2024-07-14 04:02:22.983701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.159 [2024-07-14 04:02:22.983745] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.159 qpair failed and we were unable to recover it. 
00:30:04.159 [2024-07-14 04:02:22.983951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.159 [2024-07-14 04:02:22.984134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.159 [2024-07-14 04:02:22.984158] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.159 qpair failed and we were unable to recover it. 00:30:04.159 [2024-07-14 04:02:22.984359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.159 [2024-07-14 04:02:22.984682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.159 [2024-07-14 04:02:22.984736] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.159 qpair failed and we were unable to recover it. 00:30:04.159 [2024-07-14 04:02:22.984914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.159 [2024-07-14 04:02:22.985096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.159 [2024-07-14 04:02:22.985136] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.159 qpair failed and we were unable to recover it. 00:30:04.159 [2024-07-14 04:02:22.985336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.159 [2024-07-14 04:02:22.985551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.159 [2024-07-14 04:02:22.985579] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.159 qpair failed and we were unable to recover it. 00:30:04.159 [2024-07-14 04:02:22.985774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.159 [2024-07-14 04:02:22.985995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.159 [2024-07-14 04:02:22.986023] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.159 qpair failed and we were unable to recover it. 00:30:04.159 [2024-07-14 04:02:22.986559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.159 [2024-07-14 04:02:22.986785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.159 [2024-07-14 04:02:22.986814] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.159 qpair failed and we were unable to recover it. 00:30:04.159 [2024-07-14 04:02:22.986992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.159 [2024-07-14 04:02:22.987196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.159 [2024-07-14 04:02:22.987224] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.159 qpair failed and we were unable to recover it. 
00:30:04.159 [2024-07-14 04:02:22.987431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.159 [2024-07-14 04:02:22.987651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.159 [2024-07-14 04:02:22.987699] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.159 qpair failed and we were unable to recover it. 00:30:04.159 [2024-07-14 04:02:22.987936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.159 [2024-07-14 04:02:22.988116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.159 [2024-07-14 04:02:22.988141] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.159 qpair failed and we were unable to recover it. 00:30:04.159 [2024-07-14 04:02:22.988347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.159 [2024-07-14 04:02:22.988582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.159 [2024-07-14 04:02:22.988632] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.159 qpair failed and we were unable to recover it. 00:30:04.159 [2024-07-14 04:02:22.988859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.159 [2024-07-14 04:02:22.989025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.159 [2024-07-14 04:02:22.989050] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.159 qpair failed and we were unable to recover it. 00:30:04.159 [2024-07-14 04:02:22.989242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.159 [2024-07-14 04:02:22.989478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.159 [2024-07-14 04:02:22.989525] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.159 qpair failed and we were unable to recover it. 00:30:04.159 [2024-07-14 04:02:22.989741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.159 [2024-07-14 04:02:22.989948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.159 [2024-07-14 04:02:22.989976] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.159 qpair failed and we were unable to recover it. 00:30:04.159 [2024-07-14 04:02:22.990207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.159 [2024-07-14 04:02:22.990456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.159 [2024-07-14 04:02:22.990501] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.159 qpair failed and we were unable to recover it. 
00:30:04.159 [2024-07-14 04:02:22.990703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.159 [2024-07-14 04:02:22.990900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.159 [2024-07-14 04:02:22.990928] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.159 qpair failed and we were unable to recover it. 00:30:04.159 [2024-07-14 04:02:22.991148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.159 [2024-07-14 04:02:22.991398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.159 [2024-07-14 04:02:22.991449] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.159 qpair failed and we were unable to recover it. 00:30:04.159 [2024-07-14 04:02:22.991642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.159 [2024-07-14 04:02:22.991838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.159 [2024-07-14 04:02:22.991873] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.159 qpair failed and we were unable to recover it. 00:30:04.159 [2024-07-14 04:02:22.992079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.159 [2024-07-14 04:02:22.992253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.159 [2024-07-14 04:02:22.992277] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.159 qpair failed and we were unable to recover it. 00:30:04.159 [2024-07-14 04:02:22.992455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.159 [2024-07-14 04:02:22.992630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.159 [2024-07-14 04:02:22.992654] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.159 qpair failed and we were unable to recover it. 00:30:04.159 [2024-07-14 04:02:22.992833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.159 [2024-07-14 04:02:22.993054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.159 [2024-07-14 04:02:22.993082] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.159 qpair failed and we were unable to recover it. 00:30:04.159 [2024-07-14 04:02:22.993250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.159 [2024-07-14 04:02:22.993465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.159 [2024-07-14 04:02:22.993510] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.159 qpair failed and we were unable to recover it. 
00:30:04.159 [2024-07-14 04:02:22.993716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.159 [2024-07-14 04:02:22.993896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.159 [2024-07-14 04:02:22.993924] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.159 qpair failed and we were unable to recover it. 00:30:04.159 [2024-07-14 04:02:22.994127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.159 [2024-07-14 04:02:22.994345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.159 [2024-07-14 04:02:22.994372] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.159 qpair failed and we were unable to recover it. 00:30:04.159 [2024-07-14 04:02:22.994590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.159 [2024-07-14 04:02:22.994791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.160 [2024-07-14 04:02:22.994817] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.160 qpair failed and we were unable to recover it. 00:30:04.160 [2024-07-14 04:02:22.995015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.160 [2024-07-14 04:02:22.995219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.160 [2024-07-14 04:02:22.995246] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.160 qpair failed and we were unable to recover it. 00:30:04.160 [2024-07-14 04:02:22.995469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.160 [2024-07-14 04:02:22.995683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.160 [2024-07-14 04:02:22.995706] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.160 qpair failed and we were unable to recover it. 00:30:04.160 [2024-07-14 04:02:22.995912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.160 [2024-07-14 04:02:22.996124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.160 [2024-07-14 04:02:22.996152] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.160 qpair failed and we were unable to recover it. 00:30:04.160 [2024-07-14 04:02:22.996359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.160 [2024-07-14 04:02:22.996534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.160 [2024-07-14 04:02:22.996558] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.160 qpair failed and we were unable to recover it. 
00:30:04.160 [2024-07-14 04:02:22.996734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.160 [2024-07-14 04:02:22.996918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.160 [2024-07-14 04:02:22.996944] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.160 qpair failed and we were unable to recover it. 00:30:04.160 [2024-07-14 04:02:22.997129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.160 [2024-07-14 04:02:22.997286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.160 [2024-07-14 04:02:22.997310] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.160 qpair failed and we were unable to recover it. 00:30:04.160 [2024-07-14 04:02:22.997464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.160 [2024-07-14 04:02:22.997639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.160 [2024-07-14 04:02:22.997664] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.160 qpair failed and we were unable to recover it. 00:30:04.160 [2024-07-14 04:02:22.997847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.160 [2024-07-14 04:02:22.998024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.160 [2024-07-14 04:02:22.998049] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.160 qpair failed and we were unable to recover it. 00:30:04.160 [2024-07-14 04:02:22.998233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.160 [2024-07-14 04:02:22.998388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.160 [2024-07-14 04:02:22.998412] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.160 qpair failed and we were unable to recover it. 00:30:04.160 [2024-07-14 04:02:22.998567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.160 [2024-07-14 04:02:22.998724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.160 [2024-07-14 04:02:22.998748] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.160 qpair failed and we were unable to recover it. 00:30:04.160 [2024-07-14 04:02:22.998922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.160 [2024-07-14 04:02:22.999100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.160 [2024-07-14 04:02:22.999125] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.160 qpair failed and we were unable to recover it. 
00:30:04.160 [2024-07-14 04:02:22.999333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.160 [2024-07-14 04:02:22.999543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.160 [2024-07-14 04:02:22.999568] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.160 qpair failed and we were unable to recover it. 00:30:04.160 [2024-07-14 04:02:22.999767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.160 [2024-07-14 04:02:22.999979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.160 [2024-07-14 04:02:23.000005] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.160 qpair failed and we were unable to recover it. 00:30:04.160 [2024-07-14 04:02:23.000188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.160 [2024-07-14 04:02:23.000363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.160 [2024-07-14 04:02:23.000388] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.160 qpair failed and we were unable to recover it. 00:30:04.160 [2024-07-14 04:02:23.000573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.160 [2024-07-14 04:02:23.000749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.160 [2024-07-14 04:02:23.000773] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.160 qpair failed and we were unable to recover it. 00:30:04.160 [2024-07-14 04:02:23.000920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.160 [2024-07-14 04:02:23.001102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.160 [2024-07-14 04:02:23.001126] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.160 qpair failed and we were unable to recover it. 00:30:04.160 [2024-07-14 04:02:23.001287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.160 [2024-07-14 04:02:23.001457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.160 [2024-07-14 04:02:23.001481] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.160 qpair failed and we were unable to recover it. 00:30:04.160 [2024-07-14 04:02:23.001661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.160 [2024-07-14 04:02:23.001841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.160 [2024-07-14 04:02:23.001871] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.160 qpair failed and we were unable to recover it. 
00:30:04.160 [2024-07-14 04:02:23.002057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.160 [2024-07-14 04:02:23.002213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.160 [2024-07-14 04:02:23.002237] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.160 qpair failed and we were unable to recover it. 00:30:04.160 [2024-07-14 04:02:23.002447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.160 [2024-07-14 04:02:23.002644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.160 [2024-07-14 04:02:23.002668] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.160 qpair failed and we were unable to recover it. 00:30:04.160 [2024-07-14 04:02:23.002851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.160 [2024-07-14 04:02:23.003075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.160 [2024-07-14 04:02:23.003100] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.160 qpair failed and we were unable to recover it. 00:30:04.160 [2024-07-14 04:02:23.003257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.160 [2024-07-14 04:02:23.003434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.160 [2024-07-14 04:02:23.003459] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.160 qpair failed and we were unable to recover it. 00:30:04.160 [2024-07-14 04:02:23.003664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.160 [2024-07-14 04:02:23.003851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.160 [2024-07-14 04:02:23.003889] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.160 qpair failed and we were unable to recover it. 00:30:04.160 [2024-07-14 04:02:23.004067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.160 [2024-07-14 04:02:23.004221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.160 [2024-07-14 04:02:23.004246] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.160 qpair failed and we were unable to recover it. 00:30:04.160 [2024-07-14 04:02:23.004404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.160 [2024-07-14 04:02:23.004588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.161 [2024-07-14 04:02:23.004612] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.161 qpair failed and we were unable to recover it. 
00:30:04.161 [2024-07-14 04:02:23.004795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.161 [2024-07-14 04:02:23.004999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.161 [2024-07-14 04:02:23.005024] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.161 qpair failed and we were unable to recover it. 00:30:04.161 [2024-07-14 04:02:23.005181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.161 [2024-07-14 04:02:23.005356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.161 [2024-07-14 04:02:23.005380] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.161 qpair failed and we were unable to recover it. 00:30:04.161 [2024-07-14 04:02:23.005565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.161 [2024-07-14 04:02:23.005723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.161 [2024-07-14 04:02:23.005753] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.161 qpair failed and we were unable to recover it. 00:30:04.161 [2024-07-14 04:02:23.005940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.161 [2024-07-14 04:02:23.006116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.161 [2024-07-14 04:02:23.006154] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.161 qpair failed and we were unable to recover it. 00:30:04.161 [2024-07-14 04:02:23.006334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.161 [2024-07-14 04:02:23.006492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.161 [2024-07-14 04:02:23.006516] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.161 qpair failed and we were unable to recover it. 00:30:04.161 [2024-07-14 04:02:23.006721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.161 [2024-07-14 04:02:23.006896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.161 [2024-07-14 04:02:23.006921] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.161 qpair failed and we were unable to recover it. 00:30:04.161 [2024-07-14 04:02:23.007071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.161 [2024-07-14 04:02:23.007260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.161 [2024-07-14 04:02:23.007285] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.161 qpair failed and we were unable to recover it. 
00:30:04.161 [2024-07-14 04:02:23.007468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.161 [2024-07-14 04:02:23.007651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.161 [2024-07-14 04:02:23.007676] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.161 qpair failed and we were unable to recover it. 00:30:04.161 [2024-07-14 04:02:23.007833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.161 [2024-07-14 04:02:23.008056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.161 [2024-07-14 04:02:23.008081] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.161 qpair failed and we were unable to recover it. 00:30:04.161 [2024-07-14 04:02:23.008236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.161 [2024-07-14 04:02:23.008449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.161 [2024-07-14 04:02:23.008474] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.161 qpair failed and we were unable to recover it. 00:30:04.161 [2024-07-14 04:02:23.008660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.161 [2024-07-14 04:02:23.008834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.161 [2024-07-14 04:02:23.008858] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.161 qpair failed and we were unable to recover it. 00:30:04.161 [2024-07-14 04:02:23.009057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.161 [2024-07-14 04:02:23.009215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.161 [2024-07-14 04:02:23.009240] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.161 qpair failed and we were unable to recover it. 00:30:04.161 [2024-07-14 04:02:23.009439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.161 [2024-07-14 04:02:23.009584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.161 [2024-07-14 04:02:23.009608] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.161 qpair failed and we were unable to recover it. 00:30:04.161 [2024-07-14 04:02:23.009762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.161 [2024-07-14 04:02:23.009966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.161 [2024-07-14 04:02:23.009994] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.161 qpair failed and we were unable to recover it. 
00:30:04.161 [2024-07-14 04:02:23.010153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.161 [2024-07-14 04:02:23.010354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.161 [2024-07-14 04:02:23.010378] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.161 qpair failed and we were unable to recover it. 00:30:04.161 [2024-07-14 04:02:23.010556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.161 [2024-07-14 04:02:23.010731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.161 [2024-07-14 04:02:23.010755] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.161 qpair failed and we were unable to recover it. 00:30:04.161 [2024-07-14 04:02:23.010926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.161 [2024-07-14 04:02:23.011100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.161 [2024-07-14 04:02:23.011124] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.161 qpair failed and we were unable to recover it. 00:30:04.161 [2024-07-14 04:02:23.011328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.161 [2024-07-14 04:02:23.011508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.161 [2024-07-14 04:02:23.011534] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.161 qpair failed and we were unable to recover it. 00:30:04.161 [2024-07-14 04:02:23.011739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.161 [2024-07-14 04:02:23.011921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.161 [2024-07-14 04:02:23.011946] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.161 qpair failed and we were unable to recover it. 00:30:04.161 [2024-07-14 04:02:23.012126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.161 [2024-07-14 04:02:23.012301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.161 [2024-07-14 04:02:23.012325] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.161 qpair failed and we were unable to recover it. 00:30:04.161 [2024-07-14 04:02:23.012539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.161 [2024-07-14 04:02:23.012719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.161 [2024-07-14 04:02:23.012745] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.161 qpair failed and we were unable to recover it. 
00:30:04.161 [2024-07-14 04:02:23.012927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.161 [2024-07-14 04:02:23.013088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.161 [2024-07-14 04:02:23.013112] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.161 qpair failed and we were unable to recover it. 00:30:04.161 [2024-07-14 04:02:23.013291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.161 [2024-07-14 04:02:23.013493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.161 [2024-07-14 04:02:23.013517] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.161 qpair failed and we were unable to recover it. 00:30:04.161 [2024-07-14 04:02:23.013675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.161 [2024-07-14 04:02:23.013822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.161 [2024-07-14 04:02:23.013846] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.161 qpair failed and we were unable to recover it. 00:30:04.161 [2024-07-14 04:02:23.014042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.161 [2024-07-14 04:02:23.014221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.161 [2024-07-14 04:02:23.014246] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.161 qpair failed and we were unable to recover it. 00:30:04.161 [2024-07-14 04:02:23.014422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.161 [2024-07-14 04:02:23.014589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.161 [2024-07-14 04:02:23.014614] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.161 qpair failed and we were unable to recover it. 00:30:04.161 [2024-07-14 04:02:23.014758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.161 [2024-07-14 04:02:23.014939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.161 [2024-07-14 04:02:23.014965] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.161 qpair failed and we were unable to recover it. 00:30:04.161 [2024-07-14 04:02:23.015152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.161 [2024-07-14 04:02:23.015305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.161 [2024-07-14 04:02:23.015329] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.161 qpair failed and we were unable to recover it. 
00:30:04.161 [2024-07-14 04:02:23.015509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.161 [2024-07-14 04:02:23.015684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.161 [2024-07-14 04:02:23.015708] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.161 qpair failed and we were unable to recover it. 00:30:04.161 [2024-07-14 04:02:23.015890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.161 [2024-07-14 04:02:23.016074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.161 [2024-07-14 04:02:23.016099] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.161 qpair failed and we were unable to recover it. 00:30:04.161 [2024-07-14 04:02:23.016272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.162 [2024-07-14 04:02:23.016449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.162 [2024-07-14 04:02:23.016474] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.162 qpair failed and we were unable to recover it. 00:30:04.162 [2024-07-14 04:02:23.016656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.162 [2024-07-14 04:02:23.016826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.162 [2024-07-14 04:02:23.016851] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.162 qpair failed and we were unable to recover it. 00:30:04.162 [2024-07-14 04:02:23.017037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.162 [2024-07-14 04:02:23.017238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.162 [2024-07-14 04:02:23.017263] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.162 qpair failed and we were unable to recover it. 00:30:04.162 [2024-07-14 04:02:23.017444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.162 [2024-07-14 04:02:23.017662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.162 [2024-07-14 04:02:23.017686] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.162 qpair failed and we were unable to recover it. 00:30:04.162 [2024-07-14 04:02:23.017888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.162 [2024-07-14 04:02:23.018074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.162 [2024-07-14 04:02:23.018099] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.162 qpair failed and we were unable to recover it. 
00:30:04.162 [2024-07-14 04:02:23.018257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.162 [2024-07-14 04:02:23.018433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.162 [2024-07-14 04:02:23.018457] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.162 qpair failed and we were unable to recover it. 00:30:04.162 [2024-07-14 04:02:23.018631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.162 [2024-07-14 04:02:23.018836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.162 [2024-07-14 04:02:23.018860] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.162 qpair failed and we were unable to recover it. 00:30:04.162 [2024-07-14 04:02:23.019058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.162 [2024-07-14 04:02:23.019211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.162 [2024-07-14 04:02:23.019235] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.162 qpair failed and we were unable to recover it. 00:30:04.162 [2024-07-14 04:02:23.019414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.162 [2024-07-14 04:02:23.019614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.162 [2024-07-14 04:02:23.019639] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.162 qpair failed and we were unable to recover it. 00:30:04.162 [2024-07-14 04:02:23.019792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.162 [2024-07-14 04:02:23.019968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.162 [2024-07-14 04:02:23.019997] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.162 qpair failed and we were unable to recover it. 00:30:04.162 [2024-07-14 04:02:23.020180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.162 [2024-07-14 04:02:23.020359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.162 [2024-07-14 04:02:23.020383] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.162 qpair failed and we were unable to recover it. 00:30:04.162 [2024-07-14 04:02:23.020534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.162 [2024-07-14 04:02:23.020734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.162 [2024-07-14 04:02:23.020758] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.162 qpair failed and we were unable to recover it. 
00:30:04.162 [2024-07-14 04:02:23.020943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.162 [2024-07-14 04:02:23.021121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.162 [2024-07-14 04:02:23.021146] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.162 qpair failed and we were unable to recover it. 00:30:04.162 [2024-07-14 04:02:23.021299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.162 [2024-07-14 04:02:23.021450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.162 [2024-07-14 04:02:23.021476] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.162 qpair failed and we were unable to recover it. 00:30:04.162 [2024-07-14 04:02:23.021679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.162 [2024-07-14 04:02:23.021862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.162 [2024-07-14 04:02:23.021892] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.162 qpair failed and we were unable to recover it. 00:30:04.162 [2024-07-14 04:02:23.022102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.162 [2024-07-14 04:02:23.022276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.162 [2024-07-14 04:02:23.022301] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.162 qpair failed and we were unable to recover it. 00:30:04.162 [2024-07-14 04:02:23.022494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.162 [2024-07-14 04:02:23.022671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.162 [2024-07-14 04:02:23.022695] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.162 qpair failed and we were unable to recover it. 00:30:04.162 [2024-07-14 04:02:23.022849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.162 [2024-07-14 04:02:23.023043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.162 [2024-07-14 04:02:23.023068] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.162 qpair failed and we were unable to recover it. 00:30:04.162 [2024-07-14 04:02:23.023243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.162 [2024-07-14 04:02:23.023416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.162 [2024-07-14 04:02:23.023440] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.162 qpair failed and we were unable to recover it. 
00:30:04.162 [2024-07-14 04:02:23.023587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.162 [2024-07-14 04:02:23.023737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.162 [2024-07-14 04:02:23.023762] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.162 qpair failed and we were unable to recover it. 00:30:04.162 [2024-07-14 04:02:23.023966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.162 [2024-07-14 04:02:23.024123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.162 [2024-07-14 04:02:23.024148] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.162 qpair failed and we were unable to recover it. 00:30:04.162 [2024-07-14 04:02:23.024332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.162 [2024-07-14 04:02:23.024535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.162 [2024-07-14 04:02:23.024560] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.162 qpair failed and we were unable to recover it. 00:30:04.162 [2024-07-14 04:02:23.024714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.162 [2024-07-14 04:02:23.024919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.162 [2024-07-14 04:02:23.024945] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.162 qpair failed and we were unable to recover it. 00:30:04.162 [2024-07-14 04:02:23.025148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.162 [2024-07-14 04:02:23.025293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.162 [2024-07-14 04:02:23.025317] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.162 qpair failed and we were unable to recover it. 00:30:04.162 [2024-07-14 04:02:23.025485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.162 [2024-07-14 04:02:23.025638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.162 [2024-07-14 04:02:23.025672] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.162 qpair failed and we were unable to recover it. 00:30:04.162 [2024-07-14 04:02:23.025852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.162 [2024-07-14 04:02:23.026127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.162 [2024-07-14 04:02:23.026152] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.162 qpair failed and we were unable to recover it. 
00:30:04.162 [2024-07-14 04:02:23.026309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.162 [2024-07-14 04:02:23.026484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.162 [2024-07-14 04:02:23.026508] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.162 qpair failed and we were unable to recover it. 00:30:04.162 [2024-07-14 04:02:23.026709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.162 [2024-07-14 04:02:23.026890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.162 [2024-07-14 04:02:23.026926] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.162 qpair failed and we were unable to recover it. 00:30:04.162 [2024-07-14 04:02:23.027077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.162 [2024-07-14 04:02:23.027257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.162 [2024-07-14 04:02:23.027283] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.162 qpair failed and we were unable to recover it. 00:30:04.162 [2024-07-14 04:02:23.027460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.162 [2024-07-14 04:02:23.027643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.162 [2024-07-14 04:02:23.027667] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.162 qpair failed and we were unable to recover it. 00:30:04.162 [2024-07-14 04:02:23.027846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.162 [2024-07-14 04:02:23.028036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.162 [2024-07-14 04:02:23.028061] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.163 qpair failed and we were unable to recover it. 00:30:04.163 [2024-07-14 04:02:23.028244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.163 [2024-07-14 04:02:23.028424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.163 [2024-07-14 04:02:23.028448] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.163 qpair failed and we were unable to recover it. 00:30:04.163 [2024-07-14 04:02:23.028632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.163 [2024-07-14 04:02:23.028809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.163 [2024-07-14 04:02:23.028834] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.163 qpair failed and we were unable to recover it. 
00:30:04.163 [2024-07-14 04:02:23.029007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.163 [2024-07-14 04:02:23.029160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.163 [2024-07-14 04:02:23.029185] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.163 qpair failed and we were unable to recover it. 00:30:04.163 [2024-07-14 04:02:23.029391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.163 [2024-07-14 04:02:23.029542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.163 [2024-07-14 04:02:23.029566] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.163 qpair failed and we were unable to recover it. 00:30:04.163 [2024-07-14 04:02:23.029747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.163 [2024-07-14 04:02:23.029949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.163 [2024-07-14 04:02:23.029974] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.163 qpair failed and we were unable to recover it. 00:30:04.163 [2024-07-14 04:02:23.030154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.163 [2024-07-14 04:02:23.030335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.163 [2024-07-14 04:02:23.030359] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.163 qpair failed and we were unable to recover it. 00:30:04.163 [2024-07-14 04:02:23.030501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.163 [2024-07-14 04:02:23.030654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.163 [2024-07-14 04:02:23.030678] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.163 qpair failed and we were unable to recover it. 00:30:04.163 [2024-07-14 04:02:23.030886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.163 [2024-07-14 04:02:23.031064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.163 [2024-07-14 04:02:23.031088] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.163 qpair failed and we were unable to recover it. 00:30:04.163 [2024-07-14 04:02:23.031296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.163 [2024-07-14 04:02:23.031477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.163 [2024-07-14 04:02:23.031501] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.163 qpair failed and we were unable to recover it. 
00:30:04.163 [2024-07-14 04:02:23.031650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.163 [2024-07-14 04:02:23.031825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.163 [2024-07-14 04:02:23.031849] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.163 qpair failed and we were unable to recover it. 00:30:04.163 [2024-07-14 04:02:23.032043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.163 [2024-07-14 04:02:23.032244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.163 [2024-07-14 04:02:23.032269] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.163 qpair failed and we were unable to recover it. 00:30:04.163 [2024-07-14 04:02:23.032447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.163 [2024-07-14 04:02:23.032598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.163 [2024-07-14 04:02:23.032623] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.163 qpair failed and we were unable to recover it. 00:30:04.163 [2024-07-14 04:02:23.032783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.163 [2024-07-14 04:02:23.032962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.163 [2024-07-14 04:02:23.032987] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.163 qpair failed and we were unable to recover it. 00:30:04.163 [2024-07-14 04:02:23.033146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.163 [2024-07-14 04:02:23.033320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.163 [2024-07-14 04:02:23.033345] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.163 qpair failed and we were unable to recover it. 00:30:04.163 [2024-07-14 04:02:23.033547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.163 [2024-07-14 04:02:23.033755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.163 [2024-07-14 04:02:23.033786] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9544000b90 with addr=10.0.0.2, port=4420 00:30:04.163 qpair failed and we were unable to recover it. 00:30:04.163 [2024-07-14 04:02:23.034009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.163 [2024-07-14 04:02:23.034167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.163 [2024-07-14 04:02:23.034194] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9544000b90 with addr=10.0.0.2, port=4420 00:30:04.163 qpair failed and we were unable to recover it. 
00:30:04.163 [2024-07-14 04:02:23.034352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.163 [2024-07-14 04:02:23.034566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.163 [2024-07-14 04:02:23.034593] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9544000b90 with addr=10.0.0.2, port=4420 00:30:04.163 qpair failed and we were unable to recover it. 00:30:04.163 [2024-07-14 04:02:23.034781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.163 [2024-07-14 04:02:23.034992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.163 [2024-07-14 04:02:23.035021] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9544000b90 with addr=10.0.0.2, port=4420 00:30:04.163 qpair failed and we were unable to recover it. 00:30:04.163 [2024-07-14 04:02:23.035205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.163 [2024-07-14 04:02:23.035389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.163 [2024-07-14 04:02:23.035416] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9544000b90 with addr=10.0.0.2, port=4420 00:30:04.163 qpair failed and we were unable to recover it. 00:30:04.163 [2024-07-14 04:02:23.035605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.163 [2024-07-14 04:02:23.035800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.163 [2024-07-14 04:02:23.035829] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9544000b90 with addr=10.0.0.2, port=4420 00:30:04.163 qpair failed and we were unable to recover it. 00:30:04.163 [2024-07-14 04:02:23.036028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.163 [2024-07-14 04:02:23.036216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.163 [2024-07-14 04:02:23.036244] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9544000b90 with addr=10.0.0.2, port=4420 00:30:04.163 qpair failed and we were unable to recover it. 00:30:04.163 [2024-07-14 04:02:23.036433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.163 [2024-07-14 04:02:23.036616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.163 [2024-07-14 04:02:23.036643] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9544000b90 with addr=10.0.0.2, port=4420 00:30:04.163 qpair failed and we were unable to recover it. 00:30:04.163 [2024-07-14 04:02:23.036830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.163 [2024-07-14 04:02:23.037010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.163 [2024-07-14 04:02:23.037038] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9544000b90 with addr=10.0.0.2, port=4420 00:30:04.163 qpair failed and we were unable to recover it. 
00:30:04.163 [2024-07-14 04:02:23.037229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.163 [2024-07-14 04:02:23.037437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.163 [2024-07-14 04:02:23.037465] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9544000b90 with addr=10.0.0.2, port=4420 00:30:04.163 qpair failed and we were unable to recover it. 00:30:04.163 [2024-07-14 04:02:23.037631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.163 [2024-07-14 04:02:23.037846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.163 [2024-07-14 04:02:23.037883] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9544000b90 with addr=10.0.0.2, port=4420 00:30:04.163 qpair failed and we were unable to recover it. 00:30:04.163 [2024-07-14 04:02:23.038084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.163 [2024-07-14 04:02:23.038274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.163 [2024-07-14 04:02:23.038301] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9544000b90 with addr=10.0.0.2, port=4420 00:30:04.163 qpair failed and we were unable to recover it. 00:30:04.163 [2024-07-14 04:02:23.038489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.163 [2024-07-14 04:02:23.038666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.163 [2024-07-14 04:02:23.038694] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9544000b90 with addr=10.0.0.2, port=4420 00:30:04.163 qpair failed and we were unable to recover it. 00:30:04.163 [2024-07-14 04:02:23.038856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.163 [2024-07-14 04:02:23.039050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.163 [2024-07-14 04:02:23.039079] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9544000b90 with addr=10.0.0.2, port=4420 00:30:04.163 qpair failed and we were unable to recover it. 00:30:04.163 [2024-07-14 04:02:23.039267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.163 [2024-07-14 04:02:23.039432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.163 [2024-07-14 04:02:23.039459] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9544000b90 with addr=10.0.0.2, port=4420 00:30:04.163 qpair failed and we were unable to recover it. 00:30:04.163 [2024-07-14 04:02:23.039677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.163 [2024-07-14 04:02:23.042879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.163 [2024-07-14 04:02:23.042910] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9544000b90 with addr=10.0.0.2, port=4420 00:30:04.163 qpair failed and we were unable to recover it. 
00:30:04.163 [2024-07-14 04:02:23.043141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.164 [2024-07-14 04:02:23.043354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.164 [2024-07-14 04:02:23.043382] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9544000b90 with addr=10.0.0.2, port=4420 00:30:04.164 qpair failed and we were unable to recover it. 00:30:04.164 [2024-07-14 04:02:23.043567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.164 [2024-07-14 04:02:23.043777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.164 [2024-07-14 04:02:23.043805] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9544000b90 with addr=10.0.0.2, port=4420 00:30:04.164 qpair failed and we were unable to recover it. 00:30:04.164 [2024-07-14 04:02:23.044018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.164 [2024-07-14 04:02:23.044209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.164 [2024-07-14 04:02:23.044237] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9544000b90 with addr=10.0.0.2, port=4420 00:30:04.164 qpair failed and we were unable to recover it. 00:30:04.164 [2024-07-14 04:02:23.044453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.164 [2024-07-14 04:02:23.044622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.164 [2024-07-14 04:02:23.044648] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9544000b90 with addr=10.0.0.2, port=4420 00:30:04.164 qpair failed and we were unable to recover it. 00:30:04.164 [2024-07-14 04:02:23.044832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.164 [2024-07-14 04:02:23.045031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.164 [2024-07-14 04:02:23.045058] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9544000b90 with addr=10.0.0.2, port=4420 00:30:04.164 qpair failed and we were unable to recover it. 00:30:04.164 [2024-07-14 04:02:23.045239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.164 [2024-07-14 04:02:23.045426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.164 [2024-07-14 04:02:23.045454] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9544000b90 with addr=10.0.0.2, port=4420 00:30:04.164 qpair failed and we were unable to recover it. 00:30:04.164 [2024-07-14 04:02:23.045641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.164 [2024-07-14 04:02:23.045830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.164 [2024-07-14 04:02:23.045857] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9544000b90 with addr=10.0.0.2, port=4420 00:30:04.164 qpair failed and we were unable to recover it. 
00:30:04.164 [2024-07-14 04:02:23.046049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.164 [2024-07-14 04:02:23.046231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.164 [2024-07-14 04:02:23.046259] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9544000b90 with addr=10.0.0.2, port=4420 00:30:04.164 qpair failed and we were unable to recover it. 00:30:04.164 [2024-07-14 04:02:23.046416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.164 [2024-07-14 04:02:23.046629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.164 [2024-07-14 04:02:23.046656] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9544000b90 with addr=10.0.0.2, port=4420 00:30:04.164 qpair failed and we were unable to recover it. 00:30:04.164 [2024-07-14 04:02:23.046840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.164 [2024-07-14 04:02:23.047060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.164 [2024-07-14 04:02:23.047088] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9544000b90 with addr=10.0.0.2, port=4420 00:30:04.164 qpair failed and we were unable to recover it. 00:30:04.164 [2024-07-14 04:02:23.047249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.164 [2024-07-14 04:02:23.047440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.164 [2024-07-14 04:02:23.047468] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9544000b90 with addr=10.0.0.2, port=4420 00:30:04.164 qpair failed and we were unable to recover it. 00:30:04.164 [2024-07-14 04:02:23.047633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.164 [2024-07-14 04:02:23.047793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.164 [2024-07-14 04:02:23.047820] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9544000b90 with addr=10.0.0.2, port=4420 00:30:04.164 qpair failed and we were unable to recover it. 00:30:04.164 [2024-07-14 04:02:23.048046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.164 [2024-07-14 04:02:23.048232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.164 [2024-07-14 04:02:23.048260] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9544000b90 with addr=10.0.0.2, port=4420 00:30:04.164 qpair failed and we were unable to recover it. 00:30:04.164 [2024-07-14 04:02:23.048443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.164 [2024-07-14 04:02:23.048730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.164 [2024-07-14 04:02:23.048757] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9544000b90 with addr=10.0.0.2, port=4420 00:30:04.164 qpair failed and we were unable to recover it. 
00:30:04.164 [2024-07-14 04:02:23.048960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.164 [2024-07-14 04:02:23.049159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.164 [2024-07-14 04:02:23.049186] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9544000b90 with addr=10.0.0.2, port=4420 00:30:04.164 qpair failed and we were unable to recover it. 00:30:04.164 [2024-07-14 04:02:23.049402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.164 [2024-07-14 04:02:23.049590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.164 [2024-07-14 04:02:23.049617] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9544000b90 with addr=10.0.0.2, port=4420 00:30:04.164 qpair failed and we were unable to recover it. 00:30:04.164 [2024-07-14 04:02:23.049804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.164 [2024-07-14 04:02:23.050001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.164 [2024-07-14 04:02:23.050028] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9544000b90 with addr=10.0.0.2, port=4420 00:30:04.164 qpair failed and we were unable to recover it. 00:30:04.164 [2024-07-14 04:02:23.050221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.164 [2024-07-14 04:02:23.050440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.164 [2024-07-14 04:02:23.050469] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9544000b90 with addr=10.0.0.2, port=4420 00:30:04.164 qpair failed and we were unable to recover it. 00:30:04.164 [2024-07-14 04:02:23.050685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.164 [2024-07-14 04:02:23.050899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.164 [2024-07-14 04:02:23.050926] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9544000b90 with addr=10.0.0.2, port=4420 00:30:04.164 qpair failed and we were unable to recover it. 00:30:04.164 [2024-07-14 04:02:23.052892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.164 [2024-07-14 04:02:23.053160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.164 [2024-07-14 04:02:23.053189] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9544000b90 with addr=10.0.0.2, port=4420 00:30:04.164 qpair failed and we were unable to recover it. 00:30:04.164 [2024-07-14 04:02:23.053356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.164 [2024-07-14 04:02:23.053569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.164 [2024-07-14 04:02:23.053597] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9544000b90 with addr=10.0.0.2, port=4420 00:30:04.164 qpair failed and we were unable to recover it. 
00:30:04.164 [2024-07-14 04:02:23.053795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.164 [2024-07-14 04:02:23.054060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.164 [2024-07-14 04:02:23.054088] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9544000b90 with addr=10.0.0.2, port=4420 00:30:04.164 qpair failed and we were unable to recover it. 00:30:04.164 [2024-07-14 04:02:23.054277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.164 [2024-07-14 04:02:23.054468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.164 [2024-07-14 04:02:23.054496] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9544000b90 with addr=10.0.0.2, port=4420 00:30:04.164 qpair failed and we were unable to recover it. 00:30:04.164 [2024-07-14 04:02:23.054684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.164 [2024-07-14 04:02:23.054896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.164 [2024-07-14 04:02:23.054924] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9544000b90 with addr=10.0.0.2, port=4420 00:30:04.164 qpair failed and we were unable to recover it. 00:30:04.164 [2024-07-14 04:02:23.055105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.164 [2024-07-14 04:02:23.055301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.164 [2024-07-14 04:02:23.055329] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9544000b90 with addr=10.0.0.2, port=4420 00:30:04.164 qpair failed and we were unable to recover it. 00:30:04.164 [2024-07-14 04:02:23.055549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.164 [2024-07-14 04:02:23.055710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.165 [2024-07-14 04:02:23.055737] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9544000b90 with addr=10.0.0.2, port=4420 00:30:04.165 qpair failed and we were unable to recover it. 00:30:04.165 [2024-07-14 04:02:23.055922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.165 [2024-07-14 04:02:23.056137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.165 [2024-07-14 04:02:23.056165] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9544000b90 with addr=10.0.0.2, port=4420 00:30:04.165 qpair failed and we were unable to recover it. 00:30:04.165 [2024-07-14 04:02:23.056358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.165 [2024-07-14 04:02:23.056543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.165 [2024-07-14 04:02:23.056570] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9544000b90 with addr=10.0.0.2, port=4420 00:30:04.165 qpair failed and we were unable to recover it. 
00:30:04.165 [2024-07-14 04:02:23.056807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.165 [2024-07-14 04:02:23.056963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.165 [2024-07-14 04:02:23.056993] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9544000b90 with addr=10.0.0.2, port=4420 00:30:04.165 qpair failed and we were unable to recover it. 00:30:04.165 [2024-07-14 04:02:23.057181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.165 [2024-07-14 04:02:23.057372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.165 [2024-07-14 04:02:23.057399] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9544000b90 with addr=10.0.0.2, port=4420 00:30:04.165 qpair failed and we were unable to recover it. 00:30:04.165 [2024-07-14 04:02:23.057609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.165 [2024-07-14 04:02:23.057827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.165 [2024-07-14 04:02:23.057854] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9544000b90 with addr=10.0.0.2, port=4420 00:30:04.165 qpair failed and we were unable to recover it. 00:30:04.165 [2024-07-14 04:02:23.058070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.165 [2024-07-14 04:02:23.061891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.165 [2024-07-14 04:02:23.061922] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9544000b90 with addr=10.0.0.2, port=4420 00:30:04.165 qpair failed and we were unable to recover it. 00:30:04.165 [2024-07-14 04:02:23.062189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.165 [2024-07-14 04:02:23.062371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.165 [2024-07-14 04:02:23.062413] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9544000b90 with addr=10.0.0.2, port=4420 00:30:04.165 qpair failed and we were unable to recover it. 00:30:04.165 [2024-07-14 04:02:23.062621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.165 [2024-07-14 04:02:23.062813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.165 [2024-07-14 04:02:23.062839] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9544000b90 with addr=10.0.0.2, port=4420 00:30:04.165 qpair failed and we were unable to recover it. 00:30:04.165 [2024-07-14 04:02:23.062991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.165 [2024-07-14 04:02:23.063207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.165 [2024-07-14 04:02:23.063233] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9544000b90 with addr=10.0.0.2, port=4420 00:30:04.165 qpair failed and we were unable to recover it. 
00:30:04.165 [2024-07-14 04:02:23.063410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.165 [2024-07-14 04:02:23.063590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.165 [2024-07-14 04:02:23.063617] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9544000b90 with addr=10.0.0.2, port=4420 00:30:04.165 qpair failed and we were unable to recover it. 00:30:04.165 [2024-07-14 04:02:23.063802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.165 [2024-07-14 04:02:23.063973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.165 [2024-07-14 04:02:23.064000] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9544000b90 with addr=10.0.0.2, port=4420 00:30:04.165 qpair failed and we were unable to recover it. 00:30:04.165 [2024-07-14 04:02:23.064208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.165 [2024-07-14 04:02:23.064361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.165 [2024-07-14 04:02:23.064403] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9544000b90 with addr=10.0.0.2, port=4420 00:30:04.165 qpair failed and we were unable to recover it. 00:30:04.165 [2024-07-14 04:02:23.064628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.165 [2024-07-14 04:02:23.064845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.165 [2024-07-14 04:02:23.064878] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9544000b90 with addr=10.0.0.2, port=4420 00:30:04.165 qpair failed and we were unable to recover it. 00:30:04.165 [2024-07-14 04:02:23.065070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.165 [2024-07-14 04:02:23.065279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.165 [2024-07-14 04:02:23.065307] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9544000b90 with addr=10.0.0.2, port=4420 00:30:04.165 qpair failed and we were unable to recover it. 00:30:04.165 [2024-07-14 04:02:23.065536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.165 [2024-07-14 04:02:23.065709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.165 [2024-07-14 04:02:23.065758] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9544000b90 with addr=10.0.0.2, port=4420 00:30:04.165 qpair failed and we were unable to recover it. 00:30:04.165 [2024-07-14 04:02:23.065979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.165 [2024-07-14 04:02:23.066131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.165 [2024-07-14 04:02:23.066159] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9544000b90 with addr=10.0.0.2, port=4420 00:30:04.165 qpair failed and we were unable to recover it. 
00:30:04.165 [2024-07-14 04:02:23.066345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.165 [2024-07-14 04:02:23.066554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.165 [2024-07-14 04:02:23.066580] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9544000b90 with addr=10.0.0.2, port=4420 00:30:04.165 qpair failed and we were unable to recover it. 00:30:04.165 [2024-07-14 04:02:23.066840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.165 [2024-07-14 04:02:23.067015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.165 [2024-07-14 04:02:23.067041] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9544000b90 with addr=10.0.0.2, port=4420 00:30:04.165 qpair failed and we were unable to recover it. 00:30:04.165 [2024-07-14 04:02:23.067305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.165 [2024-07-14 04:02:23.067524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.165 [2024-07-14 04:02:23.067555] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9544000b90 with addr=10.0.0.2, port=4420 00:30:04.165 qpair failed and we were unable to recover it. 00:30:04.165 [2024-07-14 04:02:23.067742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.165 [2024-07-14 04:02:23.067931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.165 [2024-07-14 04:02:23.067958] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9544000b90 with addr=10.0.0.2, port=4420 00:30:04.165 qpair failed and we were unable to recover it. 00:30:04.165 [2024-07-14 04:02:23.068142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.165 [2024-07-14 04:02:23.068350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.165 [2024-07-14 04:02:23.068376] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9544000b90 with addr=10.0.0.2, port=4420 00:30:04.165 qpair failed and we were unable to recover it. 00:30:04.165 [2024-07-14 04:02:23.068561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.165 [2024-07-14 04:02:23.068718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.165 [2024-07-14 04:02:23.068745] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9544000b90 with addr=10.0.0.2, port=4420 00:30:04.165 qpair failed and we were unable to recover it. 00:30:04.165 [2024-07-14 04:02:23.068901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.452 [2024-07-14 04:02:23.069115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.452 [2024-07-14 04:02:23.069142] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9544000b90 with addr=10.0.0.2, port=4420 00:30:04.452 qpair failed and we were unable to recover it. 
00:30:04.452 [2024-07-14 04:02:23.069364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.452 [2024-07-14 04:02:23.069558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.452 [2024-07-14 04:02:23.069584] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9544000b90 with addr=10.0.0.2, port=4420 00:30:04.452 qpair failed and we were unable to recover it. 00:30:04.452 [2024-07-14 04:02:23.069748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.452 [2024-07-14 04:02:23.069970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.452 [2024-07-14 04:02:23.069997] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9544000b90 with addr=10.0.0.2, port=4420 00:30:04.452 qpair failed and we were unable to recover it. 00:30:04.452 [2024-07-14 04:02:23.070201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.452 [2024-07-14 04:02:23.070407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.452 [2024-07-14 04:02:23.070432] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9544000b90 with addr=10.0.0.2, port=4420 00:30:04.452 qpair failed and we were unable to recover it. 00:30:04.452 [2024-07-14 04:02:23.070618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.452 [2024-07-14 04:02:23.070807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.452 [2024-07-14 04:02:23.070834] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9544000b90 with addr=10.0.0.2, port=4420 00:30:04.452 qpair failed and we were unable to recover it. 00:30:04.452 [2024-07-14 04:02:23.071030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.452 [2024-07-14 04:02:23.071183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.452 [2024-07-14 04:02:23.071208] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9544000b90 with addr=10.0.0.2, port=4420 00:30:04.452 qpair failed and we were unable to recover it. 00:30:04.452 [2024-07-14 04:02:23.071364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.452 [2024-07-14 04:02:23.071517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.452 [2024-07-14 04:02:23.071549] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9544000b90 with addr=10.0.0.2, port=4420 00:30:04.452 qpair failed and we were unable to recover it. 00:30:04.452 [2024-07-14 04:02:23.071720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.452 [2024-07-14 04:02:23.071913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.452 [2024-07-14 04:02:23.071939] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9544000b90 with addr=10.0.0.2, port=4420 00:30:04.452 qpair failed and we were unable to recover it. 
00:30:04.452 [2024-07-14 04:02:23.072117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.452 [2024-07-14 04:02:23.072301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.452 [2024-07-14 04:02:23.072328] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9544000b90 with addr=10.0.0.2, port=4420 00:30:04.452 qpair failed and we were unable to recover it. 00:30:04.452 [2024-07-14 04:02:23.075879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.452 [2024-07-14 04:02:23.076168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.452 [2024-07-14 04:02:23.076204] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9544000b90 with addr=10.0.0.2, port=4420 00:30:04.452 qpair failed and we were unable to recover it. 00:30:04.452 [2024-07-14 04:02:23.076436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.452 [2024-07-14 04:02:23.076650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.452 [2024-07-14 04:02:23.076700] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9544000b90 with addr=10.0.0.2, port=4420 00:30:04.452 qpair failed and we were unable to recover it. 00:30:04.452 [2024-07-14 04:02:23.076923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.452 [2024-07-14 04:02:23.077129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.452 [2024-07-14 04:02:23.077161] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9544000b90 with addr=10.0.0.2, port=4420 00:30:04.452 qpair failed and we were unable to recover it. 00:30:04.452 [2024-07-14 04:02:23.077385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.452 [2024-07-14 04:02:23.077625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.452 [2024-07-14 04:02:23.077657] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9544000b90 with addr=10.0.0.2, port=4420 00:30:04.452 qpair failed and we were unable to recover it. 00:30:04.452 [2024-07-14 04:02:23.077862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.452 [2024-07-14 04:02:23.078049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.452 [2024-07-14 04:02:23.078082] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9544000b90 with addr=10.0.0.2, port=4420 00:30:04.452 qpair failed and we were unable to recover it. 00:30:04.452 [2024-07-14 04:02:23.078308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.452 [2024-07-14 04:02:23.078472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.452 [2024-07-14 04:02:23.078505] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9544000b90 with addr=10.0.0.2, port=4420 00:30:04.452 qpair failed and we were unable to recover it. 
00:30:04.452 [2024-07-14 04:02:23.078699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.452 [2024-07-14 04:02:23.078871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.452 [2024-07-14 04:02:23.078907] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9544000b90 with addr=10.0.0.2, port=4420 00:30:04.452 qpair failed and we were unable to recover it. 00:30:04.452 [2024-07-14 04:02:23.079105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.452 [2024-07-14 04:02:23.079402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.452 [2024-07-14 04:02:23.079438] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9544000b90 with addr=10.0.0.2, port=4420 00:30:04.452 qpair failed and we were unable to recover it. 00:30:04.452 [2024-07-14 04:02:23.079651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.452 [2024-07-14 04:02:23.079895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.452 [2024-07-14 04:02:23.079927] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9544000b90 with addr=10.0.0.2, port=4420 00:30:04.452 qpair failed and we were unable to recover it. 00:30:04.452 [2024-07-14 04:02:23.080124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.452 [2024-07-14 04:02:23.080315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.452 [2024-07-14 04:02:23.080361] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9544000b90 with addr=10.0.0.2, port=4420 00:30:04.452 qpair failed and we were unable to recover it. 00:30:04.452 [2024-07-14 04:02:23.080582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.452 [2024-07-14 04:02:23.080750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.452 [2024-07-14 04:02:23.080785] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9544000b90 with addr=10.0.0.2, port=4420 00:30:04.452 qpair failed and we were unable to recover it. 00:30:04.452 [2024-07-14 04:02:23.081076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.452 [2024-07-14 04:02:23.081278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.452 [2024-07-14 04:02:23.081316] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9544000b90 with addr=10.0.0.2, port=4420 00:30:04.452 qpair failed and we were unable to recover it. 00:30:04.452 [2024-07-14 04:02:23.081576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.452 [2024-07-14 04:02:23.081800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.452 [2024-07-14 04:02:23.081833] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9544000b90 with addr=10.0.0.2, port=4420 00:30:04.452 qpair failed and we were unable to recover it. 
00:30:04.452 [2024-07-14 04:02:23.082043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.452 [2024-07-14 04:02:23.082289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.452 [2024-07-14 04:02:23.082323] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9544000b90 with addr=10.0.0.2, port=4420 00:30:04.452 qpair failed and we were unable to recover it. 00:30:04.452 [2024-07-14 04:02:23.082519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.452 [2024-07-14 04:02:23.082736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.452 [2024-07-14 04:02:23.082768] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9544000b90 with addr=10.0.0.2, port=4420 00:30:04.452 qpair failed and we were unable to recover it. 00:30:04.452 [2024-07-14 04:02:23.082942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.452 [2024-07-14 04:02:23.083189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.452 [2024-07-14 04:02:23.083223] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9544000b90 with addr=10.0.0.2, port=4420 00:30:04.452 qpair failed and we were unable to recover it. 00:30:04.452 [2024-07-14 04:02:23.083439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.452 [2024-07-14 04:02:23.083638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.452 [2024-07-14 04:02:23.083684] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9544000b90 with addr=10.0.0.2, port=4420 00:30:04.452 qpair failed and we were unable to recover it. 00:30:04.452 [2024-07-14 04:02:23.083859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.452 [2024-07-14 04:02:23.084065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.452 [2024-07-14 04:02:23.084104] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9544000b90 with addr=10.0.0.2, port=4420 00:30:04.452 qpair failed and we were unable to recover it. 00:30:04.452 [2024-07-14 04:02:23.084330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.452 [2024-07-14 04:02:23.084515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.452 [2024-07-14 04:02:23.084547] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9544000b90 with addr=10.0.0.2, port=4420 00:30:04.452 qpair failed and we were unable to recover it. 00:30:04.452 [2024-07-14 04:02:23.084714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.452 [2024-07-14 04:02:23.084960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.452 [2024-07-14 04:02:23.084995] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9544000b90 with addr=10.0.0.2, port=4420 00:30:04.452 qpair failed and we were unable to recover it. 
00:30:04.452 [2024-07-14 04:02:23.085188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.452 [2024-07-14 04:02:23.085395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.452 [2024-07-14 04:02:23.085423] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9544000b90 with addr=10.0.0.2, port=4420 00:30:04.452 qpair failed and we were unable to recover it. 00:30:04.452 [2024-07-14 04:02:23.085613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.452 [2024-07-14 04:02:23.085859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.452 [2024-07-14 04:02:23.085903] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9544000b90 with addr=10.0.0.2, port=4420 00:30:04.452 qpair failed and we were unable to recover it. 00:30:04.452 [2024-07-14 04:02:23.086094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.452 [2024-07-14 04:02:23.086290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.452 [2024-07-14 04:02:23.086336] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9544000b90 with addr=10.0.0.2, port=4420 00:30:04.452 qpair failed and we were unable to recover it. 00:30:04.452 [2024-07-14 04:02:23.086623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.452 [2024-07-14 04:02:23.086852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.452 [2024-07-14 04:02:23.086896] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9544000b90 with addr=10.0.0.2, port=4420 00:30:04.452 qpair failed and we were unable to recover it. 00:30:04.452 [2024-07-14 04:02:23.087101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.452 [2024-07-14 04:02:23.087334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.452 [2024-07-14 04:02:23.087368] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9544000b90 with addr=10.0.0.2, port=4420 00:30:04.452 qpair failed and we were unable to recover it. 00:30:04.452 [2024-07-14 04:02:23.087563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.452 [2024-07-14 04:02:23.087784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.452 [2024-07-14 04:02:23.087821] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9544000b90 with addr=10.0.0.2, port=4420 00:30:04.452 qpair failed and we were unable to recover it. 00:30:04.452 [2024-07-14 04:02:23.088010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.452 [2024-07-14 04:02:23.088212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.452 [2024-07-14 04:02:23.088249] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9544000b90 with addr=10.0.0.2, port=4420 00:30:04.452 qpair failed and we were unable to recover it. 
00:30:04.452 [2024-07-14 04:02:23.088483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.452 [2024-07-14 04:02:23.088718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.452 [2024-07-14 04:02:23.088756] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9544000b90 with addr=10.0.0.2, port=4420 00:30:04.452 qpair failed and we were unable to recover it. 00:30:04.452 [2024-07-14 04:02:23.088999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.452 [2024-07-14 04:02:23.089174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.452 [2024-07-14 04:02:23.089208] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9544000b90 with addr=10.0.0.2, port=4420 00:30:04.452 qpair failed and we were unable to recover it. 00:30:04.452 [2024-07-14 04:02:23.089402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.452 [2024-07-14 04:02:23.089622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.452 [2024-07-14 04:02:23.089652] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9544000b90 with addr=10.0.0.2, port=4420 00:30:04.452 qpair failed and we were unable to recover it. 00:30:04.452 [2024-07-14 04:02:23.089878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.452 [2024-07-14 04:02:23.090095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.452 [2024-07-14 04:02:23.090128] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9544000b90 with addr=10.0.0.2, port=4420 00:30:04.452 qpair failed and we were unable to recover it. 00:30:04.452 [2024-07-14 04:02:23.090327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.452 [2024-07-14 04:02:23.090525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.452 [2024-07-14 04:02:23.090560] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9544000b90 with addr=10.0.0.2, port=4420 00:30:04.452 qpair failed and we were unable to recover it. 00:30:04.452 [2024-07-14 04:02:23.090733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.452 [2024-07-14 04:02:23.090934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.452 [2024-07-14 04:02:23.090968] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9544000b90 with addr=10.0.0.2, port=4420 00:30:04.452 qpair failed and we were unable to recover it. 00:30:04.452 [2024-07-14 04:02:23.091170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.452 [2024-07-14 04:02:23.091390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.452 [2024-07-14 04:02:23.091423] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9544000b90 with addr=10.0.0.2, port=4420 00:30:04.452 qpair failed and we were unable to recover it. 
00:30:04.452 [2024-07-14 04:02:23.091595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.452 [2024-07-14 04:02:23.091806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.452 [2024-07-14 04:02:23.091839] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9544000b90 with addr=10.0.0.2, port=4420 00:30:04.452 qpair failed and we were unable to recover it. 00:30:04.452 [2024-07-14 04:02:23.092048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.452 [2024-07-14 04:02:23.092266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.452 [2024-07-14 04:02:23.092299] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9544000b90 with addr=10.0.0.2, port=4420 00:30:04.452 qpair failed and we were unable to recover it. 00:30:04.452 [2024-07-14 04:02:23.092476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.452 [2024-07-14 04:02:23.092670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.452 [2024-07-14 04:02:23.092702] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9544000b90 with addr=10.0.0.2, port=4420 00:30:04.452 qpair failed and we were unable to recover it. 00:30:04.452 [2024-07-14 04:02:23.092900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.452 [2024-07-14 04:02:23.093099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.452 [2024-07-14 04:02:23.093133] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9544000b90 with addr=10.0.0.2, port=4420 00:30:04.452 qpair failed and we were unable to recover it. 00:30:04.452 [2024-07-14 04:02:23.093313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.452 [2024-07-14 04:02:23.093543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.452 [2024-07-14 04:02:23.093576] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9544000b90 with addr=10.0.0.2, port=4420 00:30:04.452 qpair failed and we were unable to recover it. 00:30:04.452 [2024-07-14 04:02:23.093773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.452 [2024-07-14 04:02:23.093970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.452 [2024-07-14 04:02:23.094003] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9544000b90 with addr=10.0.0.2, port=4420 00:30:04.452 qpair failed and we were unable to recover it. 00:30:04.452 [2024-07-14 04:02:23.094181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.452 [2024-07-14 04:02:23.094365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.452 [2024-07-14 04:02:23.094395] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9544000b90 with addr=10.0.0.2, port=4420 00:30:04.452 qpair failed and we were unable to recover it. 
00:30:04.452 [2024-07-14 04:02:23.094585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.452 [2024-07-14 04:02:23.094809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.452 [2024-07-14 04:02:23.094840] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9544000b90 with addr=10.0.0.2, port=4420 00:30:04.452 qpair failed and we were unable to recover it. 00:30:04.452 [2024-07-14 04:02:23.095066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.452 [2024-07-14 04:02:23.095285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.452 [2024-07-14 04:02:23.095318] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9544000b90 with addr=10.0.0.2, port=4420 00:30:04.452 qpair failed and we were unable to recover it. 00:30:04.452 [2024-07-14 04:02:23.095489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.452 [2024-07-14 04:02:23.095683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.452 [2024-07-14 04:02:23.095717] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9544000b90 with addr=10.0.0.2, port=4420 00:30:04.452 qpair failed and we were unable to recover it. 00:30:04.452 [2024-07-14 04:02:23.095942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.452 [2024-07-14 04:02:23.096115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.452 [2024-07-14 04:02:23.096148] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9544000b90 with addr=10.0.0.2, port=4420 00:30:04.453 qpair failed and we were unable to recover it. 00:30:04.453 [2024-07-14 04:02:23.096316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.453 [2024-07-14 04:02:23.096521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.453 [2024-07-14 04:02:23.096557] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9544000b90 with addr=10.0.0.2, port=4420 00:30:04.453 qpair failed and we were unable to recover it. 00:30:04.453 [2024-07-14 04:02:23.096759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.453 [2024-07-14 04:02:23.096954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.453 [2024-07-14 04:02:23.096987] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9544000b90 with addr=10.0.0.2, port=4420 00:30:04.453 qpair failed and we were unable to recover it. 00:30:04.453 [2024-07-14 04:02:23.097186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.453 [2024-07-14 04:02:23.097378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.453 [2024-07-14 04:02:23.097410] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9544000b90 with addr=10.0.0.2, port=4420 00:30:04.453 qpair failed and we were unable to recover it. 
00:30:04.453 [2024-07-14 04:02:23.097612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.453 [2024-07-14 04:02:23.097785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.453 [2024-07-14 04:02:23.097817] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9544000b90 with addr=10.0.0.2, port=4420 00:30:04.453 qpair failed and we were unable to recover it. 00:30:04.453 [2024-07-14 04:02:23.098024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.453 [2024-07-14 04:02:23.098217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.453 [2024-07-14 04:02:23.098263] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9544000b90 with addr=10.0.0.2, port=4420 00:30:04.453 qpair failed and we were unable to recover it. 00:30:04.453 [2024-07-14 04:02:23.098465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.453 [2024-07-14 04:02:23.098658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.453 [2024-07-14 04:02:23.098691] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9544000b90 with addr=10.0.0.2, port=4420 00:30:04.453 qpair failed and we were unable to recover it. 00:30:04.453 [2024-07-14 04:02:23.098893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.453 [2024-07-14 04:02:23.099098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.453 [2024-07-14 04:02:23.099132] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9544000b90 with addr=10.0.0.2, port=4420 00:30:04.453 qpair failed and we were unable to recover it. 00:30:04.453 [2024-07-14 04:02:23.099332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.453 [2024-07-14 04:02:23.099525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.453 [2024-07-14 04:02:23.099557] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9544000b90 with addr=10.0.0.2, port=4420 00:30:04.453 qpair failed and we were unable to recover it. 00:30:04.453 [2024-07-14 04:02:23.099763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.453 [2024-07-14 04:02:23.099928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.453 [2024-07-14 04:02:23.099959] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9544000b90 with addr=10.0.0.2, port=4420 00:30:04.453 qpair failed and we were unable to recover it. 00:30:04.453 [2024-07-14 04:02:23.100178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.453 [2024-07-14 04:02:23.100341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.453 [2024-07-14 04:02:23.100377] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9544000b90 with addr=10.0.0.2, port=4420 00:30:04.453 qpair failed and we were unable to recover it. 
00:30:04.453 [2024-07-14 04:02:23.100621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.453 [2024-07-14 04:02:23.100792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.453 [2024-07-14 04:02:23.100825] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9544000b90 with addr=10.0.0.2, port=4420 00:30:04.453 qpair failed and we were unable to recover it. 00:30:04.453 [2024-07-14 04:02:23.101087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.453 [2024-07-14 04:02:23.101279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.453 [2024-07-14 04:02:23.101312] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9544000b90 with addr=10.0.0.2, port=4420 00:30:04.453 qpair failed and we were unable to recover it. 00:30:04.453 [2024-07-14 04:02:23.101510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.453 [2024-07-14 04:02:23.101713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.453 [2024-07-14 04:02:23.101746] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9544000b90 with addr=10.0.0.2, port=4420 00:30:04.453 qpair failed and we were unable to recover it. 00:30:04.453 [2024-07-14 04:02:23.101941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.453 [2024-07-14 04:02:23.102157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.453 [2024-07-14 04:02:23.102189] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9544000b90 with addr=10.0.0.2, port=4420 00:30:04.453 qpair failed and we were unable to recover it. 00:30:04.453 [2024-07-14 04:02:23.102377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.453 [2024-07-14 04:02:23.102575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.453 [2024-07-14 04:02:23.102607] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9544000b90 with addr=10.0.0.2, port=4420 00:30:04.453 qpair failed and we were unable to recover it. 00:30:04.453 [2024-07-14 04:02:23.102809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.453 [2024-07-14 04:02:23.103023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.453 [2024-07-14 04:02:23.103059] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9544000b90 with addr=10.0.0.2, port=4420 00:30:04.453 qpair failed and we were unable to recover it. 00:30:04.453 [2024-07-14 04:02:23.103265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.453 [2024-07-14 04:02:23.103488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.453 [2024-07-14 04:02:23.103523] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9544000b90 with addr=10.0.0.2, port=4420 00:30:04.453 qpair failed and we were unable to recover it. 
00:30:04.453 [2024-07-14 04:02:23.103762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.453 [2024-07-14 04:02:23.103961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.453 [2024-07-14 04:02:23.103996] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9544000b90 with addr=10.0.0.2, port=4420 00:30:04.453 qpair failed and we were unable to recover it. 00:30:04.453 [2024-07-14 04:02:23.104172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.453 [2024-07-14 04:02:23.104368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.453 [2024-07-14 04:02:23.104403] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9544000b90 with addr=10.0.0.2, port=4420 00:30:04.453 qpair failed and we were unable to recover it. 00:30:04.453 [2024-07-14 04:02:23.104601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.453 [2024-07-14 04:02:23.104774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.453 [2024-07-14 04:02:23.104806] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9544000b90 with addr=10.0.0.2, port=4420 00:30:04.453 qpair failed and we were unable to recover it. 00:30:04.453 [2024-07-14 04:02:23.104999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.453 [2024-07-14 04:02:23.105206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.453 [2024-07-14 04:02:23.105238] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9544000b90 with addr=10.0.0.2, port=4420 00:30:04.453 qpair failed and we were unable to recover it. 00:30:04.453 [2024-07-14 04:02:23.105430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.453 [2024-07-14 04:02:23.105634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.453 [2024-07-14 04:02:23.105667] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9544000b90 with addr=10.0.0.2, port=4420 00:30:04.453 qpair failed and we were unable to recover it. 00:30:04.453 [2024-07-14 04:02:23.105915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.453 [2024-07-14 04:02:23.106083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.453 [2024-07-14 04:02:23.106119] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9544000b90 with addr=10.0.0.2, port=4420 00:30:04.453 qpair failed and we were unable to recover it. 00:30:04.453 [2024-07-14 04:02:23.106353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.453 [2024-07-14 04:02:23.106528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.453 [2024-07-14 04:02:23.106562] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9544000b90 with addr=10.0.0.2, port=4420 00:30:04.453 qpair failed and we were unable to recover it. 
00:30:04.453 [2024-07-14 04:02:23.106792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.453 [2024-07-14 04:02:23.106984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.453 [2024-07-14 04:02:23.107019] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9544000b90 with addr=10.0.0.2, port=4420 00:30:04.453 qpair failed and we were unable to recover it. 00:30:04.453 [2024-07-14 04:02:23.107250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.453 [2024-07-14 04:02:23.107453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.453 [2024-07-14 04:02:23.107486] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9544000b90 with addr=10.0.0.2, port=4420 00:30:04.453 qpair failed and we were unable to recover it. 00:30:04.453 [2024-07-14 04:02:23.107685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.453 [2024-07-14 04:02:23.107881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.453 [2024-07-14 04:02:23.107924] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9544000b90 with addr=10.0.0.2, port=4420 00:30:04.453 qpair failed and we were unable to recover it. 00:30:04.453 [2024-07-14 04:02:23.108118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.453 [2024-07-14 04:02:23.108348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.453 [2024-07-14 04:02:23.108383] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9544000b90 with addr=10.0.0.2, port=4420 00:30:04.453 qpair failed and we were unable to recover it. 00:30:04.453 [2024-07-14 04:02:23.108612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.453 [2024-07-14 04:02:23.108815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.453 [2024-07-14 04:02:23.108851] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9544000b90 with addr=10.0.0.2, port=4420 00:30:04.453 qpair failed and we were unable to recover it. 00:30:04.453 [2024-07-14 04:02:23.109105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.453 [2024-07-14 04:02:23.109295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.453 [2024-07-14 04:02:23.109329] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9544000b90 with addr=10.0.0.2, port=4420 00:30:04.453 qpair failed and we were unable to recover it. 00:30:04.453 [2024-07-14 04:02:23.109528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.453 [2024-07-14 04:02:23.109727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.453 [2024-07-14 04:02:23.109764] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9544000b90 with addr=10.0.0.2, port=4420 00:30:04.453 qpair failed and we were unable to recover it. 
00:30:04.453 [2024-07-14 04:02:23.109974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.453 [2024-07-14 04:02:23.110166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.453 [2024-07-14 04:02:23.110198] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9544000b90 with addr=10.0.0.2, port=4420 00:30:04.453 qpair failed and we were unable to recover it. 00:30:04.453 [2024-07-14 04:02:23.110420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.453 [2024-07-14 04:02:23.110584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.453 [2024-07-14 04:02:23.110615] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9544000b90 with addr=10.0.0.2, port=4420 00:30:04.453 qpair failed and we were unable to recover it. 00:30:04.453 [2024-07-14 04:02:23.110847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.453 [2024-07-14 04:02:23.111034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.453 [2024-07-14 04:02:23.111070] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9544000b90 with addr=10.0.0.2, port=4420 00:30:04.453 qpair failed and we were unable to recover it. 00:30:04.453 [2024-07-14 04:02:23.111274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.453 [2024-07-14 04:02:23.111471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.453 [2024-07-14 04:02:23.111505] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9544000b90 with addr=10.0.0.2, port=4420 00:30:04.453 qpair failed and we were unable to recover it. 00:30:04.453 [2024-07-14 04:02:23.111733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.453 [2024-07-14 04:02:23.111910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.453 [2024-07-14 04:02:23.111945] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9544000b90 with addr=10.0.0.2, port=4420 00:30:04.453 qpair failed and we were unable to recover it. 00:30:04.453 [2024-07-14 04:02:23.112164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.453 [2024-07-14 04:02:23.112362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.453 [2024-07-14 04:02:23.112397] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9544000b90 with addr=10.0.0.2, port=4420 00:30:04.453 qpair failed and we were unable to recover it. 00:30:04.453 [2024-07-14 04:02:23.112597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.453 [2024-07-14 04:02:23.112793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.453 [2024-07-14 04:02:23.112828] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9544000b90 with addr=10.0.0.2, port=4420 00:30:04.453 qpair failed and we were unable to recover it. 
00:30:04.453 [2024-07-14 04:02:23.113086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.453 [2024-07-14 04:02:23.113276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.453 [2024-07-14 04:02:23.113306] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.453 qpair failed and we were unable to recover it. 00:30:04.453 [2024-07-14 04:02:23.113518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.453 [2024-07-14 04:02:23.113698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.453 [2024-07-14 04:02:23.113724] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.453 qpair failed and we were unable to recover it. 00:30:04.453 [2024-07-14 04:02:23.113917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.453 [2024-07-14 04:02:23.114093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.453 [2024-07-14 04:02:23.114118] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.453 qpair failed and we were unable to recover it. 00:30:04.453 [2024-07-14 04:02:23.114291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.453 [2024-07-14 04:02:23.114463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.453 [2024-07-14 04:02:23.114487] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.453 qpair failed and we were unable to recover it. 00:30:04.453 [2024-07-14 04:02:23.114701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.453 [2024-07-14 04:02:23.114882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.453 [2024-07-14 04:02:23.114916] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.453 qpair failed and we were unable to recover it. 00:30:04.453 [2024-07-14 04:02:23.115100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.453 [2024-07-14 04:02:23.115268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.453 [2024-07-14 04:02:23.115292] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.453 qpair failed and we were unable to recover it. 00:30:04.453 [2024-07-14 04:02:23.115465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.453 [2024-07-14 04:02:23.115614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.453 [2024-07-14 04:02:23.115638] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.453 qpair failed and we were unable to recover it. 
00:30:04.453 [2024-07-14 04:02:23.115838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.453 [2024-07-14 04:02:23.116003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.453 [2024-07-14 04:02:23.116028] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.453 qpair failed and we were unable to recover it. 00:30:04.453 [2024-07-14 04:02:23.116241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.453 [2024-07-14 04:02:23.116444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.453 [2024-07-14 04:02:23.116468] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.453 qpair failed and we were unable to recover it. 00:30:04.453 [2024-07-14 04:02:23.116646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.453 [2024-07-14 04:02:23.116817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.453 [2024-07-14 04:02:23.116842] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.453 qpair failed and we were unable to recover it. 00:30:04.453 [2024-07-14 04:02:23.117012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.453 [2024-07-14 04:02:23.117191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.453 [2024-07-14 04:02:23.117215] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.453 qpair failed and we were unable to recover it. 00:30:04.453 [2024-07-14 04:02:23.117396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.453 [2024-07-14 04:02:23.117568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.453 [2024-07-14 04:02:23.117592] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.453 qpair failed and we were unable to recover it. 00:30:04.453 [2024-07-14 04:02:23.117773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.453 [2024-07-14 04:02:23.117943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.453 [2024-07-14 04:02:23.117969] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.453 qpair failed and we were unable to recover it. 00:30:04.453 [2024-07-14 04:02:23.118123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.453 [2024-07-14 04:02:23.118331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.453 [2024-07-14 04:02:23.118356] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.453 qpair failed and we were unable to recover it. 
00:30:04.453 [2024-07-14 04:02:23.118536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.453 [2024-07-14 04:02:23.118709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.453 [2024-07-14 04:02:23.118733] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.453 qpair failed and we were unable to recover it. 00:30:04.453 [2024-07-14 04:02:23.118944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.453 [2024-07-14 04:02:23.119126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.453 [2024-07-14 04:02:23.119151] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.453 qpair failed and we were unable to recover it. 00:30:04.453 [2024-07-14 04:02:23.119364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.453 [2024-07-14 04:02:23.119568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.453 [2024-07-14 04:02:23.119592] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.453 qpair failed and we were unable to recover it. 00:30:04.453 [2024-07-14 04:02:23.119747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.454 [2024-07-14 04:02:23.119905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.454 [2024-07-14 04:02:23.119930] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.454 qpair failed and we were unable to recover it. 00:30:04.454 [2024-07-14 04:02:23.120133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.454 [2024-07-14 04:02:23.120289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.454 [2024-07-14 04:02:23.120313] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.454 qpair failed and we were unable to recover it. 00:30:04.454 [2024-07-14 04:02:23.120516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.454 [2024-07-14 04:02:23.120661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.454 [2024-07-14 04:02:23.120686] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.454 qpair failed and we were unable to recover it. 00:30:04.454 [2024-07-14 04:02:23.120863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.454 [2024-07-14 04:02:23.121024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.454 [2024-07-14 04:02:23.121048] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.454 qpair failed and we were unable to recover it. 
00:30:04.454 [2024-07-14 04:02:23.121229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.454 [2024-07-14 04:02:23.121372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.454 [2024-07-14 04:02:23.121396] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.454 qpair failed and we were unable to recover it. 00:30:04.454 [2024-07-14 04:02:23.121576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.454 [2024-07-14 04:02:23.121778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.454 [2024-07-14 04:02:23.121803] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.454 qpair failed and we were unable to recover it. 00:30:04.454 [2024-07-14 04:02:23.122000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.454 [2024-07-14 04:02:23.122204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.454 [2024-07-14 04:02:23.122229] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.454 qpair failed and we were unable to recover it. 00:30:04.454 [2024-07-14 04:02:23.122385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.454 [2024-07-14 04:02:23.122538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.454 [2024-07-14 04:02:23.122562] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.454 qpair failed and we were unable to recover it. 00:30:04.454 [2024-07-14 04:02:23.122740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.454 [2024-07-14 04:02:23.122895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.454 [2024-07-14 04:02:23.122934] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.454 qpair failed and we were unable to recover it. 00:30:04.454 [2024-07-14 04:02:23.123098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.454 [2024-07-14 04:02:23.123249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.454 [2024-07-14 04:02:23.123274] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.454 qpair failed and we were unable to recover it. 00:30:04.454 [2024-07-14 04:02:23.123448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.454 [2024-07-14 04:02:23.123652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.454 [2024-07-14 04:02:23.123676] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.454 qpair failed and we were unable to recover it. 
00:30:04.454 [2024-07-14 04:02:23.123878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.454 [2024-07-14 04:02:23.124034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.454 [2024-07-14 04:02:23.124058] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.454 qpair failed and we were unable to recover it. 00:30:04.454 [2024-07-14 04:02:23.124229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.454 [2024-07-14 04:02:23.124404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.454 [2024-07-14 04:02:23.124428] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.454 qpair failed and we were unable to recover it. 00:30:04.454 [2024-07-14 04:02:23.124605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.454 [2024-07-14 04:02:23.124781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.454 [2024-07-14 04:02:23.124806] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.454 qpair failed and we were unable to recover it. 00:30:04.454 [2024-07-14 04:02:23.124983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.454 [2024-07-14 04:02:23.125183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.454 [2024-07-14 04:02:23.125207] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.454 qpair failed and we were unable to recover it. 00:30:04.454 [2024-07-14 04:02:23.125380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.454 [2024-07-14 04:02:23.125553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.454 [2024-07-14 04:02:23.125578] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.454 qpair failed and we were unable to recover it. 00:30:04.454 [2024-07-14 04:02:23.125754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.454 [2024-07-14 04:02:23.125909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.454 [2024-07-14 04:02:23.125934] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.454 qpair failed and we were unable to recover it. 00:30:04.454 [2024-07-14 04:02:23.126140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.454 [2024-07-14 04:02:23.126356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.454 [2024-07-14 04:02:23.126380] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.454 qpair failed and we were unable to recover it. 
00:30:04.454 [2024-07-14 04:02:23.126560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.454 [2024-07-14 04:02:23.126759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.454 [2024-07-14 04:02:23.126783] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.454 qpair failed and we were unable to recover it. 00:30:04.454 [2024-07-14 04:02:23.126990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.454 [2024-07-14 04:02:23.127137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.454 [2024-07-14 04:02:23.127161] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.454 qpair failed and we were unable to recover it. 00:30:04.454 [2024-07-14 04:02:23.127333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.454 [2024-07-14 04:02:23.127504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.454 [2024-07-14 04:02:23.127529] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.454 qpair failed and we were unable to recover it. 00:30:04.454 [2024-07-14 04:02:23.127698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.454 [2024-07-14 04:02:23.127857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.454 [2024-07-14 04:02:23.127891] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.454 qpair failed and we were unable to recover it. 00:30:04.454 [2024-07-14 04:02:23.128074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.454 [2024-07-14 04:02:23.128277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.454 [2024-07-14 04:02:23.128302] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.454 qpair failed and we were unable to recover it. 00:30:04.454 [2024-07-14 04:02:23.128476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.454 [2024-07-14 04:02:23.128655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.454 [2024-07-14 04:02:23.128679] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.454 qpair failed and we were unable to recover it. 00:30:04.454 [2024-07-14 04:02:23.128851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.454 [2024-07-14 04:02:23.129073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.454 [2024-07-14 04:02:23.129097] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.454 qpair failed and we were unable to recover it. 
00:30:04.454 [2024-07-14 04:02:23.129276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.454 [2024-07-14 04:02:23.129475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.454 [2024-07-14 04:02:23.129499] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.454 qpair failed and we were unable to recover it. 00:30:04.454 [2024-07-14 04:02:23.129675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.454 [2024-07-14 04:02:23.129855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.454 [2024-07-14 04:02:23.129884] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.454 qpair failed and we were unable to recover it. 00:30:04.454 [2024-07-14 04:02:23.130074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.454 [2024-07-14 04:02:23.130250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.454 [2024-07-14 04:02:23.130274] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.454 qpair failed and we were unable to recover it. 00:30:04.454 [2024-07-14 04:02:23.130478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.454 [2024-07-14 04:02:23.130675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.454 [2024-07-14 04:02:23.130699] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.454 qpair failed and we were unable to recover it. 00:30:04.454 [2024-07-14 04:02:23.130881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.454 [2024-07-14 04:02:23.131060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.454 [2024-07-14 04:02:23.131084] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.454 qpair failed and we were unable to recover it. 00:30:04.454 [2024-07-14 04:02:23.131286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.454 [2024-07-14 04:02:23.131459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.454 [2024-07-14 04:02:23.131483] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.454 qpair failed and we were unable to recover it. 00:30:04.454 [2024-07-14 04:02:23.131658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.454 [2024-07-14 04:02:23.131835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.454 [2024-07-14 04:02:23.131860] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.454 qpair failed and we were unable to recover it. 
00:30:04.454 [2024-07-14 04:02:23.132050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.454 [2024-07-14 04:02:23.132226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.454 [2024-07-14 04:02:23.132251] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.454 qpair failed and we were unable to recover it. 00:30:04.454 [2024-07-14 04:02:23.132401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.454 [2024-07-14 04:02:23.132555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.454 [2024-07-14 04:02:23.132579] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.454 qpair failed and we were unable to recover it. 00:30:04.454 [2024-07-14 04:02:23.132784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.454 [2024-07-14 04:02:23.132971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.454 [2024-07-14 04:02:23.132996] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.454 qpair failed and we were unable to recover it. 00:30:04.454 [2024-07-14 04:02:23.133150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.454 [2024-07-14 04:02:23.133322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.454 [2024-07-14 04:02:23.133346] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.454 qpair failed and we were unable to recover it. 00:30:04.454 [2024-07-14 04:02:23.133493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.454 [2024-07-14 04:02:23.133671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.454 [2024-07-14 04:02:23.133695] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.454 qpair failed and we were unable to recover it. 00:30:04.454 [2024-07-14 04:02:23.133854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.454 [2024-07-14 04:02:23.134087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.454 [2024-07-14 04:02:23.134112] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.454 qpair failed and we were unable to recover it. 00:30:04.454 [2024-07-14 04:02:23.134261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.454 [2024-07-14 04:02:23.134464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.454 [2024-07-14 04:02:23.134488] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.454 qpair failed and we were unable to recover it. 
00:30:04.454 [2024-07-14 04:02:23.134655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.454 [2024-07-14 04:02:23.134807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.454 [2024-07-14 04:02:23.134832] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.454 qpair failed and we were unable to recover it. 00:30:04.454 [2024-07-14 04:02:23.135027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.454 [2024-07-14 04:02:23.135211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.454 [2024-07-14 04:02:23.135235] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.454 qpair failed and we were unable to recover it. 00:30:04.454 [2024-07-14 04:02:23.135442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.454 [2024-07-14 04:02:23.135625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.454 [2024-07-14 04:02:23.135649] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.454 qpair failed and we were unable to recover it. 00:30:04.454 [2024-07-14 04:02:23.135829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.454 [2024-07-14 04:02:23.135983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.454 [2024-07-14 04:02:23.136013] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.454 qpair failed and we were unable to recover it. 00:30:04.454 [2024-07-14 04:02:23.136190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.454 [2024-07-14 04:02:23.136362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.454 [2024-07-14 04:02:23.136387] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.454 qpair failed and we were unable to recover it. 00:30:04.454 [2024-07-14 04:02:23.136591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.454 [2024-07-14 04:02:23.136765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.454 [2024-07-14 04:02:23.136789] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.454 qpair failed and we were unable to recover it. 00:30:04.454 [2024-07-14 04:02:23.136945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.454 [2024-07-14 04:02:23.137127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.454 [2024-07-14 04:02:23.137151] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.454 qpair failed and we were unable to recover it. 
00:30:04.454 [2024-07-14 04:02:23.137337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.454 [2024-07-14 04:02:23.137511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.454 [2024-07-14 04:02:23.137535] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.454 qpair failed and we were unable to recover it. 00:30:04.454 [2024-07-14 04:02:23.137679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.454 [2024-07-14 04:02:23.137828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.454 [2024-07-14 04:02:23.137853] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.454 qpair failed and we were unable to recover it. 00:30:04.454 [2024-07-14 04:02:23.138013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.454 [2024-07-14 04:02:23.138197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.454 [2024-07-14 04:02:23.138221] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.454 qpair failed and we were unable to recover it. 00:30:04.454 [2024-07-14 04:02:23.138396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.454 [2024-07-14 04:02:23.138600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.454 [2024-07-14 04:02:23.138629] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.454 qpair failed and we were unable to recover it. 00:30:04.454 [2024-07-14 04:02:23.138804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.454 [2024-07-14 04:02:23.139007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.454 [2024-07-14 04:02:23.139032] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.454 qpair failed and we were unable to recover it. 00:30:04.454 [2024-07-14 04:02:23.139205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.454 [2024-07-14 04:02:23.139406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.454 [2024-07-14 04:02:23.139430] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.454 qpair failed and we were unable to recover it. 00:30:04.455 [2024-07-14 04:02:23.139602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.455 [2024-07-14 04:02:23.139774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.455 [2024-07-14 04:02:23.139798] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.455 qpair failed and we were unable to recover it. 
00:30:04.455 [2024-07-14 04:02:23.139981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.455 [2024-07-14 04:02:23.140169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.455 [2024-07-14 04:02:23.140193] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.455 qpair failed and we were unable to recover it. 00:30:04.455 [2024-07-14 04:02:23.140342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.455 [2024-07-14 04:02:23.140523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.455 [2024-07-14 04:02:23.140547] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.455 qpair failed and we were unable to recover it. 00:30:04.455 [2024-07-14 04:02:23.140749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.455 [2024-07-14 04:02:23.140918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.455 [2024-07-14 04:02:23.140943] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.455 qpair failed and we were unable to recover it. 00:30:04.455 [2024-07-14 04:02:23.141105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.455 [2024-07-14 04:02:23.141285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.455 [2024-07-14 04:02:23.141310] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.455 qpair failed and we were unable to recover it. 00:30:04.455 [2024-07-14 04:02:23.141484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.455 [2024-07-14 04:02:23.141695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.455 [2024-07-14 04:02:23.141719] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.455 qpair failed and we were unable to recover it. 00:30:04.455 [2024-07-14 04:02:23.141900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.455 [2024-07-14 04:02:23.142073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.455 [2024-07-14 04:02:23.142098] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.455 qpair failed and we were unable to recover it. 00:30:04.455 [2024-07-14 04:02:23.142278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.455 [2024-07-14 04:02:23.142458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.455 [2024-07-14 04:02:23.142486] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.455 qpair failed and we were unable to recover it. 
00:30:04.455 [2024-07-14 04:02:23.142669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.455 [2024-07-14 04:02:23.142852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.455 [2024-07-14 04:02:23.142882] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.455 qpair failed and we were unable to recover it. 00:30:04.455 [2024-07-14 04:02:23.143083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.455 [2024-07-14 04:02:23.143286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.455 [2024-07-14 04:02:23.143311] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.455 qpair failed and we were unable to recover it. 00:30:04.455 [2024-07-14 04:02:23.143511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.455 [2024-07-14 04:02:23.143660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.455 [2024-07-14 04:02:23.143684] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.455 qpair failed and we were unable to recover it. 00:30:04.455 [2024-07-14 04:02:23.143857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.455 [2024-07-14 04:02:23.144042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.455 [2024-07-14 04:02:23.144067] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.455 qpair failed and we were unable to recover it. 00:30:04.455 [2024-07-14 04:02:23.144249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.455 [2024-07-14 04:02:23.144451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.455 [2024-07-14 04:02:23.144475] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.455 qpair failed and we were unable to recover it. 00:30:04.455 [2024-07-14 04:02:23.144631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.455 [2024-07-14 04:02:23.144831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.455 [2024-07-14 04:02:23.144855] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.455 qpair failed and we were unable to recover it. 00:30:04.455 [2024-07-14 04:02:23.145065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.455 [2024-07-14 04:02:23.145257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.455 [2024-07-14 04:02:23.145281] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.455 qpair failed and we were unable to recover it. 
00:30:04.455 [2024-07-14 04:02:23.145429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.455 [2024-07-14 04:02:23.145625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.455 [2024-07-14 04:02:23.145649] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.455 qpair failed and we were unable to recover it. 00:30:04.455 [2024-07-14 04:02:23.145798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.455 [2024-07-14 04:02:23.145978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.455 [2024-07-14 04:02:23.146003] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.455 qpair failed and we were unable to recover it. 00:30:04.455 [2024-07-14 04:02:23.146187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.455 [2024-07-14 04:02:23.146389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.455 [2024-07-14 04:02:23.146414] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.455 qpair failed and we were unable to recover it. 00:30:04.455 [2024-07-14 04:02:23.146612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.455 [2024-07-14 04:02:23.146767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.455 [2024-07-14 04:02:23.146792] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.455 qpair failed and we were unable to recover it. 00:30:04.455 [2024-07-14 04:02:23.146969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.455 [2024-07-14 04:02:23.147145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.455 [2024-07-14 04:02:23.147169] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.455 qpair failed and we were unable to recover it. 00:30:04.455 [2024-07-14 04:02:23.147344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.455 [2024-07-14 04:02:23.147514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.455 [2024-07-14 04:02:23.147538] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.455 qpair failed and we were unable to recover it. 00:30:04.455 [2024-07-14 04:02:23.147687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.455 [2024-07-14 04:02:23.147859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.455 [2024-07-14 04:02:23.147889] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.455 qpair failed and we were unable to recover it. 
00:30:04.455 [2024-07-14 04:02:23.148080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.455 [2024-07-14 04:02:23.148283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.455 [2024-07-14 04:02:23.148307] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.455 qpair failed and we were unable to recover it. 00:30:04.455 [2024-07-14 04:02:23.148457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.455 [2024-07-14 04:02:23.148637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.455 [2024-07-14 04:02:23.148661] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.455 qpair failed and we were unable to recover it. 00:30:04.455 [2024-07-14 04:02:23.148837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.455 [2024-07-14 04:02:23.149017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.455 [2024-07-14 04:02:23.149041] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.455 qpair failed and we were unable to recover it. 00:30:04.455 [2024-07-14 04:02:23.149221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.455 [2024-07-14 04:02:23.149427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.455 [2024-07-14 04:02:23.149451] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.455 qpair failed and we were unable to recover it. 00:30:04.455 [2024-07-14 04:02:23.149631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.455 [2024-07-14 04:02:23.149809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.455 [2024-07-14 04:02:23.149833] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.455 qpair failed and we were unable to recover it. 00:30:04.455 [2024-07-14 04:02:23.150042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.455 [2024-07-14 04:02:23.150217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.455 [2024-07-14 04:02:23.150241] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.455 qpair failed and we were unable to recover it. 00:30:04.455 [2024-07-14 04:02:23.150396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.455 [2024-07-14 04:02:23.150604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.455 [2024-07-14 04:02:23.150628] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.455 qpair failed and we were unable to recover it. 
00:30:04.455 [2024-07-14 04:02:23.150783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.455 [2024-07-14 04:02:23.150940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.455 [2024-07-14 04:02:23.150965] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.455 qpair failed and we were unable to recover it. 00:30:04.455 [2024-07-14 04:02:23.151148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.455 [2024-07-14 04:02:23.151302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.455 [2024-07-14 04:02:23.151326] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.455 qpair failed and we were unable to recover it. 00:30:04.455 [2024-07-14 04:02:23.151503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.455 [2024-07-14 04:02:23.151650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.455 [2024-07-14 04:02:23.151675] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.455 qpair failed and we were unable to recover it. 00:30:04.455 [2024-07-14 04:02:23.151849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.455 [2024-07-14 04:02:23.152060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.455 [2024-07-14 04:02:23.152085] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.455 qpair failed and we were unable to recover it. 00:30:04.455 [2024-07-14 04:02:23.152264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.455 [2024-07-14 04:02:23.152440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.455 [2024-07-14 04:02:23.152464] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.455 qpair failed and we were unable to recover it. 00:30:04.455 [2024-07-14 04:02:23.152663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.455 [2024-07-14 04:02:23.152842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.455 [2024-07-14 04:02:23.152873] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.455 qpair failed and we were unable to recover it. 00:30:04.455 [2024-07-14 04:02:23.153067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.455 [2024-07-14 04:02:23.153218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.455 [2024-07-14 04:02:23.153242] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.455 qpair failed and we were unable to recover it. 
00:30:04.455 [2024-07-14 04:02:23.153420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.455 [2024-07-14 04:02:23.153566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.455 [2024-07-14 04:02:23.153591] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.455 qpair failed and we were unable to recover it. 00:30:04.455 [2024-07-14 04:02:23.153770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.455 [2024-07-14 04:02:23.153926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.455 [2024-07-14 04:02:23.153951] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.455 qpair failed and we were unable to recover it. 00:30:04.455 [2024-07-14 04:02:23.154125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.455 [2024-07-14 04:02:23.154349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.455 [2024-07-14 04:02:23.154373] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.455 qpair failed and we were unable to recover it. 00:30:04.455 [2024-07-14 04:02:23.154543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.455 [2024-07-14 04:02:23.154720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.455 [2024-07-14 04:02:23.154745] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.455 qpair failed and we were unable to recover it. 00:30:04.455 [2024-07-14 04:02:23.154953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.455 [2024-07-14 04:02:23.155108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.455 [2024-07-14 04:02:23.155132] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.455 qpair failed and we were unable to recover it. 00:30:04.455 [2024-07-14 04:02:23.155307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.455 [2024-07-14 04:02:23.155478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.455 [2024-07-14 04:02:23.155502] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.455 qpair failed and we were unable to recover it. 00:30:04.455 [2024-07-14 04:02:23.155687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.455 [2024-07-14 04:02:23.155861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.455 [2024-07-14 04:02:23.155891] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.455 qpair failed and we were unable to recover it. 
00:30:04.455 [2024-07-14 04:02:23.156069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.455 [2024-07-14 04:02:23.156244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.455 [2024-07-14 04:02:23.156268] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.455 qpair failed and we were unable to recover it. 00:30:04.455 [2024-07-14 04:02:23.156446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.455 [2024-07-14 04:02:23.156627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.455 [2024-07-14 04:02:23.156651] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.455 qpair failed and we were unable to recover it. 00:30:04.455 [2024-07-14 04:02:23.156857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.455 [2024-07-14 04:02:23.157023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.455 [2024-07-14 04:02:23.157048] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.455 qpair failed and we were unable to recover it. 00:30:04.455 [2024-07-14 04:02:23.157247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.455 [2024-07-14 04:02:23.157427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.455 [2024-07-14 04:02:23.157453] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.455 qpair failed and we were unable to recover it. 00:30:04.455 [2024-07-14 04:02:23.157633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.455 [2024-07-14 04:02:23.157807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.455 [2024-07-14 04:02:23.157831] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.455 qpair failed and we were unable to recover it. 00:30:04.455 [2024-07-14 04:02:23.158005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.455 [2024-07-14 04:02:23.158153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.455 [2024-07-14 04:02:23.158182] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.455 qpair failed and we were unable to recover it. 00:30:04.455 [2024-07-14 04:02:23.158388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.455 [2024-07-14 04:02:23.158564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.455 [2024-07-14 04:02:23.158588] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.455 qpair failed and we were unable to recover it. 
00:30:04.455 [2024-07-14 04:02:23.158774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.455 [2024-07-14 04:02:23.158958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.455 [2024-07-14 04:02:23.158983] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.455 qpair failed and we were unable to recover it. 00:30:04.455 [2024-07-14 04:02:23.159163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.455 [2024-07-14 04:02:23.159306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.455 [2024-07-14 04:02:23.159330] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.455 qpair failed and we were unable to recover it. 00:30:04.455 [2024-07-14 04:02:23.159517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.455 [2024-07-14 04:02:23.159692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.455 [2024-07-14 04:02:23.159717] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.455 qpair failed and we were unable to recover it. 00:30:04.455 [2024-07-14 04:02:23.159862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.455 [2024-07-14 04:02:23.160057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.455 [2024-07-14 04:02:23.160082] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.455 qpair failed and we were unable to recover it. 00:30:04.455 [2024-07-14 04:02:23.160233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.455 [2024-07-14 04:02:23.160406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.455 [2024-07-14 04:02:23.160430] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.455 qpair failed and we were unable to recover it. 00:30:04.455 [2024-07-14 04:02:23.160601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.455 [2024-07-14 04:02:23.160803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.455 [2024-07-14 04:02:23.160828] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.455 qpair failed and we were unable to recover it. 00:30:04.455 [2024-07-14 04:02:23.161038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.455 [2024-07-14 04:02:23.161213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.456 [2024-07-14 04:02:23.161238] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.456 qpair failed and we were unable to recover it. 
00:30:04.456 [2024-07-14 04:02:23.161419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.456 [2024-07-14 04:02:23.161586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.456 [2024-07-14 04:02:23.161610] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.456 qpair failed and we were unable to recover it. 00:30:04.456 [2024-07-14 04:02:23.161778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.456 [2024-07-14 04:02:23.161958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.456 [2024-07-14 04:02:23.161983] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.456 qpair failed and we were unable to recover it. 00:30:04.456 [2024-07-14 04:02:23.162185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.456 [2024-07-14 04:02:23.162360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.456 [2024-07-14 04:02:23.162384] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.456 qpair failed and we were unable to recover it. 00:30:04.456 [2024-07-14 04:02:23.162536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.456 [2024-07-14 04:02:23.162712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.456 [2024-07-14 04:02:23.162736] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.456 qpair failed and we were unable to recover it. 00:30:04.456 [2024-07-14 04:02:23.162912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.456 [2024-07-14 04:02:23.163115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.456 [2024-07-14 04:02:23.163139] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.456 qpair failed and we were unable to recover it. 00:30:04.456 [2024-07-14 04:02:23.163314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.456 [2024-07-14 04:02:23.163493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.456 [2024-07-14 04:02:23.163517] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.456 qpair failed and we were unable to recover it. 00:30:04.456 [2024-07-14 04:02:23.163695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.456 [2024-07-14 04:02:23.163899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.456 [2024-07-14 04:02:23.163924] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.456 qpair failed and we were unable to recover it. 
00:30:04.456 [2024-07-14 04:02:23.164100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.456 [2024-07-14 04:02:23.164256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.456 [2024-07-14 04:02:23.164285] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.456 qpair failed and we were unable to recover it. 00:30:04.456 [2024-07-14 04:02:23.164440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.456 [2024-07-14 04:02:23.164622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.456 [2024-07-14 04:02:23.164647] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.456 qpair failed and we were unable to recover it. 00:30:04.456 [2024-07-14 04:02:23.164851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.456 [2024-07-14 04:02:23.165032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.456 [2024-07-14 04:02:23.165057] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.456 qpair failed and we were unable to recover it. 00:30:04.456 [2024-07-14 04:02:23.165229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.456 [2024-07-14 04:02:23.165403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.456 [2024-07-14 04:02:23.165427] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.456 qpair failed and we were unable to recover it. 00:30:04.456 [2024-07-14 04:02:23.165610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.456 [2024-07-14 04:02:23.165786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.456 [2024-07-14 04:02:23.165810] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.456 qpair failed and we were unable to recover it. 00:30:04.456 [2024-07-14 04:02:23.165971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.456 [2024-07-14 04:02:23.166119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.456 [2024-07-14 04:02:23.166144] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.456 qpair failed and we were unable to recover it. 00:30:04.456 [2024-07-14 04:02:23.166343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.456 [2024-07-14 04:02:23.166519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.456 [2024-07-14 04:02:23.166543] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.456 qpair failed and we were unable to recover it. 
00:30:04.456 [2024-07-14 04:02:23.166744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.456 [2024-07-14 04:02:23.166922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.456 [2024-07-14 04:02:23.166947] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.456 qpair failed and we were unable to recover it. 00:30:04.456 [2024-07-14 04:02:23.167126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.456 [2024-07-14 04:02:23.167297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.456 [2024-07-14 04:02:23.167321] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.456 qpair failed and we were unable to recover it. 00:30:04.456 [2024-07-14 04:02:23.167521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.456 [2024-07-14 04:02:23.167666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.456 [2024-07-14 04:02:23.167690] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.456 qpair failed and we were unable to recover it. 00:30:04.456 [2024-07-14 04:02:23.167882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.456 [2024-07-14 04:02:23.168060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.456 [2024-07-14 04:02:23.168085] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.456 qpair failed and we were unable to recover it. 00:30:04.456 [2024-07-14 04:02:23.168243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.456 [2024-07-14 04:02:23.168417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.456 [2024-07-14 04:02:23.168441] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.456 qpair failed and we were unable to recover it. 00:30:04.456 [2024-07-14 04:02:23.168622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.456 [2024-07-14 04:02:23.168767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.456 [2024-07-14 04:02:23.168791] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.456 qpair failed and we were unable to recover it. 00:30:04.456 [2024-07-14 04:02:23.168977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.456 [2024-07-14 04:02:23.169190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.456 [2024-07-14 04:02:23.169215] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.456 qpair failed and we were unable to recover it. 
00:30:04.456 [2024-07-14 04:02:23.169366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.456 [2024-07-14 04:02:23.169520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.456 [2024-07-14 04:02:23.169544] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.456 qpair failed and we were unable to recover it. 00:30:04.456 [2024-07-14 04:02:23.169752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.456 [2024-07-14 04:02:23.169901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.456 [2024-07-14 04:02:23.169926] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.456 qpair failed and we were unable to recover it. 00:30:04.456 [2024-07-14 04:02:23.170106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.456 [2024-07-14 04:02:23.170280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.456 [2024-07-14 04:02:23.170304] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.456 qpair failed and we were unable to recover it. 00:30:04.456 [2024-07-14 04:02:23.170482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.456 [2024-07-14 04:02:23.170685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.456 [2024-07-14 04:02:23.170709] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.456 qpair failed and we were unable to recover it. 00:30:04.456 [2024-07-14 04:02:23.170891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.456 [2024-07-14 04:02:23.171095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.456 [2024-07-14 04:02:23.171119] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.456 qpair failed and we were unable to recover it. 00:30:04.456 [2024-07-14 04:02:23.171264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.456 [2024-07-14 04:02:23.171448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.456 [2024-07-14 04:02:23.171473] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.456 qpair failed and we were unable to recover it. 00:30:04.456 [2024-07-14 04:02:23.171676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.456 [2024-07-14 04:02:23.171851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.456 [2024-07-14 04:02:23.171881] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.456 qpair failed and we were unable to recover it. 
00:30:04.456 [2024-07-14 04:02:23.172088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.456 [2024-07-14 04:02:23.172228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.456 [2024-07-14 04:02:23.172252] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.456 qpair failed and we were unable to recover it. 00:30:04.456 [2024-07-14 04:02:23.172431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.456 [2024-07-14 04:02:23.172633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.456 [2024-07-14 04:02:23.172657] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.456 qpair failed and we were unable to recover it. 00:30:04.456 [2024-07-14 04:02:23.172862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.456 [2024-07-14 04:02:23.173021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.456 [2024-07-14 04:02:23.173045] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.456 qpair failed and we were unable to recover it. 00:30:04.456 [2024-07-14 04:02:23.173203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.456 [2024-07-14 04:02:23.173383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.456 [2024-07-14 04:02:23.173409] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.456 qpair failed and we were unable to recover it. 00:30:04.456 [2024-07-14 04:02:23.173616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.456 [2024-07-14 04:02:23.173820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.456 [2024-07-14 04:02:23.173845] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.456 qpair failed and we were unable to recover it. 00:30:04.456 [2024-07-14 04:02:23.174005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.456 [2024-07-14 04:02:23.174186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.456 [2024-07-14 04:02:23.174210] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.456 qpair failed and we were unable to recover it. 00:30:04.456 [2024-07-14 04:02:23.174389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.456 [2024-07-14 04:02:23.174560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.456 [2024-07-14 04:02:23.174584] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.456 qpair failed and we were unable to recover it. 
00:30:04.456 [2024-07-14 04:02:23.174758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.456 [2024-07-14 04:02:23.174937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.456 [2024-07-14 04:02:23.174962] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.456 qpair failed and we were unable to recover it. 00:30:04.456 [2024-07-14 04:02:23.175132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.456 [2024-07-14 04:02:23.175322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.456 [2024-07-14 04:02:23.175346] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.456 qpair failed and we were unable to recover it. 00:30:04.456 [2024-07-14 04:02:23.175525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.456 [2024-07-14 04:02:23.175732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.456 [2024-07-14 04:02:23.175756] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.456 qpair failed and we were unable to recover it. 00:30:04.456 [2024-07-14 04:02:23.175961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.456 [2024-07-14 04:02:23.176113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.456 [2024-07-14 04:02:23.176138] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.456 qpair failed and we were unable to recover it. 00:30:04.456 [2024-07-14 04:02:23.176323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.456 [2024-07-14 04:02:23.176524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.456 [2024-07-14 04:02:23.176548] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.456 qpair failed and we were unable to recover it. 00:30:04.456 [2024-07-14 04:02:23.176728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.456 [2024-07-14 04:02:23.176884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.456 [2024-07-14 04:02:23.176909] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.456 qpair failed and we were unable to recover it. 00:30:04.456 [2024-07-14 04:02:23.177086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.456 [2024-07-14 04:02:23.177267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.456 [2024-07-14 04:02:23.177293] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.456 qpair failed and we were unable to recover it. 
00:30:04.456 [2024-07-14 04:02:23.177468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.456 [2024-07-14 04:02:23.177672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.456 [2024-07-14 04:02:23.177701] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.456 qpair failed and we were unable to recover it. 00:30:04.456 [2024-07-14 04:02:23.177881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.456 [2024-07-14 04:02:23.178088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.456 [2024-07-14 04:02:23.178112] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.456 qpair failed and we were unable to recover it. 00:30:04.456 [2024-07-14 04:02:23.178296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.456 [2024-07-14 04:02:23.178476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.456 [2024-07-14 04:02:23.178499] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.456 qpair failed and we were unable to recover it. 00:30:04.456 [2024-07-14 04:02:23.178680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.456 [2024-07-14 04:02:23.178855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.456 [2024-07-14 04:02:23.178887] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.456 qpair failed and we were unable to recover it. 00:30:04.456 [2024-07-14 04:02:23.179035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.456 [2024-07-14 04:02:23.179180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.456 [2024-07-14 04:02:23.179204] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.456 qpair failed and we were unable to recover it. 00:30:04.456 [2024-07-14 04:02:23.179355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.456 [2024-07-14 04:02:23.179533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.456 [2024-07-14 04:02:23.179557] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.456 qpair failed and we were unable to recover it. 00:30:04.456 [2024-07-14 04:02:23.179707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.456 [2024-07-14 04:02:23.179862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.456 [2024-07-14 04:02:23.179905] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.456 qpair failed and we were unable to recover it. 
00:30:04.456 [2024-07-14 04:02:23.180114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.456 [2024-07-14 04:02:23.180293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.456 [2024-07-14 04:02:23.180318] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.456 qpair failed and we were unable to recover it. 00:30:04.456 [2024-07-14 04:02:23.180504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.456 [2024-07-14 04:02:23.180682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.456 [2024-07-14 04:02:23.180706] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.456 qpair failed and we were unable to recover it. 00:30:04.456 [2024-07-14 04:02:23.180888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.457 [2024-07-14 04:02:23.181070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.457 [2024-07-14 04:02:23.181094] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.457 qpair failed and we were unable to recover it. 00:30:04.457 [2024-07-14 04:02:23.181279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.457 [2024-07-14 04:02:23.181483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.457 [2024-07-14 04:02:23.181507] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.457 qpair failed and we were unable to recover it. 00:30:04.457 [2024-07-14 04:02:23.181772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.457 [2024-07-14 04:02:23.181927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.457 [2024-07-14 04:02:23.181952] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.457 qpair failed and we were unable to recover it. 00:30:04.457 [2024-07-14 04:02:23.182134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.457 [2024-07-14 04:02:23.182313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.457 [2024-07-14 04:02:23.182337] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.457 qpair failed and we were unable to recover it. 00:30:04.457 [2024-07-14 04:02:23.182541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.457 [2024-07-14 04:02:23.182695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.457 [2024-07-14 04:02:23.182720] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.457 qpair failed and we were unable to recover it. 
00:30:04.457 [2024-07-14 04:02:23.182900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.457 [2024-07-14 04:02:23.183083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.457 [2024-07-14 04:02:23.183107] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.457 qpair failed and we were unable to recover it. 00:30:04.457 [2024-07-14 04:02:23.183316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.457 [2024-07-14 04:02:23.183515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.457 [2024-07-14 04:02:23.183540] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.457 qpair failed and we were unable to recover it. 00:30:04.457 [2024-07-14 04:02:23.183744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.457 [2024-07-14 04:02:23.183918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.457 [2024-07-14 04:02:23.183944] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.457 qpair failed and we were unable to recover it. 00:30:04.457 [2024-07-14 04:02:23.184126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.457 [2024-07-14 04:02:23.184295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.457 [2024-07-14 04:02:23.184319] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.457 qpair failed and we were unable to recover it. 00:30:04.457 [2024-07-14 04:02:23.184510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.457 [2024-07-14 04:02:23.184718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.457 [2024-07-14 04:02:23.184742] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.457 qpair failed and we were unable to recover it. 00:30:04.457 [2024-07-14 04:02:23.184921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.457 [2024-07-14 04:02:23.185093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.457 [2024-07-14 04:02:23.185118] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.457 qpair failed and we were unable to recover it. 00:30:04.457 [2024-07-14 04:02:23.185294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.457 [2024-07-14 04:02:23.185476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.457 [2024-07-14 04:02:23.185500] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.457 qpair failed and we were unable to recover it. 
00:30:04.457 [2024-07-14 04:02:23.185707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.457 [2024-07-14 04:02:23.185884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.457 [2024-07-14 04:02:23.185909] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.457 qpair failed and we were unable to recover it. 00:30:04.457 [2024-07-14 04:02:23.186065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.457 [2024-07-14 04:02:23.186241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.457 [2024-07-14 04:02:23.186265] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.457 qpair failed and we were unable to recover it. 00:30:04.457 [2024-07-14 04:02:23.186450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.457 [2024-07-14 04:02:23.186600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.457 [2024-07-14 04:02:23.186624] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.457 qpair failed and we were unable to recover it. 00:30:04.457 [2024-07-14 04:02:23.186780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.457 [2024-07-14 04:02:23.186957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.457 [2024-07-14 04:02:23.186984] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.457 qpair failed and we were unable to recover it. 00:30:04.457 [2024-07-14 04:02:23.187133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.457 [2024-07-14 04:02:23.187337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.457 [2024-07-14 04:02:23.187362] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.457 qpair failed and we were unable to recover it. 00:30:04.457 [2024-07-14 04:02:23.187568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.457 [2024-07-14 04:02:23.187746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.457 [2024-07-14 04:02:23.187770] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.457 qpair failed and we were unable to recover it. 00:30:04.457 [2024-07-14 04:02:23.187919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.457 [2024-07-14 04:02:23.188124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.457 [2024-07-14 04:02:23.188148] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.457 qpair failed and we were unable to recover it. 
00:30:04.457 [2024-07-14 04:02:23.188329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.457 [2024-07-14 04:02:23.188503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.457 [2024-07-14 04:02:23.188527] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.457 qpair failed and we were unable to recover it. 00:30:04.457 [2024-07-14 04:02:23.188700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.457 [2024-07-14 04:02:23.188878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.457 [2024-07-14 04:02:23.188903] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.457 qpair failed and we were unable to recover it. 00:30:04.457 [2024-07-14 04:02:23.189059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.457 [2024-07-14 04:02:23.189261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.457 [2024-07-14 04:02:23.189286] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.457 qpair failed and we were unable to recover it. 00:30:04.457 [2024-07-14 04:02:23.189465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.457 [2024-07-14 04:02:23.189676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.457 [2024-07-14 04:02:23.189700] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.457 qpair failed and we were unable to recover it. 00:30:04.457 [2024-07-14 04:02:23.189902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.457 [2024-07-14 04:02:23.190088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.457 [2024-07-14 04:02:23.190112] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.457 qpair failed and we were unable to recover it. 00:30:04.457 [2024-07-14 04:02:23.190288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.457 [2024-07-14 04:02:23.190478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.457 [2024-07-14 04:02:23.190502] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.457 qpair failed and we were unable to recover it. 00:30:04.457 [2024-07-14 04:02:23.190656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.457 [2024-07-14 04:02:23.190859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.457 [2024-07-14 04:02:23.190889] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.457 qpair failed and we were unable to recover it. 
00:30:04.457 [2024-07-14 04:02:23.191045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.457 [2024-07-14 04:02:23.191198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.457 [2024-07-14 04:02:23.191222] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.457 qpair failed and we were unable to recover it. 00:30:04.457 [2024-07-14 04:02:23.191398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.457 [2024-07-14 04:02:23.191598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.457 [2024-07-14 04:02:23.191622] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.457 qpair failed and we were unable to recover it. 00:30:04.457 [2024-07-14 04:02:23.191804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.457 [2024-07-14 04:02:23.191985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.457 [2024-07-14 04:02:23.192010] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.457 qpair failed and we were unable to recover it. 00:30:04.457 [2024-07-14 04:02:23.192164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.457 [2024-07-14 04:02:23.192314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.457 [2024-07-14 04:02:23.192340] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.457 qpair failed and we were unable to recover it. 00:30:04.457 [2024-07-14 04:02:23.192532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.457 [2024-07-14 04:02:23.192682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.457 [2024-07-14 04:02:23.192707] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.457 qpair failed and we were unable to recover it. 00:30:04.457 [2024-07-14 04:02:23.192910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.457 [2024-07-14 04:02:23.193096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.457 [2024-07-14 04:02:23.193120] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.457 qpair failed and we were unable to recover it. 00:30:04.457 [2024-07-14 04:02:23.193276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.457 [2024-07-14 04:02:23.193450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.457 [2024-07-14 04:02:23.193478] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.457 qpair failed and we were unable to recover it. 
00:30:04.457 [2024-07-14 04:02:23.193652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.457 [2024-07-14 04:02:23.193831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.457 [2024-07-14 04:02:23.193855] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.457 qpair failed and we were unable to recover it. 00:30:04.457 [2024-07-14 04:02:23.194043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.457 [2024-07-14 04:02:23.194227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.457 [2024-07-14 04:02:23.194251] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.457 qpair failed and we were unable to recover it. 00:30:04.457 [2024-07-14 04:02:23.194425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.457 [2024-07-14 04:02:23.194602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.457 [2024-07-14 04:02:23.194627] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.457 qpair failed and we were unable to recover it. 00:30:04.457 [2024-07-14 04:02:23.194808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.457 [2024-07-14 04:02:23.194984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.457 [2024-07-14 04:02:23.195010] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.457 qpair failed and we were unable to recover it. 00:30:04.457 [2024-07-14 04:02:23.195188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.457 [2024-07-14 04:02:23.195400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.457 [2024-07-14 04:02:23.195424] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.457 qpair failed and we were unable to recover it. 00:30:04.457 [2024-07-14 04:02:23.195627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.457 [2024-07-14 04:02:23.195782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.457 [2024-07-14 04:02:23.195806] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.457 qpair failed and we were unable to recover it. 00:30:04.457 [2024-07-14 04:02:23.195966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.457 [2024-07-14 04:02:23.196171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.457 [2024-07-14 04:02:23.196195] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.457 qpair failed and we were unable to recover it. 
00:30:04.457 [2024-07-14 04:02:23.196384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.457 [2024-07-14 04:02:23.196532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.457 [2024-07-14 04:02:23.196556] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.457 qpair failed and we were unable to recover it. 00:30:04.457 [2024-07-14 04:02:23.196736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.457 [2024-07-14 04:02:23.196886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.457 [2024-07-14 04:02:23.196911] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.457 qpair failed and we were unable to recover it. 00:30:04.457 [2024-07-14 04:02:23.197104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.457 [2024-07-14 04:02:23.197274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.457 [2024-07-14 04:02:23.197302] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.457 qpair failed and we were unable to recover it. 00:30:04.457 [2024-07-14 04:02:23.197482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.457 [2024-07-14 04:02:23.197635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.457 [2024-07-14 04:02:23.197659] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.457 qpair failed and we were unable to recover it. 00:30:04.457 [2024-07-14 04:02:23.197851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.457 [2024-07-14 04:02:23.198058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.457 [2024-07-14 04:02:23.198083] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.457 qpair failed and we were unable to recover it. 00:30:04.457 [2024-07-14 04:02:23.198237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.457 [2024-07-14 04:02:23.198413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.457 [2024-07-14 04:02:23.198437] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.457 qpair failed and we were unable to recover it. 00:30:04.457 [2024-07-14 04:02:23.198617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.457 [2024-07-14 04:02:23.198812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.457 [2024-07-14 04:02:23.198836] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.457 qpair failed and we were unable to recover it. 
00:30:04.460 [2024-07-14 04:02:23.255303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.460 [2024-07-14 04:02:23.255481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.460 [2024-07-14 04:02:23.255505] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.460 qpair failed and we were unable to recover it. 00:30:04.460 [2024-07-14 04:02:23.255654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.460 [2024-07-14 04:02:23.255793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.460 [2024-07-14 04:02:23.255818] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.460 qpair failed and we were unable to recover it. 00:30:04.460 [2024-07-14 04:02:23.255984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.460 [2024-07-14 04:02:23.256158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.460 [2024-07-14 04:02:23.256182] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.460 qpair failed and we were unable to recover it. 00:30:04.460 [2024-07-14 04:02:23.256365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.460 [2024-07-14 04:02:23.256569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.460 [2024-07-14 04:02:23.256593] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.460 qpair failed and we were unable to recover it. 00:30:04.460 [2024-07-14 04:02:23.256770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.460 [2024-07-14 04:02:23.256971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.460 [2024-07-14 04:02:23.256996] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.460 qpair failed and we were unable to recover it. 00:30:04.460 [2024-07-14 04:02:23.257170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.460 [2024-07-14 04:02:23.257345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.460 [2024-07-14 04:02:23.257369] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.460 qpair failed and we were unable to recover it. 00:30:04.460 [2024-07-14 04:02:23.257526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.460 [2024-07-14 04:02:23.257709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.460 [2024-07-14 04:02:23.257733] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.460 qpair failed and we were unable to recover it. 
00:30:04.460 [2024-07-14 04:02:23.257937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.460 [2024-07-14 04:02:23.258096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.460 [2024-07-14 04:02:23.258120] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.460 qpair failed and we were unable to recover it. 00:30:04.460 [2024-07-14 04:02:23.258264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.460 [2024-07-14 04:02:23.258419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.460 [2024-07-14 04:02:23.258443] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.460 qpair failed and we were unable to recover it. 00:30:04.460 [2024-07-14 04:02:23.258622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.460 [2024-07-14 04:02:23.258826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.460 [2024-07-14 04:02:23.258850] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.460 qpair failed and we were unable to recover it. 00:30:04.460 [2024-07-14 04:02:23.259039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.460 [2024-07-14 04:02:23.259220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.460 [2024-07-14 04:02:23.259245] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.460 qpair failed and we were unable to recover it. 00:30:04.460 [2024-07-14 04:02:23.259427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.460 [2024-07-14 04:02:23.259604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.460 [2024-07-14 04:02:23.259628] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.460 qpair failed and we were unable to recover it. 00:30:04.460 [2024-07-14 04:02:23.259806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.460 [2024-07-14 04:02:23.260018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.460 [2024-07-14 04:02:23.260043] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.460 qpair failed and we were unable to recover it. 00:30:04.460 [2024-07-14 04:02:23.260218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.460 [2024-07-14 04:02:23.260421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.460 [2024-07-14 04:02:23.260445] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.460 qpair failed and we were unable to recover it. 
00:30:04.460 [2024-07-14 04:02:23.260601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.460 [2024-07-14 04:02:23.260773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.460 [2024-07-14 04:02:23.260797] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.460 qpair failed and we were unable to recover it. 00:30:04.460 [2024-07-14 04:02:23.260951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.460 [2024-07-14 04:02:23.261160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.460 [2024-07-14 04:02:23.261185] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.460 qpair failed and we were unable to recover it. 00:30:04.460 [2024-07-14 04:02:23.261365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.460 [2024-07-14 04:02:23.261541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.460 [2024-07-14 04:02:23.261565] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.460 qpair failed and we were unable to recover it. 00:30:04.460 [2024-07-14 04:02:23.261768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.460 [2024-07-14 04:02:23.261947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.460 [2024-07-14 04:02:23.261971] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.460 qpair failed and we were unable to recover it. 00:30:04.460 [2024-07-14 04:02:23.262130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.460 [2024-07-14 04:02:23.262335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.460 [2024-07-14 04:02:23.262359] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.460 qpair failed and we were unable to recover it. 00:30:04.460 [2024-07-14 04:02:23.262562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.460 [2024-07-14 04:02:23.262735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.460 [2024-07-14 04:02:23.262758] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.460 qpair failed and we were unable to recover it. 00:30:04.460 [2024-07-14 04:02:23.262907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.460 [2024-07-14 04:02:23.263080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.460 [2024-07-14 04:02:23.263104] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.460 qpair failed and we were unable to recover it. 
00:30:04.460 [2024-07-14 04:02:23.263287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.460 [2024-07-14 04:02:23.263493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.460 [2024-07-14 04:02:23.263517] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.460 qpair failed and we were unable to recover it. 00:30:04.460 [2024-07-14 04:02:23.263691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.460 [2024-07-14 04:02:23.263864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.460 [2024-07-14 04:02:23.263894] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.460 qpair failed and we were unable to recover it. 00:30:04.460 [2024-07-14 04:02:23.264076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.460 [2024-07-14 04:02:23.264250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.460 [2024-07-14 04:02:23.264274] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.460 qpair failed and we were unable to recover it. 00:30:04.460 [2024-07-14 04:02:23.264450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.460 [2024-07-14 04:02:23.264628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.460 [2024-07-14 04:02:23.264655] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.460 qpair failed and we were unable to recover it. 00:30:04.461 [2024-07-14 04:02:23.264856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.461 [2024-07-14 04:02:23.265043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.461 [2024-07-14 04:02:23.265067] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.461 qpair failed and we were unable to recover it. 00:30:04.461 [2024-07-14 04:02:23.265259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.461 [2024-07-14 04:02:23.265429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.461 [2024-07-14 04:02:23.265453] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.461 qpair failed and we were unable to recover it. 00:30:04.461 [2024-07-14 04:02:23.265650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.461 [2024-07-14 04:02:23.265807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.461 [2024-07-14 04:02:23.265831] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.461 qpair failed and we were unable to recover it. 
00:30:04.461 [2024-07-14 04:02:23.265990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.461 [2024-07-14 04:02:23.266163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.461 [2024-07-14 04:02:23.266188] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.461 qpair failed and we were unable to recover it. 00:30:04.461 [2024-07-14 04:02:23.266366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.461 [2024-07-14 04:02:23.266543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.461 [2024-07-14 04:02:23.266567] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.461 qpair failed and we were unable to recover it. 00:30:04.461 [2024-07-14 04:02:23.266719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.461 [2024-07-14 04:02:23.266906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.461 [2024-07-14 04:02:23.266931] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.461 qpair failed and we were unable to recover it. 00:30:04.461 [2024-07-14 04:02:23.267084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.461 [2024-07-14 04:02:23.267261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.461 [2024-07-14 04:02:23.267285] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.461 qpair failed and we were unable to recover it. 00:30:04.461 [2024-07-14 04:02:23.267440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.461 [2024-07-14 04:02:23.267619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.461 [2024-07-14 04:02:23.267647] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.461 qpair failed and we were unable to recover it. 00:30:04.461 [2024-07-14 04:02:23.267823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.461 [2024-07-14 04:02:23.268019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.461 [2024-07-14 04:02:23.268044] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.461 qpair failed and we were unable to recover it. 00:30:04.461 [2024-07-14 04:02:23.268197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.461 [2024-07-14 04:02:23.268372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.461 [2024-07-14 04:02:23.268396] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.461 qpair failed and we were unable to recover it. 
00:30:04.461 [2024-07-14 04:02:23.268581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.461 [2024-07-14 04:02:23.268762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.461 [2024-07-14 04:02:23.268787] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.461 qpair failed and we were unable to recover it. 00:30:04.461 [2024-07-14 04:02:23.268970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.461 [2024-07-14 04:02:23.269148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.461 [2024-07-14 04:02:23.269173] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.461 qpair failed and we were unable to recover it. 00:30:04.461 [2024-07-14 04:02:23.269376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.461 [2024-07-14 04:02:23.269556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.461 [2024-07-14 04:02:23.269580] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.461 qpair failed and we were unable to recover it. 00:30:04.461 [2024-07-14 04:02:23.269733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.461 [2024-07-14 04:02:23.269933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.461 [2024-07-14 04:02:23.269958] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.461 qpair failed and we were unable to recover it. 00:30:04.461 [2024-07-14 04:02:23.270137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.461 [2024-07-14 04:02:23.270334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.461 [2024-07-14 04:02:23.270358] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.461 qpair failed and we were unable to recover it. 00:30:04.461 [2024-07-14 04:02:23.270534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.461 [2024-07-14 04:02:23.270691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.461 [2024-07-14 04:02:23.270715] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.461 qpair failed and we were unable to recover it. 00:30:04.461 [2024-07-14 04:02:23.270895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.461 [2024-07-14 04:02:23.271106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.461 [2024-07-14 04:02:23.271130] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.461 qpair failed and we were unable to recover it. 
00:30:04.461 [2024-07-14 04:02:23.271335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.461 [2024-07-14 04:02:23.271537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.461 [2024-07-14 04:02:23.271561] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.461 qpair failed and we were unable to recover it. 00:30:04.461 [2024-07-14 04:02:23.271721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.461 [2024-07-14 04:02:23.271898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.461 [2024-07-14 04:02:23.271922] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.461 qpair failed and we were unable to recover it. 00:30:04.461 [2024-07-14 04:02:23.272078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.461 [2024-07-14 04:02:23.272254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.461 [2024-07-14 04:02:23.272278] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.461 qpair failed and we were unable to recover it. 00:30:04.461 [2024-07-14 04:02:23.272453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.461 [2024-07-14 04:02:23.272649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.461 [2024-07-14 04:02:23.272673] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.461 qpair failed and we were unable to recover it. 00:30:04.461 [2024-07-14 04:02:23.272828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.461 [2024-07-14 04:02:23.272983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.461 [2024-07-14 04:02:23.273008] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.461 qpair failed and we were unable to recover it. 00:30:04.461 [2024-07-14 04:02:23.273196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.461 [2024-07-14 04:02:23.273375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.461 [2024-07-14 04:02:23.273399] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.461 qpair failed and we were unable to recover it. 00:30:04.461 [2024-07-14 04:02:23.273592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.461 [2024-07-14 04:02:23.273800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.461 [2024-07-14 04:02:23.273825] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.461 qpair failed and we were unable to recover it. 
00:30:04.461 [2024-07-14 04:02:23.274009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.461 [2024-07-14 04:02:23.274193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.461 [2024-07-14 04:02:23.274217] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.461 qpair failed and we were unable to recover it. 00:30:04.461 [2024-07-14 04:02:23.274371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.461 [2024-07-14 04:02:23.274543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.461 [2024-07-14 04:02:23.274567] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.461 qpair failed and we were unable to recover it. 00:30:04.461 [2024-07-14 04:02:23.274748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.461 [2024-07-14 04:02:23.274903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.461 [2024-07-14 04:02:23.274929] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.461 qpair failed and we were unable to recover it. 00:30:04.461 [2024-07-14 04:02:23.275086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.461 [2024-07-14 04:02:23.275269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.461 [2024-07-14 04:02:23.275293] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.461 qpair failed and we were unable to recover it. 00:30:04.461 [2024-07-14 04:02:23.275480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.461 [2024-07-14 04:02:23.275635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.461 [2024-07-14 04:02:23.275659] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.461 qpair failed and we were unable to recover it. 00:30:04.461 [2024-07-14 04:02:23.275844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.461 [2024-07-14 04:02:23.276062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.461 [2024-07-14 04:02:23.276087] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.461 qpair failed and we were unable to recover it. 00:30:04.461 [2024-07-14 04:02:23.276288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.461 [2024-07-14 04:02:23.276489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.461 [2024-07-14 04:02:23.276513] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.461 qpair failed and we were unable to recover it. 
00:30:04.461 [2024-07-14 04:02:23.276720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.461 [2024-07-14 04:02:23.276877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.461 [2024-07-14 04:02:23.276904] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.461 qpair failed and we were unable to recover it. 00:30:04.461 [2024-07-14 04:02:23.277061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.461 [2024-07-14 04:02:23.277268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.461 [2024-07-14 04:02:23.277292] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.461 qpair failed and we were unable to recover it. 00:30:04.461 [2024-07-14 04:02:23.277469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.461 [2024-07-14 04:02:23.277680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.461 [2024-07-14 04:02:23.277705] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.461 qpair failed and we were unable to recover it. 00:30:04.461 [2024-07-14 04:02:23.277873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.461 [2024-07-14 04:02:23.278033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.461 [2024-07-14 04:02:23.278057] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.461 qpair failed and we were unable to recover it. 00:30:04.461 [2024-07-14 04:02:23.278234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.461 [2024-07-14 04:02:23.278407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.461 [2024-07-14 04:02:23.278431] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.461 qpair failed and we were unable to recover it. 00:30:04.461 [2024-07-14 04:02:23.278579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.461 [2024-07-14 04:02:23.278783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.461 [2024-07-14 04:02:23.278807] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.461 qpair failed and we were unable to recover it. 00:30:04.461 [2024-07-14 04:02:23.278967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.461 [2024-07-14 04:02:23.279172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.461 [2024-07-14 04:02:23.279197] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.461 qpair failed and we were unable to recover it. 
00:30:04.461 [2024-07-14 04:02:23.279383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.461 [2024-07-14 04:02:23.279586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.461 [2024-07-14 04:02:23.279610] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.461 qpair failed and we were unable to recover it. 00:30:04.461 [2024-07-14 04:02:23.279792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.461 [2024-07-14 04:02:23.279993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.461 [2024-07-14 04:02:23.280018] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.461 qpair failed and we were unable to recover it. 00:30:04.461 [2024-07-14 04:02:23.280203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.461 [2024-07-14 04:02:23.280384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.461 [2024-07-14 04:02:23.280410] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.461 qpair failed and we were unable to recover it. 00:30:04.461 [2024-07-14 04:02:23.280588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.461 [2024-07-14 04:02:23.280761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.461 [2024-07-14 04:02:23.280785] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.461 qpair failed and we were unable to recover it. 00:30:04.461 [2024-07-14 04:02:23.280995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.461 [2024-07-14 04:02:23.281209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.461 [2024-07-14 04:02:23.281234] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.461 qpair failed and we were unable to recover it. 00:30:04.461 [2024-07-14 04:02:23.281434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.461 [2024-07-14 04:02:23.281632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.461 [2024-07-14 04:02:23.281656] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.461 qpair failed and we were unable to recover it. 00:30:04.461 [2024-07-14 04:02:23.281810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.461 [2024-07-14 04:02:23.281967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.461 [2024-07-14 04:02:23.281992] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.461 qpair failed and we were unable to recover it. 
00:30:04.461 [2024-07-14 04:02:23.282169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.461 [2024-07-14 04:02:23.282346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.461 [2024-07-14 04:02:23.282370] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.461 qpair failed and we were unable to recover it. 00:30:04.461 [2024-07-14 04:02:23.282553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.461 [2024-07-14 04:02:23.282758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.461 [2024-07-14 04:02:23.282783] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.461 qpair failed and we were unable to recover it. 00:30:04.461 [2024-07-14 04:02:23.282967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.461 [2024-07-14 04:02:23.283113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.461 [2024-07-14 04:02:23.283137] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.461 qpair failed and we were unable to recover it. 00:30:04.461 [2024-07-14 04:02:23.283346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.461 [2024-07-14 04:02:23.283528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.461 [2024-07-14 04:02:23.283552] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.461 qpair failed and we were unable to recover it. 00:30:04.461 [2024-07-14 04:02:23.283758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.461 [2024-07-14 04:02:23.283935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.461 [2024-07-14 04:02:23.283960] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.461 qpair failed and we were unable to recover it. 00:30:04.461 [2024-07-14 04:02:23.284134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.461 [2024-07-14 04:02:23.284305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.461 [2024-07-14 04:02:23.284329] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.461 qpair failed and we were unable to recover it. 00:30:04.461 [2024-07-14 04:02:23.284536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.462 [2024-07-14 04:02:23.284712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.462 [2024-07-14 04:02:23.284737] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.462 qpair failed and we were unable to recover it. 
00:30:04.462 [2024-07-14 04:02:23.284942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.462 [2024-07-14 04:02:23.285089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.462 [2024-07-14 04:02:23.285114] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.462 qpair failed and we were unable to recover it. 00:30:04.462 [2024-07-14 04:02:23.285294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.462 [2024-07-14 04:02:23.285468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.462 [2024-07-14 04:02:23.285492] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.462 qpair failed and we were unable to recover it. 00:30:04.462 [2024-07-14 04:02:23.285674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.462 [2024-07-14 04:02:23.285851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.462 [2024-07-14 04:02:23.285882] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.462 qpair failed and we were unable to recover it. 00:30:04.462 [2024-07-14 04:02:23.286085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.462 [2024-07-14 04:02:23.286261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.462 [2024-07-14 04:02:23.286285] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.462 qpair failed and we were unable to recover it. 00:30:04.462 [2024-07-14 04:02:23.286462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.462 [2024-07-14 04:02:23.286659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.462 [2024-07-14 04:02:23.286683] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.462 qpair failed and we were unable to recover it. 00:30:04.462 [2024-07-14 04:02:23.286891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.462 [2024-07-14 04:02:23.287050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.462 [2024-07-14 04:02:23.287074] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.462 qpair failed and we were unable to recover it. 00:30:04.462 [2024-07-14 04:02:23.287248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.462 [2024-07-14 04:02:23.287427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.462 [2024-07-14 04:02:23.287455] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.462 qpair failed and we were unable to recover it. 
00:30:04.462 [2024-07-14 04:02:23.287616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.462 [2024-07-14 04:02:23.287787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.462 [2024-07-14 04:02:23.287811] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.462 qpair failed and we were unable to recover it. 00:30:04.462 [2024-07-14 04:02:23.288014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.462 [2024-07-14 04:02:23.288169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.462 [2024-07-14 04:02:23.288193] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.462 qpair failed and we were unable to recover it. 00:30:04.462 [2024-07-14 04:02:23.288374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.462 [2024-07-14 04:02:23.288549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.462 [2024-07-14 04:02:23.288573] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.462 qpair failed and we were unable to recover it. 00:30:04.462 [2024-07-14 04:02:23.288751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.462 [2024-07-14 04:02:23.288935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.462 [2024-07-14 04:02:23.288959] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.462 qpair failed and we were unable to recover it. 00:30:04.462 [2024-07-14 04:02:23.289147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.462 [2024-07-14 04:02:23.289325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.462 [2024-07-14 04:02:23.289349] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.462 qpair failed and we were unable to recover it. 00:30:04.462 [2024-07-14 04:02:23.289557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.462 [2024-07-14 04:02:23.289733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.462 [2024-07-14 04:02:23.289756] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.462 qpair failed and we were unable to recover it. 00:30:04.462 [2024-07-14 04:02:23.289943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.462 [2024-07-14 04:02:23.290091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.462 [2024-07-14 04:02:23.290116] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.462 qpair failed and we were unable to recover it. 
00:30:04.462 [2024-07-14 04:02:23.290265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.462 [2024-07-14 04:02:23.290442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.462 [2024-07-14 04:02:23.290466] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.462 qpair failed and we were unable to recover it. 00:30:04.462 [2024-07-14 04:02:23.290652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.462 [2024-07-14 04:02:23.290856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.462 [2024-07-14 04:02:23.290889] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.462 qpair failed and we were unable to recover it. 00:30:04.462 [2024-07-14 04:02:23.291064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.462 [2024-07-14 04:02:23.291239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.462 [2024-07-14 04:02:23.291263] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.462 qpair failed and we were unable to recover it. 00:30:04.462 [2024-07-14 04:02:23.291469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.462 [2024-07-14 04:02:23.291621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.462 [2024-07-14 04:02:23.291645] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.462 qpair failed and we were unable to recover it. 00:30:04.462 [2024-07-14 04:02:23.291823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.462 [2024-07-14 04:02:23.292000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.462 [2024-07-14 04:02:23.292025] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.462 qpair failed and we were unable to recover it. 00:30:04.462 [2024-07-14 04:02:23.292205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.462 [2024-07-14 04:02:23.292386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.462 [2024-07-14 04:02:23.292410] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.462 qpair failed and we were unable to recover it. 00:30:04.462 [2024-07-14 04:02:23.292591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.462 [2024-07-14 04:02:23.292761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.462 [2024-07-14 04:02:23.292785] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.462 qpair failed and we were unable to recover it. 
00:30:04.462 [2024-07-14 04:02:23.292989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.462 [2024-07-14 04:02:23.293132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.462 [2024-07-14 04:02:23.293156] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.462 qpair failed and we were unable to recover it. 00:30:04.462 [2024-07-14 04:02:23.293361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.462 [2024-07-14 04:02:23.293530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.462 [2024-07-14 04:02:23.293554] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.462 qpair failed and we were unable to recover it. 00:30:04.462 [2024-07-14 04:02:23.293736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.462 [2024-07-14 04:02:23.293933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.462 [2024-07-14 04:02:23.293958] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.462 qpair failed and we were unable to recover it. 00:30:04.462 [2024-07-14 04:02:23.294107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.462 [2024-07-14 04:02:23.294294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.462 [2024-07-14 04:02:23.294318] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.462 qpair failed and we were unable to recover it. 00:30:04.462 [2024-07-14 04:02:23.294499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.462 [2024-07-14 04:02:23.294672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.462 [2024-07-14 04:02:23.294696] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.462 qpair failed and we were unable to recover it. 00:30:04.462 [2024-07-14 04:02:23.294900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.462 [2024-07-14 04:02:23.295069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.462 [2024-07-14 04:02:23.295093] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.462 qpair failed and we were unable to recover it. 00:30:04.462 [2024-07-14 04:02:23.295285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.462 [2024-07-14 04:02:23.295436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.462 [2024-07-14 04:02:23.295460] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.462 qpair failed and we were unable to recover it. 
00:30:04.462 [2024-07-14 04:02:23.295625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.462 [2024-07-14 04:02:23.295804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.462 [2024-07-14 04:02:23.295828] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.462 qpair failed and we were unable to recover it. 00:30:04.462 [2024-07-14 04:02:23.295996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.462 [2024-07-14 04:02:23.296195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.462 [2024-07-14 04:02:23.296219] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.462 qpair failed and we were unable to recover it. 00:30:04.462 [2024-07-14 04:02:23.296375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.462 [2024-07-14 04:02:23.296549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.462 [2024-07-14 04:02:23.296574] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.462 qpair failed and we were unable to recover it. 00:30:04.462 [2024-07-14 04:02:23.296712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.462 [2024-07-14 04:02:23.296886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.462 [2024-07-14 04:02:23.296911] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.462 qpair failed and we were unable to recover it. 00:30:04.462 [2024-07-14 04:02:23.297086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.462 [2024-07-14 04:02:23.297283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.462 [2024-07-14 04:02:23.297307] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.462 qpair failed and we were unable to recover it. 00:30:04.462 [2024-07-14 04:02:23.297478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.462 [2024-07-14 04:02:23.297663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.462 [2024-07-14 04:02:23.297687] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.462 qpair failed and we were unable to recover it. 00:30:04.462 [2024-07-14 04:02:23.297861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.462 [2024-07-14 04:02:23.298051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.462 [2024-07-14 04:02:23.298075] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.462 qpair failed and we were unable to recover it. 
00:30:04.462 [2024-07-14 04:02:23.298252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.462 [2024-07-14 04:02:23.298451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.462 [2024-07-14 04:02:23.298476] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.462 qpair failed and we were unable to recover it. 00:30:04.462 [2024-07-14 04:02:23.298687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.462 [2024-07-14 04:02:23.298859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.462 [2024-07-14 04:02:23.298901] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.462 qpair failed and we were unable to recover it. 00:30:04.462 [2024-07-14 04:02:23.299081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.462 [2024-07-14 04:02:23.299264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.462 [2024-07-14 04:02:23.299288] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.462 qpair failed and we were unable to recover it. 00:30:04.462 [2024-07-14 04:02:23.299493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.462 [2024-07-14 04:02:23.299698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.462 [2024-07-14 04:02:23.299722] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.462 qpair failed and we were unable to recover it. 00:30:04.462 [2024-07-14 04:02:23.299883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.462 [2024-07-14 04:02:23.300063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.462 [2024-07-14 04:02:23.300087] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.462 qpair failed and we were unable to recover it. 00:30:04.462 [2024-07-14 04:02:23.300289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.462 [2024-07-14 04:02:23.300439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.462 [2024-07-14 04:02:23.300465] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.462 qpair failed and we were unable to recover it. 00:30:04.462 [2024-07-14 04:02:23.300670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.462 [2024-07-14 04:02:23.300881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.462 [2024-07-14 04:02:23.300907] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.462 qpair failed and we were unable to recover it. 
00:30:04.462 [2024-07-14 04:02:23.301084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.462 [2024-07-14 04:02:23.301287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.462 [2024-07-14 04:02:23.301312] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.462 qpair failed and we were unable to recover it. 00:30:04.462 [2024-07-14 04:02:23.301493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.462 [2024-07-14 04:02:23.301660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.462 [2024-07-14 04:02:23.301684] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.462 qpair failed and we were unable to recover it. 00:30:04.462 [2024-07-14 04:02:23.301858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.462 [2024-07-14 04:02:23.302069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.462 [2024-07-14 04:02:23.302094] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.462 qpair failed and we were unable to recover it. 00:30:04.462 [2024-07-14 04:02:23.302276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.462 [2024-07-14 04:02:23.302477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.462 [2024-07-14 04:02:23.302501] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.462 qpair failed and we were unable to recover it. 00:30:04.462 [2024-07-14 04:02:23.302705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.462 [2024-07-14 04:02:23.302882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.462 [2024-07-14 04:02:23.302907] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.462 qpair failed and we were unable to recover it. 00:30:04.462 [2024-07-14 04:02:23.303056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.462 [2024-07-14 04:02:23.303235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.462 [2024-07-14 04:02:23.303264] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.462 qpair failed and we were unable to recover it. 00:30:04.462 [2024-07-14 04:02:23.303467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.462 [2024-07-14 04:02:23.303675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.462 [2024-07-14 04:02:23.303699] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.462 qpair failed and we were unable to recover it. 
00:30:04.462 [2024-07-14 04:02:23.303904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.462 [2024-07-14 04:02:23.304087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.462 [2024-07-14 04:02:23.304112] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.462 qpair failed and we were unable to recover it. 00:30:04.462 [2024-07-14 04:02:23.304267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.462 [2024-07-14 04:02:23.304448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.462 [2024-07-14 04:02:23.304472] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.462 qpair failed and we were unable to recover it. 00:30:04.462 [2024-07-14 04:02:23.304677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.462 [2024-07-14 04:02:23.304825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.462 [2024-07-14 04:02:23.304850] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.462 qpair failed and we were unable to recover it. 00:30:04.462 [2024-07-14 04:02:23.305012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.462 [2024-07-14 04:02:23.305192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.462 [2024-07-14 04:02:23.305217] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.462 qpair failed and we were unable to recover it. 00:30:04.462 [2024-07-14 04:02:23.305366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.462 [2024-07-14 04:02:23.305511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.462 [2024-07-14 04:02:23.305535] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.462 qpair failed and we were unable to recover it. 00:30:04.462 [2024-07-14 04:02:23.305706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.462 [2024-07-14 04:02:23.305881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.463 [2024-07-14 04:02:23.305906] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.463 qpair failed and we were unable to recover it. 00:30:04.463 [2024-07-14 04:02:23.306058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.463 [2024-07-14 04:02:23.306230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.463 [2024-07-14 04:02:23.306254] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.463 qpair failed and we were unable to recover it. 
00:30:04.463 [2024-07-14 04:02:23.306460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.463 [2024-07-14 04:02:23.306615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.463 [2024-07-14 04:02:23.306641] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.463 qpair failed and we were unable to recover it. 00:30:04.463 [2024-07-14 04:02:23.306846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.463 [2024-07-14 04:02:23.307018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.463 [2024-07-14 04:02:23.307047] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.463 qpair failed and we were unable to recover it. 00:30:04.463 [2024-07-14 04:02:23.307195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.463 [2024-07-14 04:02:23.307394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.463 [2024-07-14 04:02:23.307418] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.463 qpair failed and we were unable to recover it. 00:30:04.463 [2024-07-14 04:02:23.307571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.463 [2024-07-14 04:02:23.307751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.463 [2024-07-14 04:02:23.307776] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.463 qpair failed and we were unable to recover it. 00:30:04.463 [2024-07-14 04:02:23.307929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.463 [2024-07-14 04:02:23.308092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.463 [2024-07-14 04:02:23.308116] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.463 qpair failed and we were unable to recover it. 00:30:04.463 [2024-07-14 04:02:23.308273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.463 [2024-07-14 04:02:23.308448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.463 [2024-07-14 04:02:23.308473] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.463 qpair failed and we were unable to recover it. 00:30:04.463 [2024-07-14 04:02:23.308686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.463 [2024-07-14 04:02:23.308873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.463 [2024-07-14 04:02:23.308898] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.463 qpair failed and we were unable to recover it. 
00:30:04.463 [2024-07-14 04:02:23.309079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.463 [2024-07-14 04:02:23.309226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.463 [2024-07-14 04:02:23.309250] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.463 qpair failed and we were unable to recover it. 00:30:04.463 [2024-07-14 04:02:23.309454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.463 [2024-07-14 04:02:23.309632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.463 [2024-07-14 04:02:23.309658] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.463 qpair failed and we were unable to recover it. 00:30:04.463 [2024-07-14 04:02:23.309834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.463 [2024-07-14 04:02:23.310049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.463 [2024-07-14 04:02:23.310075] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.463 qpair failed and we were unable to recover it. 00:30:04.463 [2024-07-14 04:02:23.310255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.463 [2024-07-14 04:02:23.310446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.463 [2024-07-14 04:02:23.310471] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.463 qpair failed and we were unable to recover it. 00:30:04.463 [2024-07-14 04:02:23.310627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.463 [2024-07-14 04:02:23.310831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.463 [2024-07-14 04:02:23.310855] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.463 qpair failed and we were unable to recover it. 00:30:04.463 [2024-07-14 04:02:23.311048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.463 [2024-07-14 04:02:23.311224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.463 [2024-07-14 04:02:23.311248] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.463 qpair failed and we were unable to recover it. 00:30:04.463 [2024-07-14 04:02:23.311396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.463 [2024-07-14 04:02:23.311580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.463 [2024-07-14 04:02:23.311604] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.463 qpair failed and we were unable to recover it. 
00:30:04.463 [2024-07-14 04:02:23.311780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.463 [2024-07-14 04:02:23.311955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.463 [2024-07-14 04:02:23.311980] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.463 qpair failed and we were unable to recover it. 00:30:04.463 [2024-07-14 04:02:23.312127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.463 [2024-07-14 04:02:23.312295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.463 [2024-07-14 04:02:23.312320] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.463 qpair failed and we were unable to recover it. 00:30:04.463 [2024-07-14 04:02:23.312519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.463 [2024-07-14 04:02:23.312717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.463 [2024-07-14 04:02:23.312741] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.463 qpair failed and we were unable to recover it. 00:30:04.463 [2024-07-14 04:02:23.312932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.463 [2024-07-14 04:02:23.313089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.463 [2024-07-14 04:02:23.313114] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.463 qpair failed and we were unable to recover it. 00:30:04.463 [2024-07-14 04:02:23.313265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.463 [2024-07-14 04:02:23.313434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.463 [2024-07-14 04:02:23.313458] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.463 qpair failed and we were unable to recover it. 00:30:04.463 [2024-07-14 04:02:23.313632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.463 [2024-07-14 04:02:23.313805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.463 [2024-07-14 04:02:23.313828] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.463 qpair failed and we were unable to recover it. 00:30:04.463 [2024-07-14 04:02:23.313990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.463 [2024-07-14 04:02:23.314166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.463 [2024-07-14 04:02:23.314190] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.463 qpair failed and we were unable to recover it. 
00:30:04.463 [2024-07-14 04:02:23.314369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.463 [2024-07-14 04:02:23.314509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.463 [2024-07-14 04:02:23.314534] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.463 qpair failed and we were unable to recover it. 00:30:04.463 [2024-07-14 04:02:23.314717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.463 [2024-07-14 04:02:23.314894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.463 [2024-07-14 04:02:23.314919] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.463 qpair failed and we were unable to recover it. 00:30:04.463 [2024-07-14 04:02:23.315097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.463 [2024-07-14 04:02:23.315278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.463 [2024-07-14 04:02:23.315303] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.463 qpair failed and we were unable to recover it. 00:30:04.463 [2024-07-14 04:02:23.315480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.463 [2024-07-14 04:02:23.315682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.463 [2024-07-14 04:02:23.315707] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.463 qpair failed and we were unable to recover it. 00:30:04.463 [2024-07-14 04:02:23.315887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.463 [2024-07-14 04:02:23.316027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.463 [2024-07-14 04:02:23.316052] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.463 qpair failed and we were unable to recover it. 00:30:04.463 [2024-07-14 04:02:23.316212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.463 [2024-07-14 04:02:23.316413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.463 [2024-07-14 04:02:23.316437] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.463 qpair failed and we were unable to recover it. 00:30:04.463 [2024-07-14 04:02:23.316620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.463 [2024-07-14 04:02:23.316796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.463 [2024-07-14 04:02:23.316820] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.463 qpair failed and we were unable to recover it. 
00:30:04.463 [2024-07-14 04:02:23.316993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.463 [2024-07-14 04:02:23.317198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.463 [2024-07-14 04:02:23.317223] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.463 qpair failed and we were unable to recover it. 00:30:04.463 [2024-07-14 04:02:23.317408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.463 [2024-07-14 04:02:23.317580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.463 [2024-07-14 04:02:23.317604] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.463 qpair failed and we were unable to recover it. 00:30:04.463 [2024-07-14 04:02:23.317784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.463 [2024-07-14 04:02:23.317938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.463 [2024-07-14 04:02:23.317963] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.463 qpair failed and we were unable to recover it. 00:30:04.463 [2024-07-14 04:02:23.318143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.463 [2024-07-14 04:02:23.318344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.463 [2024-07-14 04:02:23.318368] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.463 qpair failed and we were unable to recover it. 00:30:04.463 [2024-07-14 04:02:23.318546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.463 [2024-07-14 04:02:23.318726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.463 [2024-07-14 04:02:23.318751] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.463 qpair failed and we were unable to recover it. 00:30:04.463 [2024-07-14 04:02:23.318941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.463 [2024-07-14 04:02:23.319118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.463 [2024-07-14 04:02:23.319142] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.463 qpair failed and we were unable to recover it. 00:30:04.463 [2024-07-14 04:02:23.319326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.463 [2024-07-14 04:02:23.319507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.463 [2024-07-14 04:02:23.319531] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.463 qpair failed and we were unable to recover it. 
00:30:04.463 [2024-07-14 04:02:23.319679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.463 [2024-07-14 04:02:23.319880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.463 [2024-07-14 04:02:23.319905] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.463 qpair failed and we were unable to recover it. 00:30:04.463 [2024-07-14 04:02:23.320079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.463 [2024-07-14 04:02:23.320250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.463 [2024-07-14 04:02:23.320274] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.463 qpair failed and we were unable to recover it. 00:30:04.463 [2024-07-14 04:02:23.320480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.463 [2024-07-14 04:02:23.320649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.463 [2024-07-14 04:02:23.320673] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.463 qpair failed and we were unable to recover it. 00:30:04.463 [2024-07-14 04:02:23.320856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.463 [2024-07-14 04:02:23.321040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.463 [2024-07-14 04:02:23.321065] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.463 qpair failed and we were unable to recover it. 00:30:04.463 [2024-07-14 04:02:23.321273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.463 [2024-07-14 04:02:23.321449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.463 [2024-07-14 04:02:23.321473] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.463 qpair failed and we were unable to recover it. 00:30:04.463 [2024-07-14 04:02:23.321649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.463 [2024-07-14 04:02:23.321818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.463 [2024-07-14 04:02:23.321843] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.463 qpair failed and we were unable to recover it. 00:30:04.463 [2024-07-14 04:02:23.322010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.463 [2024-07-14 04:02:23.322185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.463 [2024-07-14 04:02:23.322209] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.463 qpair failed and we were unable to recover it. 
00:30:04.463 [2024-07-14 04:02:23.322408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.463 [2024-07-14 04:02:23.322578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.463 [2024-07-14 04:02:23.322606] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.463 qpair failed and we were unable to recover it. 00:30:04.463 [2024-07-14 04:02:23.322791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.463 [2024-07-14 04:02:23.322942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.463 [2024-07-14 04:02:23.322968] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.463 qpair failed and we were unable to recover it. 00:30:04.463 [2024-07-14 04:02:23.323119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.463 [2024-07-14 04:02:23.323266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.463 [2024-07-14 04:02:23.323290] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.463 qpair failed and we were unable to recover it. 00:30:04.463 [2024-07-14 04:02:23.323465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.463 [2024-07-14 04:02:23.323639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.463 [2024-07-14 04:02:23.323663] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.463 qpair failed and we were unable to recover it. 00:30:04.463 [2024-07-14 04:02:23.323855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.463 [2024-07-14 04:02:23.324065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.463 [2024-07-14 04:02:23.324090] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.463 qpair failed and we were unable to recover it. 00:30:04.463 [2024-07-14 04:02:23.324249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.463 [2024-07-14 04:02:23.324403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.463 [2024-07-14 04:02:23.324427] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.463 qpair failed and we were unable to recover it. 00:30:04.463 [2024-07-14 04:02:23.324605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.463 [2024-07-14 04:02:23.324807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.463 [2024-07-14 04:02:23.324831] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.463 qpair failed and we were unable to recover it. 
00:30:04.463 [2024-07-14 04:02:23.325017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.463 [2024-07-14 04:02:23.325220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.463 [2024-07-14 04:02:23.325244] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.463 qpair failed and we were unable to recover it. 00:30:04.463 [2024-07-14 04:02:23.325398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.463 [2024-07-14 04:02:23.325598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.463 [2024-07-14 04:02:23.325622] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.463 qpair failed and we were unable to recover it. 00:30:04.464 [2024-07-14 04:02:23.325801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.464 [2024-07-14 04:02:23.325949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.464 [2024-07-14 04:02:23.325974] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.464 qpair failed and we were unable to recover it. 00:30:04.464 [2024-07-14 04:02:23.326134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.464 [2024-07-14 04:02:23.326310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.464 [2024-07-14 04:02:23.326334] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.464 qpair failed and we were unable to recover it. 00:30:04.464 [2024-07-14 04:02:23.326545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.464 [2024-07-14 04:02:23.326696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.464 [2024-07-14 04:02:23.326720] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.464 qpair failed and we were unable to recover it. 00:30:04.464 [2024-07-14 04:02:23.326900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.464 [2024-07-14 04:02:23.327048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.464 [2024-07-14 04:02:23.327072] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.464 qpair failed and we were unable to recover it. 00:30:04.464 [2024-07-14 04:02:23.327276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.464 [2024-07-14 04:02:23.327452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.464 [2024-07-14 04:02:23.327476] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.464 qpair failed and we were unable to recover it. 
00:30:04.464 [2024-07-14 04:02:23.327625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.464 [2024-07-14 04:02:23.327827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.464 [2024-07-14 04:02:23.327851] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.464 qpair failed and we were unable to recover it. 00:30:04.464 [2024-07-14 04:02:23.328044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.464 [2024-07-14 04:02:23.328258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.464 [2024-07-14 04:02:23.328282] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.464 qpair failed and we were unable to recover it. 00:30:04.464 [2024-07-14 04:02:23.328432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.464 [2024-07-14 04:02:23.328604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.464 [2024-07-14 04:02:23.328628] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.464 qpair failed and we were unable to recover it. 00:30:04.464 [2024-07-14 04:02:23.328810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.464 [2024-07-14 04:02:23.328997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.464 [2024-07-14 04:02:23.329022] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.464 qpair failed and we were unable to recover it. 00:30:04.464 [2024-07-14 04:02:23.329171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.464 [2024-07-14 04:02:23.329344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.464 [2024-07-14 04:02:23.329368] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.464 qpair failed and we were unable to recover it. 00:30:04.464 [2024-07-14 04:02:23.329550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.464 [2024-07-14 04:02:23.329702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.464 [2024-07-14 04:02:23.329726] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.464 qpair failed and we were unable to recover it. 00:30:04.464 [2024-07-14 04:02:23.329930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.464 [2024-07-14 04:02:23.330083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.464 [2024-07-14 04:02:23.330108] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.464 qpair failed and we were unable to recover it. 
00:30:04.464 [2024-07-14 04:02:23.330313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.464 [2024-07-14 04:02:23.330486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.464 [2024-07-14 04:02:23.330510] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.464 qpair failed and we were unable to recover it. 00:30:04.464 [2024-07-14 04:02:23.330671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.464 [2024-07-14 04:02:23.330890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.464 [2024-07-14 04:02:23.330916] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.464 qpair failed and we were unable to recover it. 00:30:04.464 [2024-07-14 04:02:23.331076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.464 [2024-07-14 04:02:23.331254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.464 [2024-07-14 04:02:23.331278] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.464 qpair failed and we were unable to recover it. 00:30:04.464 [2024-07-14 04:02:23.331454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.464 [2024-07-14 04:02:23.331621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.464 [2024-07-14 04:02:23.331645] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.464 qpair failed and we were unable to recover it. 00:30:04.464 [2024-07-14 04:02:23.331795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.464 [2024-07-14 04:02:23.331975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.464 [2024-07-14 04:02:23.332000] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.464 qpair failed and we were unable to recover it. 00:30:04.464 [2024-07-14 04:02:23.332151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.464 [2024-07-14 04:02:23.332352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.464 [2024-07-14 04:02:23.332376] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.464 qpair failed and we were unable to recover it. 00:30:04.464 [2024-07-14 04:02:23.332542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.464 [2024-07-14 04:02:23.332742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.464 [2024-07-14 04:02:23.332766] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.464 qpair failed and we were unable to recover it. 
00:30:04.464 [2024-07-14 04:02:23.332969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.464 [2024-07-14 04:02:23.333121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.464 [2024-07-14 04:02:23.333146] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.464 qpair failed and we were unable to recover it. 00:30:04.464 [2024-07-14 04:02:23.333318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.464 [2024-07-14 04:02:23.333496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.464 [2024-07-14 04:02:23.333521] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.464 qpair failed and we were unable to recover it. 00:30:04.464 [2024-07-14 04:02:23.333696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.464 [2024-07-14 04:02:23.333840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.464 [2024-07-14 04:02:23.333872] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.464 qpair failed and we were unable to recover it. 00:30:04.464 [2024-07-14 04:02:23.334082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.464 [2024-07-14 04:02:23.334231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.464 [2024-07-14 04:02:23.334255] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.464 qpair failed and we were unable to recover it. 00:30:04.464 [2024-07-14 04:02:23.334436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.464 [2024-07-14 04:02:23.334592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.464 [2024-07-14 04:02:23.334617] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.464 qpair failed and we were unable to recover it. 00:30:04.464 [2024-07-14 04:02:23.334775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.464 [2024-07-14 04:02:23.334951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.464 [2024-07-14 04:02:23.334977] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.464 qpair failed and we were unable to recover it. 00:30:04.464 [2024-07-14 04:02:23.335180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.464 [2024-07-14 04:02:23.335356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.464 [2024-07-14 04:02:23.335380] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.464 qpair failed and we were unable to recover it. 
00:30:04.464 [2024-07-14 04:02:23.335533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.464 [2024-07-14 04:02:23.335735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.464 [2024-07-14 04:02:23.335758] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.464 qpair failed and we were unable to recover it. 00:30:04.464 [2024-07-14 04:02:23.335908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.464 [2024-07-14 04:02:23.336082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.464 [2024-07-14 04:02:23.336107] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.464 qpair failed and we were unable to recover it. 00:30:04.464 [2024-07-14 04:02:23.336314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.464 [2024-07-14 04:02:23.336469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.464 [2024-07-14 04:02:23.336495] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.464 qpair failed and we were unable to recover it. 00:30:04.464 [2024-07-14 04:02:23.336670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.464 [2024-07-14 04:02:23.336818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.464 [2024-07-14 04:02:23.336842] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.464 qpair failed and we were unable to recover it. 00:30:04.464 [2024-07-14 04:02:23.337054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.464 [2024-07-14 04:02:23.337259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.464 [2024-07-14 04:02:23.337284] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.464 qpair failed and we were unable to recover it. 00:30:04.464 [2024-07-14 04:02:23.337461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.464 [2024-07-14 04:02:23.337605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.464 [2024-07-14 04:02:23.337629] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.464 qpair failed and we were unable to recover it. 00:30:04.464 [2024-07-14 04:02:23.337832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.464 [2024-07-14 04:02:23.338011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.464 [2024-07-14 04:02:23.338036] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.464 qpair failed and we were unable to recover it. 
00:30:04.464 [2024-07-14 04:02:23.338191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.464 [2024-07-14 04:02:23.338344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.464 [2024-07-14 04:02:23.338368] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.464 qpair failed and we were unable to recover it. 00:30:04.464 [2024-07-14 04:02:23.338517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.464 [2024-07-14 04:02:23.338697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.464 [2024-07-14 04:02:23.338721] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.464 qpair failed and we were unable to recover it. 00:30:04.464 [2024-07-14 04:02:23.338900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.464 [2024-07-14 04:02:23.339102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.464 [2024-07-14 04:02:23.339126] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.464 qpair failed and we were unable to recover it. 00:30:04.464 [2024-07-14 04:02:23.339305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.464 [2024-07-14 04:02:23.339482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.464 [2024-07-14 04:02:23.339506] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.464 qpair failed and we were unable to recover it. 00:30:04.464 [2024-07-14 04:02:23.339687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.464 [2024-07-14 04:02:23.339863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.464 [2024-07-14 04:02:23.339894] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.464 qpair failed and we were unable to recover it. 00:30:04.464 [2024-07-14 04:02:23.340094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.464 [2024-07-14 04:02:23.340269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.464 [2024-07-14 04:02:23.340293] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.464 qpair failed and we were unable to recover it. 00:30:04.464 [2024-07-14 04:02:23.340492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.464 [2024-07-14 04:02:23.340671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.464 [2024-07-14 04:02:23.340695] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.464 qpair failed and we were unable to recover it. 
00:30:04.464 [2024-07-14 04:02:23.340877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.464 [2024-07-14 04:02:23.341057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.464 [2024-07-14 04:02:23.341081] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.464 qpair failed and we were unable to recover it. 00:30:04.464 [2024-07-14 04:02:23.341265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.464 [2024-07-14 04:02:23.341463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.464 [2024-07-14 04:02:23.341487] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.464 qpair failed and we were unable to recover it. 00:30:04.464 [2024-07-14 04:02:23.341664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.464 [2024-07-14 04:02:23.341841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.464 [2024-07-14 04:02:23.341875] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.464 qpair failed and we were unable to recover it. 00:30:04.464 [2024-07-14 04:02:23.342056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.464 [2024-07-14 04:02:23.342215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.464 [2024-07-14 04:02:23.342239] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.464 qpair failed and we were unable to recover it. 00:30:04.464 [2024-07-14 04:02:23.342419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.464 [2024-07-14 04:02:23.342573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.464 [2024-07-14 04:02:23.342597] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.464 qpair failed and we were unable to recover it. 00:30:04.464 [2024-07-14 04:02:23.342802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.464 [2024-07-14 04:02:23.343007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.464 [2024-07-14 04:02:23.343033] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.464 qpair failed and we were unable to recover it. 00:30:04.464 [2024-07-14 04:02:23.343189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.464 [2024-07-14 04:02:23.343367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.464 [2024-07-14 04:02:23.343393] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.464 qpair failed and we were unable to recover it. 
00:30:04.464 [2024-07-14 04:02:23.343602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.464 [2024-07-14 04:02:23.343755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.464 [2024-07-14 04:02:23.343780] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.464 qpair failed and we were unable to recover it. 00:30:04.464 [2024-07-14 04:02:23.343929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.464 [2024-07-14 04:02:23.344083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.464 [2024-07-14 04:02:23.344107] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.464 qpair failed and we were unable to recover it. 00:30:04.464 [2024-07-14 04:02:23.344313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.464 [2024-07-14 04:02:23.344491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.464 [2024-07-14 04:02:23.344515] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.464 qpair failed and we were unable to recover it. 00:30:04.464 [2024-07-14 04:02:23.344726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.464 [2024-07-14 04:02:23.344881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.464 [2024-07-14 04:02:23.344907] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.464 qpair failed and we were unable to recover it. 00:30:04.464 [2024-07-14 04:02:23.345069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.464 [2024-07-14 04:02:23.345271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.464 [2024-07-14 04:02:23.345295] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.464 qpair failed and we were unable to recover it. 00:30:04.464 [2024-07-14 04:02:23.345445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.464 [2024-07-14 04:02:23.345619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.464 [2024-07-14 04:02:23.345644] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.464 qpair failed and we were unable to recover it. 00:30:04.464 [2024-07-14 04:02:23.345830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.464 [2024-07-14 04:02:23.346003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.464 [2024-07-14 04:02:23.346029] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.464 qpair failed and we were unable to recover it. 
00:30:04.464 [2024-07-14 04:02:23.346205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.464 [2024-07-14 04:02:23.346381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.464 [2024-07-14 04:02:23.346406] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.464 qpair failed and we were unable to recover it. 00:30:04.464 [2024-07-14 04:02:23.346555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.464 [2024-07-14 04:02:23.346756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.464 [2024-07-14 04:02:23.346781] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.464 qpair failed and we were unable to recover it. 00:30:04.464 [2024-07-14 04:02:23.346963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.464 [2024-07-14 04:02:23.347116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.464 [2024-07-14 04:02:23.347141] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.465 qpair failed and we were unable to recover it. 00:30:04.465 [2024-07-14 04:02:23.347345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.465 [2024-07-14 04:02:23.347493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.465 [2024-07-14 04:02:23.347517] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.465 qpair failed and we were unable to recover it. 00:30:04.465 [2024-07-14 04:02:23.347672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.465 [2024-07-14 04:02:23.347852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.465 [2024-07-14 04:02:23.347884] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.465 qpair failed and we were unable to recover it. 00:30:04.465 [2024-07-14 04:02:23.348037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.465 [2024-07-14 04:02:23.348238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.465 [2024-07-14 04:02:23.348263] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.465 qpair failed and we were unable to recover it. 00:30:04.465 [2024-07-14 04:02:23.348442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.465 [2024-07-14 04:02:23.348623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.465 [2024-07-14 04:02:23.348648] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.465 qpair failed and we were unable to recover it. 
00:30:04.465 [2024-07-14 04:02:23.348796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.465 [2024-07-14 04:02:23.348968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.465 [2024-07-14 04:02:23.348994] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.465 qpair failed and we were unable to recover it. 00:30:04.465 [2024-07-14 04:02:23.349165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.465 [2024-07-14 04:02:23.349345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.465 [2024-07-14 04:02:23.349370] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.465 qpair failed and we were unable to recover it. 00:30:04.465 [2024-07-14 04:02:23.349560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.465 [2024-07-14 04:02:23.349709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.465 [2024-07-14 04:02:23.349733] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.465 qpair failed and we were unable to recover it. 00:30:04.465 [2024-07-14 04:02:23.349915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.465 [2024-07-14 04:02:23.350122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.465 [2024-07-14 04:02:23.350146] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.465 qpair failed and we were unable to recover it. 00:30:04.465 [2024-07-14 04:02:23.350297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.465 [2024-07-14 04:02:23.350470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.465 [2024-07-14 04:02:23.350494] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.465 qpair failed and we were unable to recover it. 00:30:04.465 [2024-07-14 04:02:23.350672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.465 [2024-07-14 04:02:23.350878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.465 [2024-07-14 04:02:23.350903] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.465 qpair failed and we were unable to recover it. 00:30:04.465 [2024-07-14 04:02:23.351052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.465 [2024-07-14 04:02:23.351229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.465 [2024-07-14 04:02:23.351254] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.465 qpair failed and we were unable to recover it. 
00:30:04.465 [2024-07-14 04:02:23.351426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.465 [2024-07-14 04:02:23.351608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.465 [2024-07-14 04:02:23.351632] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.465 qpair failed and we were unable to recover it. 00:30:04.465 [2024-07-14 04:02:23.351777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.465 [2024-07-14 04:02:23.351951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.465 [2024-07-14 04:02:23.351977] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.465 qpair failed and we were unable to recover it. 00:30:04.465 [2024-07-14 04:02:23.352136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.465 [2024-07-14 04:02:23.352316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.465 [2024-07-14 04:02:23.352340] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.465 qpair failed and we were unable to recover it. 00:30:04.465 [2024-07-14 04:02:23.352525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.465 [2024-07-14 04:02:23.352705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.465 [2024-07-14 04:02:23.352729] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.465 qpair failed and we were unable to recover it. 00:30:04.465 [2024-07-14 04:02:23.352908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.465 [2024-07-14 04:02:23.353082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.465 [2024-07-14 04:02:23.353107] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.465 qpair failed and we were unable to recover it. 00:30:04.465 [2024-07-14 04:02:23.353285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.465 [2024-07-14 04:02:23.353468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.465 [2024-07-14 04:02:23.353492] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.465 qpair failed and we were unable to recover it. 00:30:04.465 [2024-07-14 04:02:23.353672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.465 [2024-07-14 04:02:23.353878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.465 [2024-07-14 04:02:23.353904] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.465 qpair failed and we were unable to recover it. 
00:30:04.465 [2024-07-14 04:02:23.354053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.465 [2024-07-14 04:02:23.354201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.465 [2024-07-14 04:02:23.354225] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.465 qpair failed and we were unable to recover it. 00:30:04.465 [2024-07-14 04:02:23.354402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.465 [2024-07-14 04:02:23.354555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.465 [2024-07-14 04:02:23.354581] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.465 qpair failed and we were unable to recover it. 00:30:04.465 [2024-07-14 04:02:23.354764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.465 [2024-07-14 04:02:23.354956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.465 [2024-07-14 04:02:23.354981] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.465 qpair failed and we were unable to recover it. 00:30:04.465 [2024-07-14 04:02:23.355134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.465 [2024-07-14 04:02:23.355320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.465 [2024-07-14 04:02:23.355346] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.465 qpair failed and we were unable to recover it. 00:30:04.465 [2024-07-14 04:02:23.355531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.465 [2024-07-14 04:02:23.355710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.465 [2024-07-14 04:02:23.355734] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.465 qpair failed and we were unable to recover it. 00:30:04.465 [2024-07-14 04:02:23.355937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.465 [2024-07-14 04:02:23.356106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.465 [2024-07-14 04:02:23.356131] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.465 qpair failed and we were unable to recover it. 00:30:04.465 [2024-07-14 04:02:23.356289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.465 [2024-07-14 04:02:23.356468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.465 [2024-07-14 04:02:23.356492] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.465 qpair failed and we were unable to recover it. 
00:30:04.465 [2024-07-14 04:02:23.356650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.465 [2024-07-14 04:02:23.356824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.465 [2024-07-14 04:02:23.356848] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.465 qpair failed and we were unable to recover it. 00:30:04.465 [2024-07-14 04:02:23.357038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.465 [2024-07-14 04:02:23.357219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.465 [2024-07-14 04:02:23.357248] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.465 qpair failed and we were unable to recover it. 00:30:04.465 [2024-07-14 04:02:23.357432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.465 [2024-07-14 04:02:23.357609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.465 [2024-07-14 04:02:23.357633] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.465 qpair failed and we were unable to recover it. 00:30:04.465 [2024-07-14 04:02:23.357785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.465 [2024-07-14 04:02:23.357936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.465 [2024-07-14 04:02:23.357963] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.465 qpair failed and we were unable to recover it. 00:30:04.465 [2024-07-14 04:02:23.358146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.465 [2024-07-14 04:02:23.358320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.465 [2024-07-14 04:02:23.358344] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.465 qpair failed and we were unable to recover it. 00:30:04.465 [2024-07-14 04:02:23.358524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.465 [2024-07-14 04:02:23.358700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.465 [2024-07-14 04:02:23.358724] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.465 qpair failed and we were unable to recover it. 00:30:04.465 [2024-07-14 04:02:23.358904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.465 [2024-07-14 04:02:23.359077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.465 [2024-07-14 04:02:23.359102] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.465 qpair failed and we were unable to recover it. 
00:30:04.465 [2024-07-14 04:02:23.359253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.465 [2024-07-14 04:02:23.359426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.465 [2024-07-14 04:02:23.359451] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.465 qpair failed and we were unable to recover it. 00:30:04.465 [2024-07-14 04:02:23.359605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.465 [2024-07-14 04:02:23.359814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.465 [2024-07-14 04:02:23.359838] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.465 qpair failed and we were unable to recover it. 00:30:04.465 [2024-07-14 04:02:23.359998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.465 [2024-07-14 04:02:23.360183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.465 [2024-07-14 04:02:23.360207] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.465 qpair failed and we were unable to recover it. 00:30:04.465 [2024-07-14 04:02:23.360385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.465 [2024-07-14 04:02:23.360558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.465 [2024-07-14 04:02:23.360582] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.465 qpair failed and we were unable to recover it. 00:30:04.465 [2024-07-14 04:02:23.360757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.465 [2024-07-14 04:02:23.360936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.465 [2024-07-14 04:02:23.360965] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.465 qpair failed and we were unable to recover it. 00:30:04.465 [2024-07-14 04:02:23.361146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.465 [2024-07-14 04:02:23.361322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.465 [2024-07-14 04:02:23.361347] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.465 qpair failed and we were unable to recover it. 00:30:04.465 [2024-07-14 04:02:23.361502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.465 [2024-07-14 04:02:23.361713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.465 [2024-07-14 04:02:23.361737] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.465 qpair failed and we were unable to recover it. 
00:30:04.465 [2024-07-14 04:02:23.361894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.465 [2024-07-14 04:02:23.362073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.465 [2024-07-14 04:02:23.362098] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.465 qpair failed and we were unable to recover it. 00:30:04.465 [2024-07-14 04:02:23.362284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.465 [2024-07-14 04:02:23.362464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.465 [2024-07-14 04:02:23.362490] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.465 qpair failed and we were unable to recover it. 00:30:04.465 [2024-07-14 04:02:23.362673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.465 [2024-07-14 04:02:23.362847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.465 [2024-07-14 04:02:23.362878] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.465 qpair failed and we were unable to recover it. 00:30:04.465 [2024-07-14 04:02:23.363068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.465 [2024-07-14 04:02:23.363241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.465 [2024-07-14 04:02:23.363266] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.465 qpair failed and we were unable to recover it. 00:30:04.465 [2024-07-14 04:02:23.363441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.465 [2024-07-14 04:02:23.363645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.465 [2024-07-14 04:02:23.363669] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.465 qpair failed and we were unable to recover it. 00:30:04.465 [2024-07-14 04:02:23.363878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.465 [2024-07-14 04:02:23.364035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.465 [2024-07-14 04:02:23.364059] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.465 qpair failed and we were unable to recover it. 00:30:04.465 [2024-07-14 04:02:23.364249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.465 [2024-07-14 04:02:23.364451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.465 [2024-07-14 04:02:23.364476] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.465 qpair failed and we were unable to recover it. 
00:30:04.465 [2024-07-14 04:02:23.364681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.465 [2024-07-14 04:02:23.364858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.465 [2024-07-14 04:02:23.364891] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.465 qpair failed and we were unable to recover it. 00:30:04.465 [2024-07-14 04:02:23.365078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.465 [2024-07-14 04:02:23.365279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.465 [2024-07-14 04:02:23.365304] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.465 qpair failed and we were unable to recover it. 00:30:04.465 [2024-07-14 04:02:23.365480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.465 [2024-07-14 04:02:23.365633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.465 [2024-07-14 04:02:23.365657] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.465 qpair failed and we were unable to recover it. 00:30:04.465 [2024-07-14 04:02:23.365834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.465 [2024-07-14 04:02:23.366046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.465 [2024-07-14 04:02:23.366071] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.465 qpair failed and we were unable to recover it. 00:30:04.465 [2024-07-14 04:02:23.366252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.465 [2024-07-14 04:02:23.366405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.465 [2024-07-14 04:02:23.366429] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.465 qpair failed and we were unable to recover it. 00:30:04.465 [2024-07-14 04:02:23.366634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.465 [2024-07-14 04:02:23.366780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.465 [2024-07-14 04:02:23.366804] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.465 qpair failed and we were unable to recover it. 00:30:04.465 [2024-07-14 04:02:23.366965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.465 [2024-07-14 04:02:23.367115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.465 [2024-07-14 04:02:23.367140] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.465 qpair failed and we were unable to recover it. 
00:30:04.465 [2024-07-14 04:02:23.367317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.465 [2024-07-14 04:02:23.367520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.465 [2024-07-14 04:02:23.367545] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.465 qpair failed and we were unable to recover it. 00:30:04.465 [2024-07-14 04:02:23.367715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.465 [2024-07-14 04:02:23.367873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.465 [2024-07-14 04:02:23.367911] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.465 qpair failed and we were unable to recover it. 00:30:04.465 [2024-07-14 04:02:23.368102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.465 [2024-07-14 04:02:23.368257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.465 [2024-07-14 04:02:23.368281] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.465 qpair failed and we were unable to recover it. 00:30:04.465 [2024-07-14 04:02:23.368497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.465 [2024-07-14 04:02:23.368679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.465 [2024-07-14 04:02:23.368703] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.466 qpair failed and we were unable to recover it. 00:30:04.466 [2024-07-14 04:02:23.368894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.466 [2024-07-14 04:02:23.369099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.466 [2024-07-14 04:02:23.369123] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.466 qpair failed and we were unable to recover it. 00:30:04.466 [2024-07-14 04:02:23.369301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.466 [2024-07-14 04:02:23.369504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.466 [2024-07-14 04:02:23.369529] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.466 qpair failed and we were unable to recover it. 00:30:04.466 [2024-07-14 04:02:23.369711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.466 [2024-07-14 04:02:23.369891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.466 [2024-07-14 04:02:23.369916] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.466 qpair failed and we were unable to recover it. 
00:30:04.466 [2024-07-14 04:02:23.370118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.466 [2024-07-14 04:02:23.370317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.466 [2024-07-14 04:02:23.370341] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.466 qpair failed and we were unable to recover it. 00:30:04.466 [2024-07-14 04:02:23.370488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.466 [2024-07-14 04:02:23.370662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.466 [2024-07-14 04:02:23.370686] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.466 qpair failed and we were unable to recover it. 00:30:04.466 [2024-07-14 04:02:23.370863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.466 [2024-07-14 04:02:23.371053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.466 [2024-07-14 04:02:23.371077] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.466 qpair failed and we were unable to recover it. 00:30:04.466 [2024-07-14 04:02:23.371279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.466 [2024-07-14 04:02:23.371422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.466 [2024-07-14 04:02:23.371446] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.466 qpair failed and we were unable to recover it. 00:30:04.466 [2024-07-14 04:02:23.371622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.466 [2024-07-14 04:02:23.371820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.466 [2024-07-14 04:02:23.371845] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.466 qpair failed and we were unable to recover it. 00:30:04.466 [2024-07-14 04:02:23.372034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.466 [2024-07-14 04:02:23.372210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.466 [2024-07-14 04:02:23.372234] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.466 qpair failed and we were unable to recover it. 00:30:04.466 [2024-07-14 04:02:23.372385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.466 [2024-07-14 04:02:23.372589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.466 [2024-07-14 04:02:23.372612] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.466 qpair failed and we were unable to recover it. 
00:30:04.466 [2024-07-14 04:02:23.372793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.466 [2024-07-14 04:02:23.372975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.466 [2024-07-14 04:02:23.373001] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.466 qpair failed and we were unable to recover it. 00:30:04.466 [2024-07-14 04:02:23.373182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.466 [2024-07-14 04:02:23.373354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.466 [2024-07-14 04:02:23.373378] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.466 qpair failed and we were unable to recover it. 00:30:04.466 [2024-07-14 04:02:23.373559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.466 [2024-07-14 04:02:23.373740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.466 [2024-07-14 04:02:23.373764] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.466 qpair failed and we were unable to recover it. 00:30:04.466 [2024-07-14 04:02:23.373944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.466 [2024-07-14 04:02:23.374130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.466 [2024-07-14 04:02:23.374154] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.466 qpair failed and we were unable to recover it. 00:30:04.466 [2024-07-14 04:02:23.374329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.466 [2024-07-14 04:02:23.374505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.466 [2024-07-14 04:02:23.374529] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.466 qpair failed and we were unable to recover it. 00:30:04.466 [2024-07-14 04:02:23.374707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.466 [2024-07-14 04:02:23.374889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.466 [2024-07-14 04:02:23.374916] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.466 qpair failed and we were unable to recover it. 00:30:04.466 [2024-07-14 04:02:23.375098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.466 [2024-07-14 04:02:23.375283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.466 [2024-07-14 04:02:23.375307] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.466 qpair failed and we were unable to recover it. 
00:30:04.466 [2024-07-14 04:02:23.375468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.466 [2024-07-14 04:02:23.375655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.466 [2024-07-14 04:02:23.375679] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.466 qpair failed and we were unable to recover it. 00:30:04.466 [2024-07-14 04:02:23.375861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.466 [2024-07-14 04:02:23.376017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.466 [2024-07-14 04:02:23.376043] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.466 qpair failed and we were unable to recover it. 00:30:04.466 [2024-07-14 04:02:23.376202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.466 [2024-07-14 04:02:23.376376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.466 [2024-07-14 04:02:23.376400] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.466 qpair failed and we were unable to recover it. 00:30:04.466 [2024-07-14 04:02:23.376555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.466 [2024-07-14 04:02:23.376732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.466 [2024-07-14 04:02:23.376762] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.466 qpair failed and we were unable to recover it. 00:30:04.466 [2024-07-14 04:02:23.376926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.466 [2024-07-14 04:02:23.377103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.466 [2024-07-14 04:02:23.377127] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.466 qpair failed and we were unable to recover it. 00:30:04.466 [2024-07-14 04:02:23.377273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.466 [2024-07-14 04:02:23.377429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.466 [2024-07-14 04:02:23.377453] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.466 qpair failed and we were unable to recover it. 00:30:04.737 [2024-07-14 04:02:23.377628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.737 [2024-07-14 04:02:23.377799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.737 [2024-07-14 04:02:23.377825] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.737 qpair failed and we were unable to recover it. 
00:30:04.737 [2024-07-14 04:02:23.378012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.737 [2024-07-14 04:02:23.378222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.737 [2024-07-14 04:02:23.378247] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.737 qpair failed and we were unable to recover it. 00:30:04.737 [2024-07-14 04:02:23.378420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.737 [2024-07-14 04:02:23.378590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.737 [2024-07-14 04:02:23.378614] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.737 qpair failed and we were unable to recover it. 00:30:04.737 [2024-07-14 04:02:23.378778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.737 [2024-07-14 04:02:23.378958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.737 [2024-07-14 04:02:23.378984] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.737 qpair failed and we were unable to recover it. 00:30:04.737 [2024-07-14 04:02:23.379162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.737 [2024-07-14 04:02:23.379369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.737 [2024-07-14 04:02:23.379395] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.737 qpair failed and we were unable to recover it. 00:30:04.738 [2024-07-14 04:02:23.379614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.738 [2024-07-14 04:02:23.379797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.738 [2024-07-14 04:02:23.379823] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.738 qpair failed and we were unable to recover it. 00:30:04.738 [2024-07-14 04:02:23.380012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.738 [2024-07-14 04:02:23.380192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.738 [2024-07-14 04:02:23.380218] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.738 qpair failed and we were unable to recover it. 00:30:04.738 [2024-07-14 04:02:23.380394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.738 [2024-07-14 04:02:23.380575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.738 [2024-07-14 04:02:23.380600] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.738 qpair failed and we were unable to recover it. 
00:30:04.738 [2024-07-14 04:02:23.380758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.738 [2024-07-14 04:02:23.380957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.738 [2024-07-14 04:02:23.380982] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.738 qpair failed and we were unable to recover it. 00:30:04.738 [2024-07-14 04:02:23.381160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.738 [2024-07-14 04:02:23.381347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.738 [2024-07-14 04:02:23.381372] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.738 qpair failed and we were unable to recover it. 00:30:04.738 [2024-07-14 04:02:23.381573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.738 [2024-07-14 04:02:23.381728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.738 [2024-07-14 04:02:23.381754] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.738 qpair failed and we were unable to recover it. 00:30:04.738 [2024-07-14 04:02:23.381944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.738 [2024-07-14 04:02:23.382130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.738 [2024-07-14 04:02:23.382154] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.738 qpair failed and we were unable to recover it. 00:30:04.738 [2024-07-14 04:02:23.382307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.738 [2024-07-14 04:02:23.382458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.738 [2024-07-14 04:02:23.382484] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.738 qpair failed and we were unable to recover it. 00:30:04.738 [2024-07-14 04:02:23.382668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.738 [2024-07-14 04:02:23.382876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.738 [2024-07-14 04:02:23.382901] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.738 qpair failed and we were unable to recover it. 00:30:04.738 [2024-07-14 04:02:23.383066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.738 [2024-07-14 04:02:23.383275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.738 [2024-07-14 04:02:23.383300] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.738 qpair failed and we were unable to recover it. 
00:30:04.738 [2024-07-14 04:02:23.383481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.738 [2024-07-14 04:02:23.383652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.738 [2024-07-14 04:02:23.383676] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.738 qpair failed and we were unable to recover it. 00:30:04.738 [2024-07-14 04:02:23.383854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.738 [2024-07-14 04:02:23.384039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.738 [2024-07-14 04:02:23.384064] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.738 qpair failed and we were unable to recover it. 00:30:04.738 [2024-07-14 04:02:23.384221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.738 [2024-07-14 04:02:23.384397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.738 [2024-07-14 04:02:23.384421] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.738 qpair failed and we were unable to recover it. 00:30:04.738 [2024-07-14 04:02:23.384586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.738 [2024-07-14 04:02:23.384760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.738 [2024-07-14 04:02:23.384785] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.738 qpair failed and we were unable to recover it. 00:30:04.738 [2024-07-14 04:02:23.384964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.738 [2024-07-14 04:02:23.385163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.738 [2024-07-14 04:02:23.385188] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.738 qpair failed and we were unable to recover it. 00:30:04.738 [2024-07-14 04:02:23.385366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.738 [2024-07-14 04:02:23.385565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.738 [2024-07-14 04:02:23.385589] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.738 qpair failed and we were unable to recover it. 00:30:04.738 [2024-07-14 04:02:23.385765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.738 [2024-07-14 04:02:23.385922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.738 [2024-07-14 04:02:23.385949] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.738 qpair failed and we were unable to recover it. 
00:30:04.738 [2024-07-14 04:02:23.386104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.738 [2024-07-14 04:02:23.386283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.738 [2024-07-14 04:02:23.386307] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.738 qpair failed and we were unable to recover it. 00:30:04.738 [2024-07-14 04:02:23.386492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.738 [2024-07-14 04:02:23.386672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.738 [2024-07-14 04:02:23.386696] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.738 qpair failed and we were unable to recover it. 00:30:04.738 [2024-07-14 04:02:23.386849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.738 [2024-07-14 04:02:23.387008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.738 [2024-07-14 04:02:23.387033] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.738 qpair failed and we were unable to recover it. 00:30:04.738 [2024-07-14 04:02:23.387187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.738 [2024-07-14 04:02:23.387367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.738 [2024-07-14 04:02:23.387392] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.738 qpair failed and we were unable to recover it. 00:30:04.738 [2024-07-14 04:02:23.387544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.738 [2024-07-14 04:02:23.387718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.738 [2024-07-14 04:02:23.387742] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.738 qpair failed and we were unable to recover it. 00:30:04.738 [2024-07-14 04:02:23.387927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.738 [2024-07-14 04:02:23.388098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.738 [2024-07-14 04:02:23.388123] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.738 qpair failed and we were unable to recover it. 00:30:04.738 [2024-07-14 04:02:23.388303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.738 [2024-07-14 04:02:23.388507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.738 [2024-07-14 04:02:23.388531] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.738 qpair failed and we were unable to recover it. 
00:30:04.738 [2024-07-14 04:02:23.388686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.738 [2024-07-14 04:02:23.388873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.738 [2024-07-14 04:02:23.388898] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.738 qpair failed and we were unable to recover it. 00:30:04.738 [2024-07-14 04:02:23.389083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.738 [2024-07-14 04:02:23.389264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.738 [2024-07-14 04:02:23.389288] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.738 qpair failed and we were unable to recover it. 00:30:04.738 [2024-07-14 04:02:23.389466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.738 [2024-07-14 04:02:23.389639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.738 [2024-07-14 04:02:23.389663] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.738 qpair failed and we were unable to recover it. 00:30:04.738 [2024-07-14 04:02:23.389843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.738 [2024-07-14 04:02:23.390024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.738 [2024-07-14 04:02:23.390049] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.738 qpair failed and we were unable to recover it. 00:30:04.738 [2024-07-14 04:02:23.390243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.738 [2024-07-14 04:02:23.390445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.738 [2024-07-14 04:02:23.390470] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.738 qpair failed and we were unable to recover it. 00:30:04.738 [2024-07-14 04:02:23.390663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.738 [2024-07-14 04:02:23.390852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.738 [2024-07-14 04:02:23.390890] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.738 qpair failed and we were unable to recover it. 00:30:04.738 [2024-07-14 04:02:23.391098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.738 [2024-07-14 04:02:23.391275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.738 [2024-07-14 04:02:23.391300] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.738 qpair failed and we were unable to recover it. 
00:30:04.738 [2024-07-14 04:02:23.391455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.738 [2024-07-14 04:02:23.391598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.738 [2024-07-14 04:02:23.391622] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.738 qpair failed and we were unable to recover it. 00:30:04.738 [2024-07-14 04:02:23.391802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.738 [2024-07-14 04:02:23.392006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.738 [2024-07-14 04:02:23.392032] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.738 qpair failed and we were unable to recover it. 00:30:04.738 [2024-07-14 04:02:23.392239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.738 [2024-07-14 04:02:23.392444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.738 [2024-07-14 04:02:23.392469] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.738 qpair failed and we were unable to recover it. 00:30:04.738 [2024-07-14 04:02:23.392641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.738 [2024-07-14 04:02:23.392818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.738 [2024-07-14 04:02:23.392842] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.738 qpair failed and we were unable to recover it. 00:30:04.738 [2024-07-14 04:02:23.393033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.738 [2024-07-14 04:02:23.393174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.738 [2024-07-14 04:02:23.393201] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.738 qpair failed and we were unable to recover it. 00:30:04.738 [2024-07-14 04:02:23.393375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.738 [2024-07-14 04:02:23.393519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.738 [2024-07-14 04:02:23.393543] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.738 qpair failed and we were unable to recover it. 00:30:04.738 [2024-07-14 04:02:23.393725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.738 [2024-07-14 04:02:23.393878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.738 [2024-07-14 04:02:23.393903] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.738 qpair failed and we were unable to recover it. 
00:30:04.738 [2024-07-14 04:02:23.394086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.738 [2024-07-14 04:02:23.394259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.738 [2024-07-14 04:02:23.394283] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.739 qpair failed and we were unable to recover it. 00:30:04.739 [2024-07-14 04:02:23.394444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.739 [2024-07-14 04:02:23.394587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.739 [2024-07-14 04:02:23.394611] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.739 qpair failed and we were unable to recover it. 00:30:04.739 [2024-07-14 04:02:23.394765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.739 [2024-07-14 04:02:23.394944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.739 [2024-07-14 04:02:23.394969] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.739 qpair failed and we were unable to recover it. 00:30:04.739 [2024-07-14 04:02:23.395125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.739 [2024-07-14 04:02:23.395296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.739 [2024-07-14 04:02:23.395320] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.739 qpair failed and we were unable to recover it. 00:30:04.739 [2024-07-14 04:02:23.395505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.739 [2024-07-14 04:02:23.395652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.739 [2024-07-14 04:02:23.395676] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.739 qpair failed and we were unable to recover it. 00:30:04.739 [2024-07-14 04:02:23.395824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.739 [2024-07-14 04:02:23.395976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.739 [2024-07-14 04:02:23.396008] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.739 qpair failed and we were unable to recover it. 00:30:04.739 [2024-07-14 04:02:23.396163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.739 [2024-07-14 04:02:23.396345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.739 [2024-07-14 04:02:23.396368] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.739 qpair failed and we were unable to recover it. 
00:30:04.739 [2024-07-14 04:02:23.396549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.739 [2024-07-14 04:02:23.396748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.739 [2024-07-14 04:02:23.396773] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.739 qpair failed and we were unable to recover it. 00:30:04.739 [2024-07-14 04:02:23.396934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.739 [2024-07-14 04:02:23.397138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.739 [2024-07-14 04:02:23.397162] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.739 qpair failed and we were unable to recover it. 00:30:04.739 [2024-07-14 04:02:23.397335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.739 [2024-07-14 04:02:23.397533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.739 [2024-07-14 04:02:23.397558] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.739 qpair failed and we were unable to recover it. 00:30:04.739 [2024-07-14 04:02:23.397701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.739 [2024-07-14 04:02:23.397843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.739 [2024-07-14 04:02:23.397873] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.739 qpair failed and we were unable to recover it. 00:30:04.739 [2024-07-14 04:02:23.398049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.739 [2024-07-14 04:02:23.398222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.739 [2024-07-14 04:02:23.398246] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.739 qpair failed and we were unable to recover it. 00:30:04.739 [2024-07-14 04:02:23.398396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.739 [2024-07-14 04:02:23.398544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.739 [2024-07-14 04:02:23.398568] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.739 qpair failed and we were unable to recover it. 00:30:04.739 [2024-07-14 04:02:23.398774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.739 [2024-07-14 04:02:23.398978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.739 [2024-07-14 04:02:23.399003] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.739 qpair failed and we were unable to recover it. 
00:30:04.739 [2024-07-14 04:02:23.399184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.739 [2024-07-14 04:02:23.399336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.739 [2024-07-14 04:02:23.399360] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.739 qpair failed and we were unable to recover it. 00:30:04.739 [2024-07-14 04:02:23.399503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.739 [2024-07-14 04:02:23.399678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.739 [2024-07-14 04:02:23.399702] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.739 qpair failed and we were unable to recover it. 00:30:04.739 [2024-07-14 04:02:23.399901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.739 [2024-07-14 04:02:23.400081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.739 [2024-07-14 04:02:23.400106] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.739 qpair failed and we were unable to recover it. 00:30:04.739 [2024-07-14 04:02:23.400262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.739 [2024-07-14 04:02:23.400417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.739 [2024-07-14 04:02:23.400443] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.739 qpair failed and we were unable to recover it. 00:30:04.739 [2024-07-14 04:02:23.400645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.739 [2024-07-14 04:02:23.400793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.739 [2024-07-14 04:02:23.400817] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.739 qpair failed and we were unable to recover it. 00:30:04.739 [2024-07-14 04:02:23.401027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.739 [2024-07-14 04:02:23.401202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.739 [2024-07-14 04:02:23.401227] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.739 qpair failed and we were unable to recover it. 00:30:04.739 [2024-07-14 04:02:23.401407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.739 [2024-07-14 04:02:23.401577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.739 [2024-07-14 04:02:23.401601] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.739 qpair failed and we were unable to recover it. 
00:30:04.739 [2024-07-14 04:02:23.401775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.739 [2024-07-14 04:02:23.401954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.739 [2024-07-14 04:02:23.401979] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.739 qpair failed and we were unable to recover it. 00:30:04.739 [2024-07-14 04:02:23.402162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.739 [2024-07-14 04:02:23.402343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.739 [2024-07-14 04:02:23.402367] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.739 qpair failed and we were unable to recover it. 00:30:04.739 [2024-07-14 04:02:23.402543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.739 [2024-07-14 04:02:23.402733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.739 [2024-07-14 04:02:23.402757] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.739 qpair failed and we were unable to recover it. 00:30:04.739 [2024-07-14 04:02:23.402940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.739 [2024-07-14 04:02:23.403091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.739 [2024-07-14 04:02:23.403115] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.739 qpair failed and we were unable to recover it. 00:30:04.739 [2024-07-14 04:02:23.403266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.739 [2024-07-14 04:02:23.403439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.739 [2024-07-14 04:02:23.403463] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.739 qpair failed and we were unable to recover it. 00:30:04.739 [2024-07-14 04:02:23.403645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.739 [2024-07-14 04:02:23.403820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.739 [2024-07-14 04:02:23.403844] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.739 qpair failed and we were unable to recover it. 00:30:04.739 [2024-07-14 04:02:23.404050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.739 [2024-07-14 04:02:23.404207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.739 [2024-07-14 04:02:23.404231] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.739 qpair failed and we were unable to recover it. 
00:30:04.739 [2024-07-14 04:02:23.404406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.739 [2024-07-14 04:02:23.404585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.739 [2024-07-14 04:02:23.404609] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.739 qpair failed and we were unable to recover it. 00:30:04.739 [2024-07-14 04:02:23.404787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.739 [2024-07-14 04:02:23.404990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.739 [2024-07-14 04:02:23.405015] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.739 qpair failed and we were unable to recover it. 00:30:04.739 [2024-07-14 04:02:23.405193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.739 [2024-07-14 04:02:23.405370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.739 [2024-07-14 04:02:23.405394] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.739 qpair failed and we were unable to recover it. 00:30:04.739 [2024-07-14 04:02:23.405542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.739 [2024-07-14 04:02:23.405744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.739 [2024-07-14 04:02:23.405769] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.739 qpair failed and we were unable to recover it. 00:30:04.739 [2024-07-14 04:02:23.405947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.739 [2024-07-14 04:02:23.406117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.739 [2024-07-14 04:02:23.406141] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.739 qpair failed and we were unable to recover it. 00:30:04.739 [2024-07-14 04:02:23.406318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.739 [2024-07-14 04:02:23.406517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.739 [2024-07-14 04:02:23.406542] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.739 qpair failed and we were unable to recover it. 00:30:04.739 [2024-07-14 04:02:23.406745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.739 [2024-07-14 04:02:23.406928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.739 [2024-07-14 04:02:23.406965] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.739 qpair failed and we were unable to recover it. 
00:30:04.739 [2024-07-14 04:02:23.407142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.739 [2024-07-14 04:02:23.407348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.739 [2024-07-14 04:02:23.407381] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.740 qpair failed and we were unable to recover it. 00:30:04.740 [2024-07-14 04:02:23.407566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.740 [2024-07-14 04:02:23.407734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.740 [2024-07-14 04:02:23.407759] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.740 qpair failed and we were unable to recover it. 00:30:04.740 [2024-07-14 04:02:23.407939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.740 [2024-07-14 04:02:23.408097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.740 [2024-07-14 04:02:23.408121] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.740 qpair failed and we were unable to recover it. 00:30:04.740 [2024-07-14 04:02:23.408299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.740 [2024-07-14 04:02:23.408478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.740 [2024-07-14 04:02:23.408502] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.740 qpair failed and we were unable to recover it. 00:30:04.740 [2024-07-14 04:02:23.408681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.740 [2024-07-14 04:02:23.408832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.740 [2024-07-14 04:02:23.408858] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.740 qpair failed and we were unable to recover it. 00:30:04.740 [2024-07-14 04:02:23.409069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.740 [2024-07-14 04:02:23.409224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.740 [2024-07-14 04:02:23.409248] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.740 qpair failed and we were unable to recover it. 00:30:04.740 [2024-07-14 04:02:23.409400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.740 [2024-07-14 04:02:23.409556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.740 [2024-07-14 04:02:23.409581] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.740 qpair failed and we were unable to recover it. 
00:30:04.740 [2024-07-14 04:02:23.409767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.740 [2024-07-14 04:02:23.409917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.740 [2024-07-14 04:02:23.409942] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.740 qpair failed and we were unable to recover it. 00:30:04.740 [2024-07-14 04:02:23.410125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.740 [2024-07-14 04:02:23.410274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.740 [2024-07-14 04:02:23.410298] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.740 qpair failed and we were unable to recover it. 00:30:04.740 [2024-07-14 04:02:23.410473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.740 [2024-07-14 04:02:23.410658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.740 [2024-07-14 04:02:23.410682] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.740 qpair failed and we were unable to recover it. 00:30:04.740 [2024-07-14 04:02:23.410871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.740 [2024-07-14 04:02:23.411050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.740 [2024-07-14 04:02:23.411074] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.740 qpair failed and we were unable to recover it. 00:30:04.740 [2024-07-14 04:02:23.411225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.740 [2024-07-14 04:02:23.411402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.740 [2024-07-14 04:02:23.411431] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.740 qpair failed and we were unable to recover it. 00:30:04.740 [2024-07-14 04:02:23.411611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.740 [2024-07-14 04:02:23.411766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.740 [2024-07-14 04:02:23.411790] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.740 qpair failed and we were unable to recover it. 00:30:04.740 [2024-07-14 04:02:23.412022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.740 [2024-07-14 04:02:23.412202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.740 [2024-07-14 04:02:23.412227] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.740 qpair failed and we were unable to recover it. 
00:30:04.740 [2024-07-14 04:02:23.412398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.740 [2024-07-14 04:02:23.412571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.740 [2024-07-14 04:02:23.412595] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.740 qpair failed and we were unable to recover it. 00:30:04.740 [2024-07-14 04:02:23.412798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.740 [2024-07-14 04:02:23.412954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.740 [2024-07-14 04:02:23.412979] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.740 qpair failed and we were unable to recover it. 00:30:04.740 [2024-07-14 04:02:23.413182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.740 [2024-07-14 04:02:23.413361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.740 [2024-07-14 04:02:23.413385] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.740 qpair failed and we were unable to recover it. 00:30:04.740 [2024-07-14 04:02:23.413554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.740 [2024-07-14 04:02:23.413757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.740 [2024-07-14 04:02:23.413781] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.740 qpair failed and we were unable to recover it. 00:30:04.740 [2024-07-14 04:02:23.413963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.740 [2024-07-14 04:02:23.414144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.740 [2024-07-14 04:02:23.414169] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.740 qpair failed and we were unable to recover it. 00:30:04.740 [2024-07-14 04:02:23.414370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.740 [2024-07-14 04:02:23.414550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.740 [2024-07-14 04:02:23.414574] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.740 qpair failed and we were unable to recover it. 00:30:04.740 [2024-07-14 04:02:23.414745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.740 [2024-07-14 04:02:23.414951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.740 [2024-07-14 04:02:23.414976] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.740 qpair failed and we were unable to recover it. 
00:30:04.740 [2024-07-14 04:02:23.415130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.740 [2024-07-14 04:02:23.415331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.740 [2024-07-14 04:02:23.415356] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.740 qpair failed and we were unable to recover it. 00:30:04.740 [2024-07-14 04:02:23.415514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.740 [2024-07-14 04:02:23.415661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.740 [2024-07-14 04:02:23.415684] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.740 qpair failed and we were unable to recover it. 00:30:04.740 [2024-07-14 04:02:23.415841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.740 [2024-07-14 04:02:23.416033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.740 [2024-07-14 04:02:23.416058] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.740 qpair failed and we were unable to recover it. 00:30:04.740 [2024-07-14 04:02:23.416247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.740 [2024-07-14 04:02:23.416427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.740 [2024-07-14 04:02:23.416451] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.740 qpair failed and we were unable to recover it. 00:30:04.740 [2024-07-14 04:02:23.416605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.740 [2024-07-14 04:02:23.416756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.740 [2024-07-14 04:02:23.416780] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.740 qpair failed and we were unable to recover it. 00:30:04.740 [2024-07-14 04:02:23.416930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.740 [2024-07-14 04:02:23.417113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.740 [2024-07-14 04:02:23.417137] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.740 qpair failed and we were unable to recover it. 00:30:04.740 [2024-07-14 04:02:23.417298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.740 [2024-07-14 04:02:23.417498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.740 [2024-07-14 04:02:23.417522] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.740 qpair failed and we were unable to recover it. 
00:30:04.740 [2024-07-14 04:02:23.417702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.740 [2024-07-14 04:02:23.417903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.740 [2024-07-14 04:02:23.417928] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.740 qpair failed and we were unable to recover it. 00:30:04.740 [2024-07-14 04:02:23.418110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.740 [2024-07-14 04:02:23.418271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.740 [2024-07-14 04:02:23.418296] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.740 qpair failed and we were unable to recover it. 00:30:04.740 [2024-07-14 04:02:23.418479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.740 [2024-07-14 04:02:23.418684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.740 [2024-07-14 04:02:23.418709] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.740 qpair failed and we were unable to recover it. 00:30:04.740 [2024-07-14 04:02:23.418873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.740 [2024-07-14 04:02:23.419010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.741 [2024-07-14 04:02:23.419034] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.741 qpair failed and we were unable to recover it. 00:30:04.741 [2024-07-14 04:02:23.419222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.741 [2024-07-14 04:02:23.419368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.741 [2024-07-14 04:02:23.419392] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.741 qpair failed and we were unable to recover it. 00:30:04.741 [2024-07-14 04:02:23.419572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.741 [2024-07-14 04:02:23.419771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.741 [2024-07-14 04:02:23.419795] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.741 qpair failed and we were unable to recover it. 00:30:04.741 [2024-07-14 04:02:23.419974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.741 [2024-07-14 04:02:23.420176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.741 [2024-07-14 04:02:23.420200] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.741 qpair failed and we were unable to recover it. 
00:30:04.741 [2024-07-14 04:02:23.420375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.741 [2024-07-14 04:02:23.420554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.741 [2024-07-14 04:02:23.420579] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.741 qpair failed and we were unable to recover it. 00:30:04.741 [2024-07-14 04:02:23.420754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.741 [2024-07-14 04:02:23.420931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.741 [2024-07-14 04:02:23.420956] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.741 qpair failed and we were unable to recover it. 00:30:04.741 [2024-07-14 04:02:23.421156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.741 [2024-07-14 04:02:23.421331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.741 [2024-07-14 04:02:23.421355] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.741 qpair failed and we were unable to recover it. 00:30:04.741 [2024-07-14 04:02:23.421530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.741 [2024-07-14 04:02:23.421701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.741 [2024-07-14 04:02:23.421725] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.741 qpair failed and we were unable to recover it. 00:30:04.741 [2024-07-14 04:02:23.421904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.741 [2024-07-14 04:02:23.422096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.741 [2024-07-14 04:02:23.422120] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.741 qpair failed and we were unable to recover it. 00:30:04.741 [2024-07-14 04:02:23.422303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.741 [2024-07-14 04:02:23.422501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.741 [2024-07-14 04:02:23.422525] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.741 qpair failed and we were unable to recover it. 00:30:04.741 [2024-07-14 04:02:23.422683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.741 [2024-07-14 04:02:23.422863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.741 [2024-07-14 04:02:23.422895] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.741 qpair failed and we were unable to recover it. 
00:30:04.741 [2024-07-14 04:02:23.423075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.741 [2024-07-14 04:02:23.423226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.741 [2024-07-14 04:02:23.423250] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.741 qpair failed and we were unable to recover it. 00:30:04.741 [2024-07-14 04:02:23.423430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.741 [2024-07-14 04:02:23.423607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.741 [2024-07-14 04:02:23.423631] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.741 qpair failed and we were unable to recover it. 00:30:04.741 [2024-07-14 04:02:23.423836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.741 [2024-07-14 04:02:23.424015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.741 [2024-07-14 04:02:23.424040] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.741 qpair failed and we were unable to recover it. 00:30:04.741 [2024-07-14 04:02:23.424218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.741 [2024-07-14 04:02:23.424395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.741 [2024-07-14 04:02:23.424420] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.741 qpair failed and we were unable to recover it. 00:30:04.741 [2024-07-14 04:02:23.424620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.741 [2024-07-14 04:02:23.424765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.741 [2024-07-14 04:02:23.424790] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.741 qpair failed and we were unable to recover it. 00:30:04.741 [2024-07-14 04:02:23.424993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.741 [2024-07-14 04:02:23.425166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.741 [2024-07-14 04:02:23.425191] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.741 qpair failed and we were unable to recover it. 00:30:04.741 [2024-07-14 04:02:23.425370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.741 [2024-07-14 04:02:23.425541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.741 [2024-07-14 04:02:23.425565] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.741 qpair failed and we were unable to recover it. 
00:30:04.741 [2024-07-14 04:02:23.425769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.741 [2024-07-14 04:02:23.425920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.741 [2024-07-14 04:02:23.425945] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.741 qpair failed and we were unable to recover it. 00:30:04.741 [2024-07-14 04:02:23.426097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.741 [2024-07-14 04:02:23.426281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.741 [2024-07-14 04:02:23.426306] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.741 qpair failed and we were unable to recover it. 00:30:04.741 [2024-07-14 04:02:23.426484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.741 [2024-07-14 04:02:23.426684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.741 [2024-07-14 04:02:23.426709] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.741 qpair failed and we were unable to recover it. 00:30:04.741 [2024-07-14 04:02:23.426887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.741 [2024-07-14 04:02:23.427071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.741 [2024-07-14 04:02:23.427096] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.741 qpair failed and we were unable to recover it. 00:30:04.741 [2024-07-14 04:02:23.427270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.741 [2024-07-14 04:02:23.427471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.741 [2024-07-14 04:02:23.427495] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.741 qpair failed and we were unable to recover it. 00:30:04.741 [2024-07-14 04:02:23.427671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.741 [2024-07-14 04:02:23.427854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.741 [2024-07-14 04:02:23.427890] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.741 qpair failed and we were unable to recover it. 00:30:04.741 [2024-07-14 04:02:23.428060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.741 [2024-07-14 04:02:23.428238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.741 [2024-07-14 04:02:23.428262] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.741 qpair failed and we were unable to recover it. 
00:30:04.741 [2024-07-14 04:02:23.428441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.741 [2024-07-14 04:02:23.428643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.742 [2024-07-14 04:02:23.428667] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.742 qpair failed and we were unable to recover it. 00:30:04.742 [2024-07-14 04:02:23.428843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.742 [2024-07-14 04:02:23.429037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.742 [2024-07-14 04:02:23.429062] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.742 qpair failed and we were unable to recover it. 00:30:04.742 [2024-07-14 04:02:23.429237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.742 [2024-07-14 04:02:23.429439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.742 [2024-07-14 04:02:23.429463] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.742 qpair failed and we were unable to recover it. 00:30:04.742 [2024-07-14 04:02:23.429643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.742 [2024-07-14 04:02:23.429820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.742 [2024-07-14 04:02:23.429844] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.742 qpair failed and we were unable to recover it. 00:30:04.742 [2024-07-14 04:02:23.430002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.742 [2024-07-14 04:02:23.430183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.742 [2024-07-14 04:02:23.430207] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.742 qpair failed and we were unable to recover it. 00:30:04.742 [2024-07-14 04:02:23.430384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.742 [2024-07-14 04:02:23.430567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.742 [2024-07-14 04:02:23.430591] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.742 qpair failed and we were unable to recover it. 00:30:04.742 [2024-07-14 04:02:23.430771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.742 [2024-07-14 04:02:23.430962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.742 [2024-07-14 04:02:23.430992] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.742 qpair failed and we were unable to recover it. 
00:30:04.742 [2024-07-14 04:02:23.431168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.742 [2024-07-14 04:02:23.431308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.742 [2024-07-14 04:02:23.431332] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.742 qpair failed and we were unable to recover it. 00:30:04.742 [2024-07-14 04:02:23.431520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.742 [2024-07-14 04:02:23.431695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.742 [2024-07-14 04:02:23.431719] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.742 qpair failed and we were unable to recover it. 00:30:04.742 [2024-07-14 04:02:23.431899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.742 [2024-07-14 04:02:23.432048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.742 [2024-07-14 04:02:23.432072] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.742 qpair failed and we were unable to recover it. 00:30:04.742 [2024-07-14 04:02:23.432248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.742 [2024-07-14 04:02:23.432443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.742 [2024-07-14 04:02:23.432466] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.742 qpair failed and we were unable to recover it. 00:30:04.742 [2024-07-14 04:02:23.432647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.742 [2024-07-14 04:02:23.432796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.742 [2024-07-14 04:02:23.432820] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.742 qpair failed and we were unable to recover it. 00:30:04.742 [2024-07-14 04:02:23.433005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.742 [2024-07-14 04:02:23.433173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.742 [2024-07-14 04:02:23.433198] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.742 qpair failed and we were unable to recover it. 00:30:04.742 [2024-07-14 04:02:23.433380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.742 [2024-07-14 04:02:23.433535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.742 [2024-07-14 04:02:23.433559] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.742 qpair failed and we were unable to recover it. 
00:30:04.742 [2024-07-14 04:02:23.433761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.742 [2024-07-14 04:02:23.433909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.742 [2024-07-14 04:02:23.433934] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.742 qpair failed and we were unable to recover it. 00:30:04.742 [2024-07-14 04:02:23.434150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.742 [2024-07-14 04:02:23.434323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.742 [2024-07-14 04:02:23.434347] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.742 qpair failed and we were unable to recover it. 00:30:04.742 [2024-07-14 04:02:23.434527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.742 [2024-07-14 04:02:23.434737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.742 [2024-07-14 04:02:23.434761] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.742 qpair failed and we were unable to recover it. 00:30:04.742 [2024-07-14 04:02:23.434943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.742 [2024-07-14 04:02:23.435091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.742 [2024-07-14 04:02:23.435115] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.742 qpair failed and we were unable to recover it. 00:30:04.742 [2024-07-14 04:02:23.435260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.742 [2024-07-14 04:02:23.435462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.742 [2024-07-14 04:02:23.435486] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.742 qpair failed and we were unable to recover it. 00:30:04.742 [2024-07-14 04:02:23.435663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.742 [2024-07-14 04:02:23.435827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.742 [2024-07-14 04:02:23.435851] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.742 qpair failed and we were unable to recover it. 00:30:04.742 [2024-07-14 04:02:23.436036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.742 [2024-07-14 04:02:23.436209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.742 [2024-07-14 04:02:23.436233] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.742 qpair failed and we were unable to recover it. 
00:30:04.742 [2024-07-14 04:02:23.436391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.742 [2024-07-14 04:02:23.436528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.742 [2024-07-14 04:02:23.436552] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.742 qpair failed and we were unable to recover it. 00:30:04.742 [2024-07-14 04:02:23.436729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.742 [2024-07-14 04:02:23.436903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.742 [2024-07-14 04:02:23.436928] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.742 qpair failed and we were unable to recover it. 00:30:04.742 [2024-07-14 04:02:23.437113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.742 [2024-07-14 04:02:23.437266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.742 [2024-07-14 04:02:23.437290] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.742 qpair failed and we were unable to recover it. 00:30:04.742 [2024-07-14 04:02:23.437497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.742 [2024-07-14 04:02:23.437699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.742 [2024-07-14 04:02:23.437724] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.742 qpair failed and we were unable to recover it. 00:30:04.742 [2024-07-14 04:02:23.437904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.742 [2024-07-14 04:02:23.438084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.742 [2024-07-14 04:02:23.438108] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.742 qpair failed and we were unable to recover it. 00:30:04.742 [2024-07-14 04:02:23.438285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.742 [2024-07-14 04:02:23.438438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.742 [2024-07-14 04:02:23.438462] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.742 qpair failed and we were unable to recover it. 00:30:04.742 [2024-07-14 04:02:23.438621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.742 [2024-07-14 04:02:23.438799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.742 [2024-07-14 04:02:23.438823] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.742 qpair failed and we were unable to recover it. 
00:30:04.742 [2024-07-14 04:02:23.438978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.742 [2024-07-14 04:02:23.439132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.742 [2024-07-14 04:02:23.439156] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.742 qpair failed and we were unable to recover it. 00:30:04.742 [2024-07-14 04:02:23.439336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.742 [2024-07-14 04:02:23.439517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.742 [2024-07-14 04:02:23.439541] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.742 qpair failed and we were unable to recover it. 00:30:04.742 [2024-07-14 04:02:23.439727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.742 [2024-07-14 04:02:23.439882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.742 [2024-07-14 04:02:23.439908] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.742 qpair failed and we were unable to recover it. 00:30:04.742 [2024-07-14 04:02:23.440099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.743 [2024-07-14 04:02:23.440278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.743 [2024-07-14 04:02:23.440303] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.743 qpair failed and we were unable to recover it. 00:30:04.743 [2024-07-14 04:02:23.440510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.743 [2024-07-14 04:02:23.440691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.743 [2024-07-14 04:02:23.440715] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.743 qpair failed and we were unable to recover it. 00:30:04.743 [2024-07-14 04:02:23.440891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.743 [2024-07-14 04:02:23.441069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.743 [2024-07-14 04:02:23.441093] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.743 qpair failed and we were unable to recover it. 00:30:04.743 [2024-07-14 04:02:23.441299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.743 [2024-07-14 04:02:23.441476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.743 [2024-07-14 04:02:23.441500] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.743 qpair failed and we were unable to recover it. 
00:30:04.743 [2024-07-14 04:02:23.441670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.743 [2024-07-14 04:02:23.441825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.743 [2024-07-14 04:02:23.441850] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.743 qpair failed and we were unable to recover it. 00:30:04.743 [2024-07-14 04:02:23.442037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.743 [2024-07-14 04:02:23.442213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.743 [2024-07-14 04:02:23.442237] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.743 qpair failed and we were unable to recover it. 00:30:04.743 [2024-07-14 04:02:23.442450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.743 [2024-07-14 04:02:23.442633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.743 [2024-07-14 04:02:23.442657] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.743 qpair failed and we were unable to recover it. 00:30:04.743 [2024-07-14 04:02:23.442812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.743 [2024-07-14 04:02:23.442961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.743 [2024-07-14 04:02:23.442986] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.743 qpair failed and we were unable to recover it. 00:30:04.743 [2024-07-14 04:02:23.443130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.743 [2024-07-14 04:02:23.443330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.743 [2024-07-14 04:02:23.443354] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.743 qpair failed and we were unable to recover it. 00:30:04.743 [2024-07-14 04:02:23.443534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.743 [2024-07-14 04:02:23.443715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.743 [2024-07-14 04:02:23.443739] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.743 qpair failed and we were unable to recover it. 00:30:04.743 [2024-07-14 04:02:23.443892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.743 [2024-07-14 04:02:23.444072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.743 [2024-07-14 04:02:23.444097] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.743 qpair failed and we were unable to recover it. 
00:30:04.743 [2024-07-14 04:02:23.444275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.743 [2024-07-14 04:02:23.444455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.743 [2024-07-14 04:02:23.444480] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.743 qpair failed and we were unable to recover it. 00:30:04.743 [2024-07-14 04:02:23.444656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.743 [2024-07-14 04:02:23.444829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.743 [2024-07-14 04:02:23.444853] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.743 qpair failed and we were unable to recover it. 00:30:04.743 [2024-07-14 04:02:23.445063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.743 [2024-07-14 04:02:23.445238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.743 [2024-07-14 04:02:23.445263] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.743 qpair failed and we were unable to recover it. 00:30:04.743 [2024-07-14 04:02:23.445442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.743 [2024-07-14 04:02:23.445618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.743 [2024-07-14 04:02:23.445642] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.743 qpair failed and we were unable to recover it. 00:30:04.743 [2024-07-14 04:02:23.445841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.743 [2024-07-14 04:02:23.446032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.743 [2024-07-14 04:02:23.446057] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.743 qpair failed and we were unable to recover it. 00:30:04.743 [2024-07-14 04:02:23.446235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.743 [2024-07-14 04:02:23.446416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.743 [2024-07-14 04:02:23.446440] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.743 qpair failed and we were unable to recover it. 00:30:04.743 [2024-07-14 04:02:23.446619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.743 [2024-07-14 04:02:23.446768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.743 [2024-07-14 04:02:23.446792] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.743 qpair failed and we were unable to recover it. 
00:30:04.743 [2024-07-14 04:02:23.446996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.743 [2024-07-14 04:02:23.447172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.743 [2024-07-14 04:02:23.447197] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.743 qpair failed and we were unable to recover it. 00:30:04.743 [2024-07-14 04:02:23.447397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.743 [2024-07-14 04:02:23.447597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.743 [2024-07-14 04:02:23.447621] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.743 qpair failed and we were unable to recover it. 00:30:04.743 [2024-07-14 04:02:23.447764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.743 [2024-07-14 04:02:23.447944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.743 [2024-07-14 04:02:23.447969] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.743 qpair failed and we were unable to recover it. 00:30:04.743 [2024-07-14 04:02:23.448174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.743 [2024-07-14 04:02:23.448331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.743 [2024-07-14 04:02:23.448358] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.743 qpair failed and we were unable to recover it. 00:30:04.743 [2024-07-14 04:02:23.448548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.743 [2024-07-14 04:02:23.448748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.743 [2024-07-14 04:02:23.448772] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.743 qpair failed and we were unable to recover it. 00:30:04.743 [2024-07-14 04:02:23.448929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.743 [2024-07-14 04:02:23.449114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.743 [2024-07-14 04:02:23.449138] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.743 qpair failed and we were unable to recover it. 00:30:04.743 [2024-07-14 04:02:23.449295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.743 [2024-07-14 04:02:23.449472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.743 [2024-07-14 04:02:23.449496] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.743 qpair failed and we were unable to recover it. 
00:30:04.743 [2024-07-14 04:02:23.449700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.743 [2024-07-14 04:02:23.449880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.743 [2024-07-14 04:02:23.449905] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.743 qpair failed and we were unable to recover it. 00:30:04.743 [2024-07-14 04:02:23.450061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.743 [2024-07-14 04:02:23.450234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.743 [2024-07-14 04:02:23.450263] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.743 qpair failed and we were unable to recover it. 00:30:04.743 [2024-07-14 04:02:23.450466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.743 [2024-07-14 04:02:23.450642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.743 [2024-07-14 04:02:23.450667] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.743 qpair failed and we were unable to recover it. 00:30:04.743 [2024-07-14 04:02:23.450845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.743 [2024-07-14 04:02:23.451034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.743 [2024-07-14 04:02:23.451059] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.743 qpair failed and we were unable to recover it. 00:30:04.743 [2024-07-14 04:02:23.451250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.743 [2024-07-14 04:02:23.451422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.743 [2024-07-14 04:02:23.451446] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.743 qpair failed and we were unable to recover it. 00:30:04.743 [2024-07-14 04:02:23.451626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.743 [2024-07-14 04:02:23.451783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.743 [2024-07-14 04:02:23.451807] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.743 qpair failed and we were unable to recover it. 00:30:04.744 [2024-07-14 04:02:23.451990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.744 [2024-07-14 04:02:23.452193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.744 [2024-07-14 04:02:23.452218] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.744 qpair failed and we were unable to recover it. 
00:30:04.744 [2024-07-14 04:02:23.452421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.744 [2024-07-14 04:02:23.452630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.744 [2024-07-14 04:02:23.452654] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.744 qpair failed and we were unable to recover it. 00:30:04.744 [2024-07-14 04:02:23.452860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.744 [2024-07-14 04:02:23.453019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.744 [2024-07-14 04:02:23.453043] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.744 qpair failed and we were unable to recover it. 00:30:04.744 [2024-07-14 04:02:23.453217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.744 [2024-07-14 04:02:23.453391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.744 [2024-07-14 04:02:23.453415] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.744 qpair failed and we were unable to recover it. 00:30:04.744 [2024-07-14 04:02:23.453589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.744 [2024-07-14 04:02:23.453767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.744 [2024-07-14 04:02:23.453793] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.744 qpair failed and we were unable to recover it. 00:30:04.744 [2024-07-14 04:02:23.453997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.744 [2024-07-14 04:02:23.454178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.744 [2024-07-14 04:02:23.454203] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.744 qpair failed and we were unable to recover it. 00:30:04.744 [2024-07-14 04:02:23.454393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.744 [2024-07-14 04:02:23.454549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.744 [2024-07-14 04:02:23.454574] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.744 qpair failed and we were unable to recover it. 00:30:04.744 [2024-07-14 04:02:23.454755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.744 [2024-07-14 04:02:23.454914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.744 [2024-07-14 04:02:23.454939] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.744 qpair failed and we were unable to recover it. 
00:30:04.744 [2024-07-14 04:02:23.455123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.744 [2024-07-14 04:02:23.455271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.744 [2024-07-14 04:02:23.455296] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.744 qpair failed and we were unable to recover it. 00:30:04.744 [2024-07-14 04:02:23.455441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.744 [2024-07-14 04:02:23.455613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.744 [2024-07-14 04:02:23.455637] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.744 qpair failed and we were unable to recover it. 00:30:04.744 [2024-07-14 04:02:23.455816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.744 [2024-07-14 04:02:23.455991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.744 [2024-07-14 04:02:23.456018] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.744 qpair failed and we were unable to recover it. 00:30:04.744 [2024-07-14 04:02:23.456179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.744 [2024-07-14 04:02:23.456383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.744 [2024-07-14 04:02:23.456407] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.744 qpair failed and we were unable to recover it. 00:30:04.744 [2024-07-14 04:02:23.456560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.744 [2024-07-14 04:02:23.456713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.744 [2024-07-14 04:02:23.456736] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.744 qpair failed and we were unable to recover it. 00:30:04.744 [2024-07-14 04:02:23.456915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.744 [2024-07-14 04:02:23.457101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.744 [2024-07-14 04:02:23.457125] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.744 qpair failed and we were unable to recover it. 00:30:04.744 [2024-07-14 04:02:23.457279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.744 [2024-07-14 04:02:23.457481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.744 [2024-07-14 04:02:23.457506] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.744 qpair failed and we were unable to recover it. 
00:30:04.744 [2024-07-14 04:02:23.457683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.744 [2024-07-14 04:02:23.457823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.744 [2024-07-14 04:02:23.457848] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.744 qpair failed and we were unable to recover it. 00:30:04.744 [2024-07-14 04:02:23.458041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.744 [2024-07-14 04:02:23.458194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.744 [2024-07-14 04:02:23.458219] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.744 qpair failed and we were unable to recover it. 00:30:04.744 [2024-07-14 04:02:23.458419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.744 [2024-07-14 04:02:23.458618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.744 [2024-07-14 04:02:23.458643] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.744 qpair failed and we were unable to recover it. 00:30:04.744 [2024-07-14 04:02:23.458800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.744 [2024-07-14 04:02:23.459002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.744 [2024-07-14 04:02:23.459028] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.744 qpair failed and we were unable to recover it. 00:30:04.744 [2024-07-14 04:02:23.459178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.744 [2024-07-14 04:02:23.459360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.744 [2024-07-14 04:02:23.459384] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.744 qpair failed and we were unable to recover it. 00:30:04.744 [2024-07-14 04:02:23.459562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.744 [2024-07-14 04:02:23.459767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.744 [2024-07-14 04:02:23.459792] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.744 qpair failed and we were unable to recover it. 00:30:04.744 [2024-07-14 04:02:23.459974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.744 [2024-07-14 04:02:23.460134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.744 [2024-07-14 04:02:23.460158] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.744 qpair failed and we were unable to recover it. 
00:30:04.744 [2024-07-14 04:02:23.460360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.744 [2024-07-14 04:02:23.460571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.744 [2024-07-14 04:02:23.460596] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.744 qpair failed and we were unable to recover it. 00:30:04.744 [2024-07-14 04:02:23.460771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.744 [2024-07-14 04:02:23.460944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.744 [2024-07-14 04:02:23.460969] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.744 qpair failed and we were unable to recover it. 00:30:04.744 [2024-07-14 04:02:23.461170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.744 [2024-07-14 04:02:23.461344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.744 [2024-07-14 04:02:23.461368] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.744 qpair failed and we were unable to recover it. 00:30:04.744 [2024-07-14 04:02:23.461542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.744 [2024-07-14 04:02:23.461746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.744 [2024-07-14 04:02:23.461770] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.744 qpair failed and we were unable to recover it. 00:30:04.744 [2024-07-14 04:02:23.461950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.744 [2024-07-14 04:02:23.462130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.744 [2024-07-14 04:02:23.462155] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.744 qpair failed and we were unable to recover it. 00:30:04.744 [2024-07-14 04:02:23.462313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.744 [2024-07-14 04:02:23.462494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.744 [2024-07-14 04:02:23.462519] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.744 qpair failed and we were unable to recover it. 00:30:04.744 [2024-07-14 04:02:23.462677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.744 [2024-07-14 04:02:23.462881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.744 [2024-07-14 04:02:23.462906] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.744 qpair failed and we were unable to recover it. 
00:30:04.744 [2024-07-14 04:02:23.463058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.744 [2024-07-14 04:02:23.463261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.744 [2024-07-14 04:02:23.463285] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.744 qpair failed and we were unable to recover it. 00:30:04.744 [2024-07-14 04:02:23.463466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.744 [2024-07-14 04:02:23.463643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.745 [2024-07-14 04:02:23.463667] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.745 qpair failed and we were unable to recover it. 00:30:04.745 [2024-07-14 04:02:23.463878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.745 [2024-07-14 04:02:23.464055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.745 [2024-07-14 04:02:23.464079] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.745 qpair failed and we were unable to recover it. 00:30:04.745 [2024-07-14 04:02:23.464263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.745 [2024-07-14 04:02:23.464462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.745 [2024-07-14 04:02:23.464486] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.745 qpair failed and we were unable to recover it. 00:30:04.745 [2024-07-14 04:02:23.464664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.745 [2024-07-14 04:02:23.464838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.745 [2024-07-14 04:02:23.464862] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.745 qpair failed and we were unable to recover it. 00:30:04.745 [2024-07-14 04:02:23.465050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.745 [2024-07-14 04:02:23.465227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.745 [2024-07-14 04:02:23.465251] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.745 qpair failed and we were unable to recover it. 00:30:04.745 [2024-07-14 04:02:23.465426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.745 [2024-07-14 04:02:23.465580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.745 [2024-07-14 04:02:23.465605] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.745 qpair failed and we were unable to recover it. 
00:30:04.745 [2024-07-14 04:02:23.465753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.745 [2024-07-14 04:02:23.465932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.745 [2024-07-14 04:02:23.465962] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.745 qpair failed and we were unable to recover it. 00:30:04.745 [2024-07-14 04:02:23.466137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.745 [2024-07-14 04:02:23.466311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.745 [2024-07-14 04:02:23.466335] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.745 qpair failed and we were unable to recover it. 00:30:04.745 [2024-07-14 04:02:23.466513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.745 [2024-07-14 04:02:23.466725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.745 [2024-07-14 04:02:23.466749] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.745 qpair failed and we were unable to recover it. 00:30:04.745 [2024-07-14 04:02:23.466951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.745 [2024-07-14 04:02:23.467154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.745 [2024-07-14 04:02:23.467179] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.745 qpair failed and we were unable to recover it. 00:30:04.745 [2024-07-14 04:02:23.467359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.745 [2024-07-14 04:02:23.467537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.745 [2024-07-14 04:02:23.467561] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.745 qpair failed and we were unable to recover it. 00:30:04.745 [2024-07-14 04:02:23.467710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.745 [2024-07-14 04:02:23.467914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.745 [2024-07-14 04:02:23.467939] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.745 qpair failed and we were unable to recover it. 00:30:04.745 [2024-07-14 04:02:23.468120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.745 [2024-07-14 04:02:23.468273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.745 [2024-07-14 04:02:23.468297] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.745 qpair failed and we were unable to recover it. 
00:30:04.745 [2024-07-14 04:02:23.468472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.745 [2024-07-14 04:02:23.468621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.745 [2024-07-14 04:02:23.468645] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.745 qpair failed and we were unable to recover it. 00:30:04.745 [2024-07-14 04:02:23.468819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.745 [2024-07-14 04:02:23.469001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.745 [2024-07-14 04:02:23.469026] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.745 qpair failed and we were unable to recover it. 00:30:04.745 [2024-07-14 04:02:23.469198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.745 [2024-07-14 04:02:23.469371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.745 [2024-07-14 04:02:23.469395] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.745 qpair failed and we were unable to recover it. 00:30:04.745 [2024-07-14 04:02:23.469584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.745 [2024-07-14 04:02:23.469762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.745 [2024-07-14 04:02:23.469786] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.745 qpair failed and we were unable to recover it. 00:30:04.745 [2024-07-14 04:02:23.469997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.745 [2024-07-14 04:02:23.470177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.745 [2024-07-14 04:02:23.470202] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.745 qpair failed and we were unable to recover it. 00:30:04.745 [2024-07-14 04:02:23.470411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.745 [2024-07-14 04:02:23.470584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.745 [2024-07-14 04:02:23.470608] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.745 qpair failed and we were unable to recover it. 00:30:04.745 [2024-07-14 04:02:23.470815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.745 [2024-07-14 04:02:23.470984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.745 [2024-07-14 04:02:23.471009] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.745 qpair failed and we were unable to recover it. 
00:30:04.745 [2024-07-14 04:02:23.471187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.745 [2024-07-14 04:02:23.471386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.745 [2024-07-14 04:02:23.471410] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.745 qpair failed and we were unable to recover it. 00:30:04.745 [2024-07-14 04:02:23.471584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.745 [2024-07-14 04:02:23.471761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.745 [2024-07-14 04:02:23.471785] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.745 qpair failed and we were unable to recover it. 00:30:04.745 [2024-07-14 04:02:23.471954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.745 [2024-07-14 04:02:23.472159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.745 [2024-07-14 04:02:23.472183] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.745 qpair failed and we were unable to recover it. 00:30:04.745 [2024-07-14 04:02:23.472359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.745 [2024-07-14 04:02:23.472564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.745 [2024-07-14 04:02:23.472588] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.745 qpair failed and we were unable to recover it. 00:30:04.745 [2024-07-14 04:02:23.472777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.745 [2024-07-14 04:02:23.472928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.745 [2024-07-14 04:02:23.472953] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.745 qpair failed and we were unable to recover it. 00:30:04.745 [2024-07-14 04:02:23.473124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.745 [2024-07-14 04:02:23.473304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.745 [2024-07-14 04:02:23.473328] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.745 qpair failed and we were unable to recover it. 00:30:04.745 [2024-07-14 04:02:23.473528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.745 [2024-07-14 04:02:23.473683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.745 [2024-07-14 04:02:23.473707] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.745 qpair failed and we were unable to recover it. 
00:30:04.745 [2024-07-14 04:02:23.473913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.745 [2024-07-14 04:02:23.474085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.745 [2024-07-14 04:02:23.474109] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.745 qpair failed and we were unable to recover it. 00:30:04.745 [2024-07-14 04:02:23.474288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.745 [2024-07-14 04:02:23.474467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.745 [2024-07-14 04:02:23.474491] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.745 qpair failed and we were unable to recover it. 00:30:04.745 [2024-07-14 04:02:23.474668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.745 [2024-07-14 04:02:23.474847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.745 [2024-07-14 04:02:23.474877] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.745 qpair failed and we were unable to recover it. 00:30:04.745 [2024-07-14 04:02:23.475059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.745 [2024-07-14 04:02:23.475236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.745 [2024-07-14 04:02:23.475261] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.745 qpair failed and we were unable to recover it. 00:30:04.746 [2024-07-14 04:02:23.475465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.746 [2024-07-14 04:02:23.475638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.746 [2024-07-14 04:02:23.475663] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.746 qpair failed and we were unable to recover it. 00:30:04.746 [2024-07-14 04:02:23.475822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.746 [2024-07-14 04:02:23.476004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.746 [2024-07-14 04:02:23.476029] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.746 qpair failed and we were unable to recover it. 00:30:04.746 [2024-07-14 04:02:23.476208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.746 [2024-07-14 04:02:23.476378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.746 [2024-07-14 04:02:23.476402] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.746 qpair failed and we were unable to recover it. 
00:30:04.746 [2024-07-14 04:02:23.476558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.746 [2024-07-14 04:02:23.476731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.746 [2024-07-14 04:02:23.476755] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.746 qpair failed and we were unable to recover it. 00:30:04.746 [2024-07-14 04:02:23.476928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.746 [2024-07-14 04:02:23.477099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.746 [2024-07-14 04:02:23.477123] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.746 qpair failed and we were unable to recover it. 00:30:04.746 [2024-07-14 04:02:23.477304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.746 [2024-07-14 04:02:23.477511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.746 [2024-07-14 04:02:23.477535] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.746 qpair failed and we were unable to recover it. 00:30:04.746 [2024-07-14 04:02:23.477694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.746 [2024-07-14 04:02:23.477873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.746 [2024-07-14 04:02:23.477898] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.746 qpair failed and we were unable to recover it. 00:30:04.746 [2024-07-14 04:02:23.478105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.746 [2024-07-14 04:02:23.478279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.746 [2024-07-14 04:02:23.478303] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.746 qpair failed and we were unable to recover it. 00:30:04.746 [2024-07-14 04:02:23.478464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.746 [2024-07-14 04:02:23.478613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.746 [2024-07-14 04:02:23.478637] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.746 qpair failed and we were unable to recover it. 00:30:04.746 [2024-07-14 04:02:23.478839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.746 [2024-07-14 04:02:23.479037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.746 [2024-07-14 04:02:23.479062] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.746 qpair failed and we were unable to recover it. 
00:30:04.746 [2024-07-14 04:02:23.479242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.746 [2024-07-14 04:02:23.479442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.746 [2024-07-14 04:02:23.479466] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.746 qpair failed and we were unable to recover it. 00:30:04.746 [2024-07-14 04:02:23.479649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.746 [2024-07-14 04:02:23.479800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.746 [2024-07-14 04:02:23.479824] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.746 qpair failed and we were unable to recover it. 00:30:04.746 [2024-07-14 04:02:23.479989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.746 [2024-07-14 04:02:23.480193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.746 [2024-07-14 04:02:23.480218] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.746 qpair failed and we were unable to recover it. 00:30:04.746 [2024-07-14 04:02:23.480399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.746 [2024-07-14 04:02:23.480547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.746 [2024-07-14 04:02:23.480571] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.746 qpair failed and we were unable to recover it. 00:30:04.746 [2024-07-14 04:02:23.480773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.746 [2024-07-14 04:02:23.480978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.746 [2024-07-14 04:02:23.481003] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.746 qpair failed and we were unable to recover it. 00:30:04.746 [2024-07-14 04:02:23.481182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.746 [2024-07-14 04:02:23.481350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.746 [2024-07-14 04:02:23.481374] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.746 qpair failed and we were unable to recover it. 00:30:04.746 [2024-07-14 04:02:23.481526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.746 [2024-07-14 04:02:23.481706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.746 [2024-07-14 04:02:23.481731] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.746 qpair failed and we were unable to recover it. 
00:30:04.746 [2024-07-14 04:02:23.481910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.746 [2024-07-14 04:02:23.482120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.746 [2024-07-14 04:02:23.482144] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.746 qpair failed and we were unable to recover it. 00:30:04.746 [2024-07-14 04:02:23.482317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.746 [2024-07-14 04:02:23.482494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.746 [2024-07-14 04:02:23.482518] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.746 qpair failed and we were unable to recover it. 00:30:04.746 [2024-07-14 04:02:23.482690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.746 [2024-07-14 04:02:23.482875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.746 [2024-07-14 04:02:23.482900] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.746 qpair failed and we were unable to recover it. 00:30:04.746 [2024-07-14 04:02:23.483101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.746 [2024-07-14 04:02:23.483258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.746 [2024-07-14 04:02:23.483282] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.746 qpair failed and we were unable to recover it. 00:30:04.746 [2024-07-14 04:02:23.483433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.746 [2024-07-14 04:02:23.483631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.746 [2024-07-14 04:02:23.483655] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.746 qpair failed and we were unable to recover it. 00:30:04.746 [2024-07-14 04:02:23.483810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.746 [2024-07-14 04:02:23.483993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.746 [2024-07-14 04:02:23.484019] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.746 qpair failed and we were unable to recover it. 00:30:04.746 [2024-07-14 04:02:23.484193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.746 [2024-07-14 04:02:23.484343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.746 [2024-07-14 04:02:23.484367] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.746 qpair failed and we were unable to recover it. 
00:30:04.746 [2024-07-14 04:02:23.484538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.746 [2024-07-14 04:02:23.484714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.746 [2024-07-14 04:02:23.484737] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.746 qpair failed and we were unable to recover it. 00:30:04.746 [2024-07-14 04:02:23.484912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.746 [2024-07-14 04:02:23.485064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.746 [2024-07-14 04:02:23.485090] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.747 qpair failed and we were unable to recover it. 00:30:04.747 [2024-07-14 04:02:23.485267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.747 [2024-07-14 04:02:23.485447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.747 [2024-07-14 04:02:23.485475] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.747 qpair failed and we were unable to recover it. 00:30:04.747 [2024-07-14 04:02:23.485657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.747 [2024-07-14 04:02:23.485797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.747 [2024-07-14 04:02:23.485821] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.747 qpair failed and we were unable to recover it. 00:30:04.747 [2024-07-14 04:02:23.486024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.747 [2024-07-14 04:02:23.486207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.747 [2024-07-14 04:02:23.486231] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.747 qpair failed and we were unable to recover it. 00:30:04.747 [2024-07-14 04:02:23.486404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.747 [2024-07-14 04:02:23.486604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.747 [2024-07-14 04:02:23.486628] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.747 qpair failed and we were unable to recover it. 00:30:04.747 [2024-07-14 04:02:23.486806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.747 [2024-07-14 04:02:23.486993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.747 [2024-07-14 04:02:23.487018] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.747 qpair failed and we were unable to recover it. 
[... the same four-line failure sequence — two posix_sock_create "connect() failed, errno = 111" errors, one nvme_tcp_qpair_connect_sock "sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420", then "qpair failed and we were unable to recover it." — repeats for every retry logged between 04:02:23.487174 and 04:02:23.539372; only the timestamps change ...]
00:30:04.751 [2024-07-14 04:02:23.539526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.751 [2024-07-14 04:02:23.539729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.751 [2024-07-14 04:02:23.539754] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.751 qpair failed and we were unable to recover it. 00:30:04.751 [2024-07-14 04:02:23.539931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.751 [2024-07-14 04:02:23.540074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.751 [2024-07-14 04:02:23.540099] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.751 qpair failed and we were unable to recover it. 00:30:04.751 [2024-07-14 04:02:23.540303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.751 [2024-07-14 04:02:23.540453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.751 [2024-07-14 04:02:23.540479] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.751 qpair failed and we were unable to recover it. 00:30:04.751 [2024-07-14 04:02:23.540654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.751 [2024-07-14 04:02:23.540823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.751 [2024-07-14 04:02:23.540847] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.751 qpair failed and we were unable to recover it. 00:30:04.751 [2024-07-14 04:02:23.541006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.752 [2024-07-14 04:02:23.541181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.752 [2024-07-14 04:02:23.541206] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.752 qpair failed and we were unable to recover it. 00:30:04.752 [2024-07-14 04:02:23.541353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.752 [2024-07-14 04:02:23.541495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.752 [2024-07-14 04:02:23.541519] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.752 qpair failed and we were unable to recover it. 00:30:04.752 [2024-07-14 04:02:23.541697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.752 [2024-07-14 04:02:23.541910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.752 [2024-07-14 04:02:23.541935] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.752 qpair failed and we were unable to recover it. 
00:30:04.752 [2024-07-14 04:02:23.542110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.752 [2024-07-14 04:02:23.542258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.752 [2024-07-14 04:02:23.542282] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.752 qpair failed and we were unable to recover it. 00:30:04.752 [2024-07-14 04:02:23.542457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.752 [2024-07-14 04:02:23.542599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.752 [2024-07-14 04:02:23.542623] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.752 qpair failed and we were unable to recover it. 00:30:04.752 [2024-07-14 04:02:23.542830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.752 [2024-07-14 04:02:23.543018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.752 [2024-07-14 04:02:23.543044] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.752 qpair failed and we were unable to recover it. 00:30:04.752 [2024-07-14 04:02:23.543252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.752 [2024-07-14 04:02:23.543398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.752 [2024-07-14 04:02:23.543423] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.752 qpair failed and we were unable to recover it. 00:30:04.752 [2024-07-14 04:02:23.543574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.752 [2024-07-14 04:02:23.543754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.752 [2024-07-14 04:02:23.543781] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.752 qpair failed and we were unable to recover it. 00:30:04.752 [2024-07-14 04:02:23.543959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.752 [2024-07-14 04:02:23.544163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.752 [2024-07-14 04:02:23.544188] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.752 qpair failed and we were unable to recover it. 00:30:04.752 [2024-07-14 04:02:23.544335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.752 [2024-07-14 04:02:23.544515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.752 [2024-07-14 04:02:23.544540] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.752 qpair failed and we were unable to recover it. 
00:30:04.752 [2024-07-14 04:02:23.544693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.752 [2024-07-14 04:02:23.544862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.752 [2024-07-14 04:02:23.544906] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.752 qpair failed and we were unable to recover it. 00:30:04.752 [2024-07-14 04:02:23.545063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.752 [2024-07-14 04:02:23.545235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.752 [2024-07-14 04:02:23.545260] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.752 qpair failed and we were unable to recover it. 00:30:04.752 [2024-07-14 04:02:23.545471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.752 [2024-07-14 04:02:23.545624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.752 [2024-07-14 04:02:23.545648] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.752 qpair failed and we were unable to recover it. 00:30:04.752 [2024-07-14 04:02:23.545824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.752 [2024-07-14 04:02:23.546010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.752 [2024-07-14 04:02:23.546036] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.752 qpair failed and we were unable to recover it. 00:30:04.752 [2024-07-14 04:02:23.546218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.752 [2024-07-14 04:02:23.546400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.752 [2024-07-14 04:02:23.546424] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.752 qpair failed and we were unable to recover it. 00:30:04.752 [2024-07-14 04:02:23.546601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.752 [2024-07-14 04:02:23.546801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.752 [2024-07-14 04:02:23.546825] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.752 qpair failed and we were unable to recover it. 00:30:04.752 [2024-07-14 04:02:23.547012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.752 [2024-07-14 04:02:23.547161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.752 [2024-07-14 04:02:23.547185] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.752 qpair failed and we were unable to recover it. 
00:30:04.752 [2024-07-14 04:02:23.547366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.752 [2024-07-14 04:02:23.547537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.752 [2024-07-14 04:02:23.547562] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.752 qpair failed and we were unable to recover it. 00:30:04.752 [2024-07-14 04:02:23.547765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.752 [2024-07-14 04:02:23.547935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.752 [2024-07-14 04:02:23.547960] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.752 qpair failed and we were unable to recover it. 00:30:04.752 [2024-07-14 04:02:23.548117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.752 [2024-07-14 04:02:23.548294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.752 [2024-07-14 04:02:23.548329] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.752 qpair failed and we were unable to recover it. 00:30:04.752 [2024-07-14 04:02:23.548500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.752 [2024-07-14 04:02:23.548666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.752 [2024-07-14 04:02:23.548697] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.752 qpair failed and we were unable to recover it. 00:30:04.752 [2024-07-14 04:02:23.548884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.752 [2024-07-14 04:02:23.549100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.752 [2024-07-14 04:02:23.549125] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.752 qpair failed and we were unable to recover it. 00:30:04.752 [2024-07-14 04:02:23.549335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.752 [2024-07-14 04:02:23.549533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.752 [2024-07-14 04:02:23.549557] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.752 qpair failed and we were unable to recover it. 00:30:04.752 [2024-07-14 04:02:23.549713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.752 [2024-07-14 04:02:23.549885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.752 [2024-07-14 04:02:23.549910] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.752 qpair failed and we were unable to recover it. 
00:30:04.752 [2024-07-14 04:02:23.550088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.752 [2024-07-14 04:02:23.550259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.752 [2024-07-14 04:02:23.550285] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.752 qpair failed and we were unable to recover it. 00:30:04.752 [2024-07-14 04:02:23.550464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.752 [2024-07-14 04:02:23.550645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.752 [2024-07-14 04:02:23.550670] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.752 qpair failed and we were unable to recover it. 00:30:04.752 [2024-07-14 04:02:23.550843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.752 [2024-07-14 04:02:23.551009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.752 [2024-07-14 04:02:23.551035] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.752 qpair failed and we were unable to recover it. 00:30:04.752 [2024-07-14 04:02:23.551210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.752 [2024-07-14 04:02:23.551415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.752 [2024-07-14 04:02:23.551450] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.752 qpair failed and we were unable to recover it. 00:30:04.752 [2024-07-14 04:02:23.551628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.752 [2024-07-14 04:02:23.551783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.752 [2024-07-14 04:02:23.551807] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.752 qpair failed and we were unable to recover it. 00:30:04.752 [2024-07-14 04:02:23.551966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.752 [2024-07-14 04:02:23.552153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.752 [2024-07-14 04:02:23.552178] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.752 qpair failed and we were unable to recover it. 00:30:04.752 [2024-07-14 04:02:23.552357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.752 [2024-07-14 04:02:23.552539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.752 [2024-07-14 04:02:23.552566] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.753 qpair failed and we were unable to recover it. 
00:30:04.753 [2024-07-14 04:02:23.552732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.753 [2024-07-14 04:02:23.552910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.753 [2024-07-14 04:02:23.552936] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.753 qpair failed and we were unable to recover it. 00:30:04.753 [2024-07-14 04:02:23.553115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.753 [2024-07-14 04:02:23.553266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.753 [2024-07-14 04:02:23.553292] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.753 qpair failed and we were unable to recover it. 00:30:04.753 [2024-07-14 04:02:23.553476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.753 [2024-07-14 04:02:23.553648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.753 [2024-07-14 04:02:23.553672] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.753 qpair failed and we were unable to recover it. 00:30:04.753 [2024-07-14 04:02:23.553845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.753 [2024-07-14 04:02:23.554010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.753 [2024-07-14 04:02:23.554035] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.753 qpair failed and we were unable to recover it. 00:30:04.753 [2024-07-14 04:02:23.554247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.753 [2024-07-14 04:02:23.554396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.753 [2024-07-14 04:02:23.554421] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.753 qpair failed and we were unable to recover it. 00:30:04.753 [2024-07-14 04:02:23.554596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.753 [2024-07-14 04:02:23.554773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.753 [2024-07-14 04:02:23.554797] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.753 qpair failed and we were unable to recover it. 00:30:04.753 [2024-07-14 04:02:23.554968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.753 [2024-07-14 04:02:23.555141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.753 [2024-07-14 04:02:23.555165] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.753 qpair failed and we were unable to recover it. 
00:30:04.753 [2024-07-14 04:02:23.555344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.753 [2024-07-14 04:02:23.555488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.753 [2024-07-14 04:02:23.555513] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.753 qpair failed and we were unable to recover it. 00:30:04.753 [2024-07-14 04:02:23.555690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.753 [2024-07-14 04:02:23.555876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.753 [2024-07-14 04:02:23.555902] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.753 qpair failed and we were unable to recover it. 00:30:04.753 [2024-07-14 04:02:23.556095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.753 [2024-07-14 04:02:23.556300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.753 [2024-07-14 04:02:23.556324] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.753 qpair failed and we were unable to recover it. 00:30:04.753 [2024-07-14 04:02:23.556531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.753 [2024-07-14 04:02:23.556711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.753 [2024-07-14 04:02:23.556736] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.753 qpair failed and we were unable to recover it. 00:30:04.753 [2024-07-14 04:02:23.556954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.753 [2024-07-14 04:02:23.557142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.753 [2024-07-14 04:02:23.557167] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.753 qpair failed and we were unable to recover it. 00:30:04.753 [2024-07-14 04:02:23.557340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.753 [2024-07-14 04:02:23.557527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.753 [2024-07-14 04:02:23.557551] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.753 qpair failed and we were unable to recover it. 00:30:04.753 [2024-07-14 04:02:23.557728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.753 [2024-07-14 04:02:23.557935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.753 [2024-07-14 04:02:23.557960] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.753 qpair failed and we were unable to recover it. 
00:30:04.753 [2024-07-14 04:02:23.558137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.753 [2024-07-14 04:02:23.558336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.753 [2024-07-14 04:02:23.558365] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.753 qpair failed and we were unable to recover it. 00:30:04.753 [2024-07-14 04:02:23.558516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.753 [2024-07-14 04:02:23.558686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.753 [2024-07-14 04:02:23.558710] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.753 qpair failed and we were unable to recover it. 00:30:04.753 [2024-07-14 04:02:23.558861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.753 [2024-07-14 04:02:23.559047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.753 [2024-07-14 04:02:23.559071] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.753 qpair failed and we were unable to recover it. 00:30:04.753 [2024-07-14 04:02:23.559261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.753 [2024-07-14 04:02:23.559412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.753 [2024-07-14 04:02:23.559436] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.753 qpair failed and we were unable to recover it. 00:30:04.753 [2024-07-14 04:02:23.559591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.753 [2024-07-14 04:02:23.559794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.753 [2024-07-14 04:02:23.559819] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.753 qpair failed and we were unable to recover it. 00:30:04.753 [2024-07-14 04:02:23.560003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.753 [2024-07-14 04:02:23.560157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.753 [2024-07-14 04:02:23.560182] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.753 qpair failed and we were unable to recover it. 00:30:04.753 [2024-07-14 04:02:23.560382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.753 [2024-07-14 04:02:23.560530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.753 [2024-07-14 04:02:23.560554] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.753 qpair failed and we were unable to recover it. 
00:30:04.753 [2024-07-14 04:02:23.560707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.753 [2024-07-14 04:02:23.560889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.753 [2024-07-14 04:02:23.560924] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.753 qpair failed and we were unable to recover it. 00:30:04.753 [2024-07-14 04:02:23.561104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.753 [2024-07-14 04:02:23.561303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.753 [2024-07-14 04:02:23.561328] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.753 qpair failed and we were unable to recover it. 00:30:04.753 [2024-07-14 04:02:23.561503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.753 [2024-07-14 04:02:23.561714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.753 [2024-07-14 04:02:23.561739] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.753 qpair failed and we were unable to recover it. 00:30:04.753 [2024-07-14 04:02:23.561924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.753 [2024-07-14 04:02:23.562107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.753 [2024-07-14 04:02:23.562141] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.753 qpair failed and we were unable to recover it. 00:30:04.753 [2024-07-14 04:02:23.562327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.753 [2024-07-14 04:02:23.562517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.753 [2024-07-14 04:02:23.562545] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.753 qpair failed and we were unable to recover it. 00:30:04.753 [2024-07-14 04:02:23.562720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.753 [2024-07-14 04:02:23.562894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.753 [2024-07-14 04:02:23.562925] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.753 qpair failed and we were unable to recover it. 00:30:04.753 [2024-07-14 04:02:23.563132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.753 [2024-07-14 04:02:23.563309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.753 [2024-07-14 04:02:23.563333] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.753 qpair failed and we were unable to recover it. 
00:30:04.753 [2024-07-14 04:02:23.563489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.753 [2024-07-14 04:02:23.563643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.753 [2024-07-14 04:02:23.563668] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.753 qpair failed and we were unable to recover it. 00:30:04.753 [2024-07-14 04:02:23.563847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.753 [2024-07-14 04:02:23.564029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.753 [2024-07-14 04:02:23.564054] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.753 qpair failed and we were unable to recover it. 00:30:04.753 [2024-07-14 04:02:23.564242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.753 [2024-07-14 04:02:23.564398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.754 [2024-07-14 04:02:23.564423] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.754 qpair failed and we were unable to recover it. 00:30:04.754 [2024-07-14 04:02:23.564609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.754 [2024-07-14 04:02:23.564793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.754 [2024-07-14 04:02:23.564818] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.754 qpair failed and we were unable to recover it. 00:30:04.754 [2024-07-14 04:02:23.564979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.754 [2024-07-14 04:02:23.565136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.754 [2024-07-14 04:02:23.565161] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.754 qpair failed and we were unable to recover it. 00:30:04.754 [2024-07-14 04:02:23.565363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.754 [2024-07-14 04:02:23.565535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.754 [2024-07-14 04:02:23.565559] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.754 qpair failed and we were unable to recover it. 00:30:04.754 [2024-07-14 04:02:23.565767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.754 [2024-07-14 04:02:23.565912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.754 [2024-07-14 04:02:23.565937] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.754 qpair failed and we were unable to recover it. 
00:30:04.754 [2024-07-14 04:02:23.566132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.754 [2024-07-14 04:02:23.566312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.754 [2024-07-14 04:02:23.566337] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.754 qpair failed and we were unable to recover it. 00:30:04.754 [2024-07-14 04:02:23.566508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.754 [2024-07-14 04:02:23.566686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.754 [2024-07-14 04:02:23.566711] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.754 qpair failed and we were unable to recover it. 00:30:04.754 [2024-07-14 04:02:23.566893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.754 [2024-07-14 04:02:23.567055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.754 [2024-07-14 04:02:23.567080] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.754 qpair failed and we were unable to recover it. 00:30:04.754 [2024-07-14 04:02:23.567260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.754 [2024-07-14 04:02:23.567465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.754 [2024-07-14 04:02:23.567489] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.754 qpair failed and we were unable to recover it. 00:30:04.754 [2024-07-14 04:02:23.567642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.754 [2024-07-14 04:02:23.567825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.754 [2024-07-14 04:02:23.567849] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.754 qpair failed and we were unable to recover it. 00:30:04.754 [2024-07-14 04:02:23.568005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.754 [2024-07-14 04:02:23.568155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.754 [2024-07-14 04:02:23.568179] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.754 qpair failed and we were unable to recover it. 00:30:04.754 [2024-07-14 04:02:23.568345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.754 [2024-07-14 04:02:23.568522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.754 [2024-07-14 04:02:23.568546] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.754 qpair failed and we were unable to recover it. 
00:30:04.754 [2024-07-14 04:02:23.568722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.754 [2024-07-14 04:02:23.568904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.754 [2024-07-14 04:02:23.568929] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.754 qpair failed and we were unable to recover it. 00:30:04.754 [2024-07-14 04:02:23.569086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.754 [2024-07-14 04:02:23.569291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.754 [2024-07-14 04:02:23.569315] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.754 qpair failed and we were unable to recover it. 00:30:04.754 [2024-07-14 04:02:23.569483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.754 [2024-07-14 04:02:23.569665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.754 [2024-07-14 04:02:23.569689] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.754 qpair failed and we were unable to recover it. 00:30:04.754 [2024-07-14 04:02:23.569834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.754 [2024-07-14 04:02:23.570003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.754 [2024-07-14 04:02:23.570029] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.754 qpair failed and we were unable to recover it. 00:30:04.754 [2024-07-14 04:02:23.570179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.754 [2024-07-14 04:02:23.570383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.754 [2024-07-14 04:02:23.570408] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.754 qpair failed and we were unable to recover it. 00:30:04.754 [2024-07-14 04:02:23.570584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.754 [2024-07-14 04:02:23.570787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.754 [2024-07-14 04:02:23.570812] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.754 qpair failed and we were unable to recover it. 00:30:04.754 [2024-07-14 04:02:23.570999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.754 [2024-07-14 04:02:23.571155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.754 [2024-07-14 04:02:23.571180] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.754 qpair failed and we were unable to recover it. 
00:30:04.754 [2024-07-14 04:02:23.571337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.754 [2024-07-14 04:02:23.571550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.754 [2024-07-14 04:02:23.571574] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.754 qpair failed and we were unable to recover it. 00:30:04.754 [2024-07-14 04:02:23.571729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.754 [2024-07-14 04:02:23.571890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.754 [2024-07-14 04:02:23.571922] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.754 qpair failed and we were unable to recover it. 00:30:04.754 [2024-07-14 04:02:23.572104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.754 [2024-07-14 04:02:23.572281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.754 [2024-07-14 04:02:23.572305] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.754 qpair failed and we were unable to recover it. 00:30:04.754 [2024-07-14 04:02:23.572450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.754 [2024-07-14 04:02:23.572632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.754 [2024-07-14 04:02:23.572656] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.754 qpair failed and we were unable to recover it. 00:30:04.754 [2024-07-14 04:02:23.572837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.754 [2024-07-14 04:02:23.573042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.754 [2024-07-14 04:02:23.573067] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.754 qpair failed and we were unable to recover it. 00:30:04.754 [2024-07-14 04:02:23.573294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.754 [2024-07-14 04:02:23.573469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.754 [2024-07-14 04:02:23.573494] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.754 qpair failed and we were unable to recover it. 00:30:04.754 [2024-07-14 04:02:23.573649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.754 [2024-07-14 04:02:23.573792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.754 [2024-07-14 04:02:23.573820] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.754 qpair failed and we were unable to recover it. 
00:30:04.754 [2024-07-14 04:02:23.573981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.754 [2024-07-14 04:02:23.574125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.754 [2024-07-14 04:02:23.574150] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.754 qpair failed and we were unable to recover it. 00:30:04.755 [2024-07-14 04:02:23.574370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.755 [2024-07-14 04:02:23.574547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.755 [2024-07-14 04:02:23.574572] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.755 qpair failed and we were unable to recover it. 00:30:04.755 [2024-07-14 04:02:23.574756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.755 [2024-07-14 04:02:23.574960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.755 [2024-07-14 04:02:23.574986] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.755 qpair failed and we were unable to recover it. 00:30:04.755 [2024-07-14 04:02:23.575166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.755 [2024-07-14 04:02:23.575368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.755 [2024-07-14 04:02:23.575393] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.755 qpair failed and we were unable to recover it. 00:30:04.755 [2024-07-14 04:02:23.575597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.755 [2024-07-14 04:02:23.575798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.755 [2024-07-14 04:02:23.575823] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.755 qpair failed and we were unable to recover it. 00:30:04.755 [2024-07-14 04:02:23.575976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.755 [2024-07-14 04:02:23.576127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.755 [2024-07-14 04:02:23.576160] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.755 qpair failed and we were unable to recover it. 00:30:04.755 [2024-07-14 04:02:23.576361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.755 [2024-07-14 04:02:23.576559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.755 [2024-07-14 04:02:23.576584] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.755 qpair failed and we were unable to recover it. 
00:30:04.755 [2024-07-14 04:02:23.576753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.755 [2024-07-14 04:02:23.576954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.755 [2024-07-14 04:02:23.576979] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.755 qpair failed and we were unable to recover it. 00:30:04.755 [2024-07-14 04:02:23.577132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.755 [2024-07-14 04:02:23.577283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.755 [2024-07-14 04:02:23.577308] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.755 qpair failed and we were unable to recover it. 00:30:04.755 [2024-07-14 04:02:23.577492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.755 [2024-07-14 04:02:23.577670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.755 [2024-07-14 04:02:23.577696] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.755 qpair failed and we were unable to recover it. 00:30:04.755 [2024-07-14 04:02:23.577908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.755 [2024-07-14 04:02:23.578072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.755 [2024-07-14 04:02:23.578097] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.755 qpair failed and we were unable to recover it. 00:30:04.755 [2024-07-14 04:02:23.578285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.755 [2024-07-14 04:02:23.578456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.755 [2024-07-14 04:02:23.578480] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.755 qpair failed and we were unable to recover it. 00:30:04.755 [2024-07-14 04:02:23.578654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.755 [2024-07-14 04:02:23.578797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.755 [2024-07-14 04:02:23.578821] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.755 qpair failed and we were unable to recover it. 00:30:04.755 [2024-07-14 04:02:23.578977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.755 [2024-07-14 04:02:23.579133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.755 [2024-07-14 04:02:23.579157] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.755 qpair failed and we were unable to recover it. 
00:30:04.755 [2024-07-14 04:02:23.579370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.755 [2024-07-14 04:02:23.579548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.755 [2024-07-14 04:02:23.579572] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.755 qpair failed and we were unable to recover it. 00:30:04.755 [2024-07-14 04:02:23.579774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.755 [2024-07-14 04:02:23.579926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.755 [2024-07-14 04:02:23.579954] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.755 qpair failed and we were unable to recover it. 00:30:04.755 [2024-07-14 04:02:23.580111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.755 [2024-07-14 04:02:23.580284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.755 [2024-07-14 04:02:23.580309] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.755 qpair failed and we were unable to recover it. 00:30:04.755 [2024-07-14 04:02:23.580463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.755 [2024-07-14 04:02:23.580615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.755 [2024-07-14 04:02:23.580639] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.755 qpair failed and we were unable to recover it. 00:30:04.755 [2024-07-14 04:02:23.580813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.755 [2024-07-14 04:02:23.580967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.755 [2024-07-14 04:02:23.580992] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.755 qpair failed and we were unable to recover it. 00:30:04.755 [2024-07-14 04:02:23.581150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.755 [2024-07-14 04:02:23.581310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.755 [2024-07-14 04:02:23.581334] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.755 qpair failed and we were unable to recover it. 00:30:04.755 [2024-07-14 04:02:23.581514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.755 [2024-07-14 04:02:23.581719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.755 [2024-07-14 04:02:23.581743] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.755 qpair failed and we were unable to recover it. 
00:30:04.755 [2024-07-14 04:02:23.581900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.755 [2024-07-14 04:02:23.582054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.755 [2024-07-14 04:02:23.582080] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.755 qpair failed and we were unable to recover it. 00:30:04.755 [2024-07-14 04:02:23.582243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.755 [2024-07-14 04:02:23.582417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.755 [2024-07-14 04:02:23.582441] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.755 qpair failed and we were unable to recover it. 00:30:04.755 [2024-07-14 04:02:23.582618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.755 [2024-07-14 04:02:23.582793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.755 [2024-07-14 04:02:23.582818] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.755 qpair failed and we were unable to recover it. 00:30:04.755 [2024-07-14 04:02:23.582988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.755 [2024-07-14 04:02:23.583164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.755 [2024-07-14 04:02:23.583190] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.755 qpair failed and we were unable to recover it. 00:30:04.755 [2024-07-14 04:02:23.583362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.755 [2024-07-14 04:02:23.583537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.755 [2024-07-14 04:02:23.583562] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.755 qpair failed and we were unable to recover it. 00:30:04.755 [2024-07-14 04:02:23.583718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.755 [2024-07-14 04:02:23.583877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.755 [2024-07-14 04:02:23.583904] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.755 qpair failed and we were unable to recover it. 00:30:04.755 [2024-07-14 04:02:23.584099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.755 [2024-07-14 04:02:23.584275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.755 [2024-07-14 04:02:23.584299] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.755 qpair failed and we were unable to recover it. 
00:30:04.755 [2024-07-14 04:02:23.584474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.755 [2024-07-14 04:02:23.584649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.755 [2024-07-14 04:02:23.584674] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.755 qpair failed and we were unable to recover it. 00:30:04.755 [2024-07-14 04:02:23.584827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.755 [2024-07-14 04:02:23.584989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.755 [2024-07-14 04:02:23.585015] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.755 qpair failed and we were unable to recover it. 00:30:04.755 [2024-07-14 04:02:23.585169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.755 [2024-07-14 04:02:23.585371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.755 [2024-07-14 04:02:23.585398] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.755 qpair failed and we were unable to recover it. 00:30:04.755 [2024-07-14 04:02:23.585584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.755 [2024-07-14 04:02:23.585787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.756 [2024-07-14 04:02:23.585811] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.756 qpair failed and we were unable to recover it. 00:30:04.756 [2024-07-14 04:02:23.585987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.756 [2024-07-14 04:02:23.586141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.756 [2024-07-14 04:02:23.586165] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.756 qpair failed and we were unable to recover it. 00:30:04.756 [2024-07-14 04:02:23.586311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.756 [2024-07-14 04:02:23.586516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.756 [2024-07-14 04:02:23.586540] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.756 qpair failed and we were unable to recover it. 00:30:04.756 [2024-07-14 04:02:23.586694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.756 [2024-07-14 04:02:23.586838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.756 [2024-07-14 04:02:23.586862] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.756 qpair failed and we were unable to recover it. 
00:30:04.756 [2024-07-14 04:02:23.587055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.756 [2024-07-14 04:02:23.587243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.756 [2024-07-14 04:02:23.587268] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.756 qpair failed and we were unable to recover it. 00:30:04.756 [2024-07-14 04:02:23.587447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.756 [2024-07-14 04:02:23.587590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.756 [2024-07-14 04:02:23.587615] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.756 qpair failed and we were unable to recover it. 00:30:04.756 [2024-07-14 04:02:23.587763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.756 [2024-07-14 04:02:23.587921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.756 [2024-07-14 04:02:23.587947] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.756 qpair failed and we were unable to recover it. 00:30:04.756 [2024-07-14 04:02:23.588127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.756 [2024-07-14 04:02:23.588334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.756 [2024-07-14 04:02:23.588359] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.756 qpair failed and we were unable to recover it. 00:30:04.756 [2024-07-14 04:02:23.588564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.756 [2024-07-14 04:02:23.588747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.756 [2024-07-14 04:02:23.588771] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.756 qpair failed and we were unable to recover it. 00:30:04.756 [2024-07-14 04:02:23.588919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.756 [2024-07-14 04:02:23.589077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.756 [2024-07-14 04:02:23.589101] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.756 qpair failed and we were unable to recover it. 00:30:04.756 [2024-07-14 04:02:23.589278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.756 [2024-07-14 04:02:23.589450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.756 [2024-07-14 04:02:23.589475] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.756 qpair failed and we were unable to recover it. 
00:30:04.756 [2024-07-14 04:02:23.589740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.756 [2024-07-14 04:02:23.589959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.756 [2024-07-14 04:02:23.589985] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.756 qpair failed and we were unable to recover it. 00:30:04.756 [2024-07-14 04:02:23.590163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.756 [2024-07-14 04:02:23.590360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.756 [2024-07-14 04:02:23.590385] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.756 qpair failed and we were unable to recover it. 00:30:04.756 [2024-07-14 04:02:23.590589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.756 [2024-07-14 04:02:23.590738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.756 [2024-07-14 04:02:23.590762] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.756 qpair failed and we were unable to recover it. 00:30:04.756 [2024-07-14 04:02:23.590911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.756 [2024-07-14 04:02:23.591090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.756 [2024-07-14 04:02:23.591115] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.756 qpair failed and we were unable to recover it. 00:30:04.756 [2024-07-14 04:02:23.591296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.756 [2024-07-14 04:02:23.591572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.756 [2024-07-14 04:02:23.591597] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.756 qpair failed and we were unable to recover it. 00:30:04.756 [2024-07-14 04:02:23.591753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.756 [2024-07-14 04:02:23.591963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.756 [2024-07-14 04:02:23.591989] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.756 qpair failed and we were unable to recover it. 00:30:04.756 [2024-07-14 04:02:23.592144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.756 [2024-07-14 04:02:23.592316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.756 [2024-07-14 04:02:23.592341] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.756 qpair failed and we were unable to recover it. 
00:30:04.756 [2024-07-14 04:02:23.592488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.756 [2024-07-14 04:02:23.592663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.756 [2024-07-14 04:02:23.592688] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.756 qpair failed and we were unable to recover it. 00:30:04.756 [2024-07-14 04:02:23.592861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.756 [2024-07-14 04:02:23.593025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.756 [2024-07-14 04:02:23.593054] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.756 qpair failed and we were unable to recover it. 00:30:04.756 [2024-07-14 04:02:23.593209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.756 [2024-07-14 04:02:23.593356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.756 [2024-07-14 04:02:23.593380] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.756 qpair failed and we were unable to recover it. 00:30:04.756 [2024-07-14 04:02:23.593583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.756 [2024-07-14 04:02:23.593766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.756 [2024-07-14 04:02:23.593790] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.756 qpair failed and we were unable to recover it. 00:30:04.756 [2024-07-14 04:02:23.593935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.756 [2024-07-14 04:02:23.594090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.756 [2024-07-14 04:02:23.594114] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.756 qpair failed and we were unable to recover it. 00:30:04.756 [2024-07-14 04:02:23.594321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.756 [2024-07-14 04:02:23.594503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.756 [2024-07-14 04:02:23.594527] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.756 qpair failed and we were unable to recover it. 00:30:04.756 [2024-07-14 04:02:23.594703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.756 [2024-07-14 04:02:23.594903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.756 [2024-07-14 04:02:23.594929] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.756 qpair failed and we were unable to recover it. 
00:30:04.756 [2024-07-14 04:02:23.595108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.756 [2024-07-14 04:02:23.595260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.756 [2024-07-14 04:02:23.595285] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.756 qpair failed and we were unable to recover it. 00:30:04.756 [2024-07-14 04:02:23.595437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.756 [2024-07-14 04:02:23.595588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.756 [2024-07-14 04:02:23.595612] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.756 qpair failed and we were unable to recover it. 00:30:04.756 [2024-07-14 04:02:23.595820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.756 [2024-07-14 04:02:23.596003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.756 [2024-07-14 04:02:23.596030] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.756 qpair failed and we were unable to recover it. 00:30:04.756 [2024-07-14 04:02:23.596209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.756 [2024-07-14 04:02:23.596357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.756 [2024-07-14 04:02:23.596381] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.756 qpair failed and we were unable to recover it. 00:30:04.756 [2024-07-14 04:02:23.596637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.756 [2024-07-14 04:02:23.596837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.756 [2024-07-14 04:02:23.596862] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.756 qpair failed and we were unable to recover it. 00:30:04.756 [2024-07-14 04:02:23.597028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.756 [2024-07-14 04:02:23.597178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.757 [2024-07-14 04:02:23.597202] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.757 qpair failed and we were unable to recover it. 00:30:04.757 [2024-07-14 04:02:23.597375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.757 [2024-07-14 04:02:23.597555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.757 [2024-07-14 04:02:23.597581] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.757 qpair failed and we were unable to recover it. 
00:30:04.757 [2024-07-14 04:02:23.597781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.757 [2024-07-14 04:02:23.597984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.757 [2024-07-14 04:02:23.598010] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.757 qpair failed and we were unable to recover it. 00:30:04.757 [2024-07-14 04:02:23.598193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.757 [2024-07-14 04:02:23.598383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.757 [2024-07-14 04:02:23.598406] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.757 qpair failed and we were unable to recover it. 00:30:04.757 [2024-07-14 04:02:23.598628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.757 [2024-07-14 04:02:23.598801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.757 [2024-07-14 04:02:23.598826] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.757 qpair failed and we were unable to recover it. 00:30:04.757 [2024-07-14 04:02:23.599068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.757 [2024-07-14 04:02:23.599253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.757 [2024-07-14 04:02:23.599278] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.757 qpair failed and we were unable to recover it. 00:30:04.757 [2024-07-14 04:02:23.599483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.757 [2024-07-14 04:02:23.599636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.757 [2024-07-14 04:02:23.599661] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.757 qpair failed and we were unable to recover it. 00:30:04.757 [2024-07-14 04:02:23.599863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.757 [2024-07-14 04:02:23.600029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.757 [2024-07-14 04:02:23.600054] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.757 qpair failed and we were unable to recover it. 00:30:04.757 [2024-07-14 04:02:23.600232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.757 [2024-07-14 04:02:23.600389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.757 [2024-07-14 04:02:23.600413] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.757 qpair failed and we were unable to recover it. 
00:30:04.757 [2024-07-14 04:02:23.600618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.757 [2024-07-14 04:02:23.600833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.757 [2024-07-14 04:02:23.600857] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.757 qpair failed and we were unable to recover it. 00:30:04.757 [2024-07-14 04:02:23.601058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.757 [2024-07-14 04:02:23.601255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.757 [2024-07-14 04:02:23.601280] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.757 qpair failed and we were unable to recover it. 00:30:04.757 [2024-07-14 04:02:23.601463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.757 [2024-07-14 04:02:23.601673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.757 [2024-07-14 04:02:23.601697] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.757 qpair failed and we were unable to recover it. 00:30:04.757 [2024-07-14 04:02:23.601877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.757 [2024-07-14 04:02:23.602054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.757 [2024-07-14 04:02:23.602078] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.757 qpair failed and we were unable to recover it. 00:30:04.757 [2024-07-14 04:02:23.602289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.757 [2024-07-14 04:02:23.602459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.757 [2024-07-14 04:02:23.602483] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.757 qpair failed and we were unable to recover it. 00:30:04.757 [2024-07-14 04:02:23.602624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.757 [2024-07-14 04:02:23.602794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.757 [2024-07-14 04:02:23.602818] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.757 qpair failed and we were unable to recover it. 00:30:04.757 [2024-07-14 04:02:23.603013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.757 [2024-07-14 04:02:23.603153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.757 [2024-07-14 04:02:23.603178] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.757 qpair failed and we were unable to recover it. 
00:30:04.757 [2024-07-14 04:02:23.603384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.757 [2024-07-14 04:02:23.603566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.757 [2024-07-14 04:02:23.603590] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.757 qpair failed and we were unable to recover it. 00:30:04.757 [2024-07-14 04:02:23.603757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.757 [2024-07-14 04:02:23.604015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.757 [2024-07-14 04:02:23.604040] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.757 qpair failed and we were unable to recover it. 00:30:04.757 [2024-07-14 04:02:23.604244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.757 [2024-07-14 04:02:23.604422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.757 [2024-07-14 04:02:23.604446] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.757 qpair failed and we were unable to recover it. 00:30:04.757 [2024-07-14 04:02:23.604590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.757 [2024-07-14 04:02:23.604766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.757 [2024-07-14 04:02:23.604790] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.757 qpair failed and we were unable to recover it. 00:30:04.757 [2024-07-14 04:02:23.604954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.757 [2024-07-14 04:02:23.605136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.757 [2024-07-14 04:02:23.605161] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.757 qpair failed and we were unable to recover it. 00:30:04.757 [2024-07-14 04:02:23.605336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.757 [2024-07-14 04:02:23.605534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.757 [2024-07-14 04:02:23.605559] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.757 qpair failed and we were unable to recover it. 00:30:04.757 [2024-07-14 04:02:23.605743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.757 [2024-07-14 04:02:23.605925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.757 [2024-07-14 04:02:23.605950] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.757 qpair failed and we were unable to recover it. 
00:30:04.757 [2024-07-14 04:02:23.606122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.757 [2024-07-14 04:02:23.606271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.757 [2024-07-14 04:02:23.606296] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.757 qpair failed and we were unable to recover it. 00:30:04.757 [2024-07-14 04:02:23.606472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.757 [2024-07-14 04:02:23.606643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.757 [2024-07-14 04:02:23.606667] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.757 qpair failed and we were unable to recover it. 00:30:04.757 [2024-07-14 04:02:23.606846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.757 [2024-07-14 04:02:23.607025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.757 [2024-07-14 04:02:23.607051] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.757 qpair failed and we were unable to recover it. 00:30:04.757 [2024-07-14 04:02:23.607250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.757 [2024-07-14 04:02:23.607457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.757 [2024-07-14 04:02:23.607480] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.757 qpair failed and we were unable to recover it. 00:30:04.757 [2024-07-14 04:02:23.607721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.757 [2024-07-14 04:02:23.607877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.757 [2024-07-14 04:02:23.607902] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.757 qpair failed and we were unable to recover it. 00:30:04.757 [2024-07-14 04:02:23.608080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.757 [2024-07-14 04:02:23.608297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.757 [2024-07-14 04:02:23.608321] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.757 qpair failed and we were unable to recover it. 00:30:04.757 [2024-07-14 04:02:23.608499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.757 [2024-07-14 04:02:23.608651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.757 [2024-07-14 04:02:23.608676] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.757 qpair failed and we were unable to recover it. 
00:30:04.757 [2024-07-14 04:02:23.608851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.757 [2024-07-14 04:02:23.609045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.757 [2024-07-14 04:02:23.609070] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.757 qpair failed and we were unable to recover it. 00:30:04.757 [2024-07-14 04:02:23.609233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.757 [2024-07-14 04:02:23.609401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.758 [2024-07-14 04:02:23.609425] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.758 qpair failed and we were unable to recover it. 00:30:04.758 [2024-07-14 04:02:23.609643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.758 [2024-07-14 04:02:23.609850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.758 [2024-07-14 04:02:23.609883] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.758 qpair failed and we were unable to recover it. 00:30:04.758 [2024-07-14 04:02:23.610084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.758 [2024-07-14 04:02:23.610257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.758 [2024-07-14 04:02:23.610281] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.758 qpair failed and we were unable to recover it. 00:30:04.758 [2024-07-14 04:02:23.610432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.758 [2024-07-14 04:02:23.610611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.758 [2024-07-14 04:02:23.610635] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.758 qpair failed and we were unable to recover it. 00:30:04.758 [2024-07-14 04:02:23.610812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.758 [2024-07-14 04:02:23.610962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.758 [2024-07-14 04:02:23.610987] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.758 qpair failed and we were unable to recover it. 00:30:04.758 [2024-07-14 04:02:23.611162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.758 [2024-07-14 04:02:23.611374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.758 [2024-07-14 04:02:23.611398] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.758 qpair failed and we were unable to recover it. 
00:30:04.758 [2024-07-14 04:02:23.611600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.758 [2024-07-14 04:02:23.611754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.758 [2024-07-14 04:02:23.611779] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.758 qpair failed and we were unable to recover it. 00:30:04.758 [2024-07-14 04:02:23.611930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.758 [2024-07-14 04:02:23.612083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.758 [2024-07-14 04:02:23.612107] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.758 qpair failed and we were unable to recover it. 00:30:04.758 [2024-07-14 04:02:23.612314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.758 [2024-07-14 04:02:23.612493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.758 [2024-07-14 04:02:23.612517] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.758 qpair failed and we were unable to recover it. 00:30:04.758 [2024-07-14 04:02:23.612691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.758 [2024-07-14 04:02:23.612876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.758 [2024-07-14 04:02:23.612905] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.758 qpair failed and we were unable to recover it. 00:30:04.758 [2024-07-14 04:02:23.613061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.758 [2024-07-14 04:02:23.613242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.758 [2024-07-14 04:02:23.613267] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.758 qpair failed and we were unable to recover it. 00:30:04.758 [2024-07-14 04:02:23.613448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.758 [2024-07-14 04:02:23.613602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.758 [2024-07-14 04:02:23.613626] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.758 qpair failed and we were unable to recover it. 00:30:04.758 [2024-07-14 04:02:23.613767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.758 [2024-07-14 04:02:23.613922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.758 [2024-07-14 04:02:23.613948] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.758 qpair failed and we were unable to recover it. 
00:30:04.758 [2024-07-14 04:02:23.614152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.758 [2024-07-14 04:02:23.614302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.758 [2024-07-14 04:02:23.614326] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.758 qpair failed and we were unable to recover it. 00:30:04.758 [2024-07-14 04:02:23.614474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.758 [2024-07-14 04:02:23.614682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.758 [2024-07-14 04:02:23.614707] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.758 qpair failed and we were unable to recover it. 00:30:04.758 [2024-07-14 04:02:23.614887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.758 [2024-07-14 04:02:23.615072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.758 [2024-07-14 04:02:23.615097] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.758 qpair failed and we were unable to recover it. 00:30:04.758 [2024-07-14 04:02:23.615300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.758 [2024-07-14 04:02:23.615473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.758 [2024-07-14 04:02:23.615497] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.758 qpair failed and we were unable to recover it. 00:30:04.758 [2024-07-14 04:02:23.615650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.758 [2024-07-14 04:02:23.615852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.758 [2024-07-14 04:02:23.615883] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.758 qpair failed and we were unable to recover it. 00:30:04.758 [2024-07-14 04:02:23.616042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.758 [2024-07-14 04:02:23.616190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.758 [2024-07-14 04:02:23.616215] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.758 qpair failed and we were unable to recover it. 00:30:04.758 [2024-07-14 04:02:23.616392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.758 [2024-07-14 04:02:23.616566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.758 [2024-07-14 04:02:23.616591] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.758 qpair failed and we were unable to recover it. 
00:30:04.758 [2024-07-14 04:02:23.616772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.758 [2024-07-14 04:02:23.616980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.758 [2024-07-14 04:02:23.617005] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.758 qpair failed and we were unable to recover it. 00:30:04.758 [2024-07-14 04:02:23.617195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.758 [2024-07-14 04:02:23.617371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.758 [2024-07-14 04:02:23.617395] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.758 qpair failed and we were unable to recover it. 00:30:04.758 [2024-07-14 04:02:23.617580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.758 [2024-07-14 04:02:23.617744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.758 [2024-07-14 04:02:23.617770] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.758 qpair failed and we were unable to recover it. 00:30:04.758 [2024-07-14 04:02:23.617957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.758 [2024-07-14 04:02:23.618157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.758 [2024-07-14 04:02:23.618182] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.758 qpair failed and we were unable to recover it. 00:30:04.758 [2024-07-14 04:02:23.618356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.758 [2024-07-14 04:02:23.618507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.758 [2024-07-14 04:02:23.618531] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.758 qpair failed and we were unable to recover it. 00:30:04.758 [2024-07-14 04:02:23.618751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.758 [2024-07-14 04:02:23.618925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.758 [2024-07-14 04:02:23.618951] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.758 qpair failed and we were unable to recover it. 00:30:04.758 [2024-07-14 04:02:23.619132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.758 [2024-07-14 04:02:23.619337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.758 [2024-07-14 04:02:23.619362] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.758 qpair failed and we were unable to recover it. 
00:30:04.758 [2024-07-14 04:02:23.619521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.758 [2024-07-14 04:02:23.619699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.758 [2024-07-14 04:02:23.619723] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.758 qpair failed and we were unable to recover it. 00:30:04.758 [2024-07-14 04:02:23.619926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.758 [2024-07-14 04:02:23.620115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.758 [2024-07-14 04:02:23.620141] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.758 qpair failed and we were unable to recover it. 00:30:04.758 [2024-07-14 04:02:23.620321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.758 [2024-07-14 04:02:23.620496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.758 [2024-07-14 04:02:23.620520] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.758 qpair failed and we were unable to recover it. 00:30:04.758 [2024-07-14 04:02:23.620696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.758 [2024-07-14 04:02:23.620900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.759 [2024-07-14 04:02:23.620925] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.759 qpair failed and we were unable to recover it. 00:30:04.759 [2024-07-14 04:02:23.621098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.759 [2024-07-14 04:02:23.621249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.759 [2024-07-14 04:02:23.621274] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.759 qpair failed and we were unable to recover it. 00:30:04.759 [2024-07-14 04:02:23.621452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.759 [2024-07-14 04:02:23.621661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.759 [2024-07-14 04:02:23.621685] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.759 qpair failed and we were unable to recover it. 00:30:04.759 [2024-07-14 04:02:23.621835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.759 [2024-07-14 04:02:23.622036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.759 [2024-07-14 04:02:23.622062] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.759 qpair failed and we were unable to recover it. 
00:30:04.759 [2024-07-14 04:02:23.622266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.759 [2024-07-14 04:02:23.622471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.759 [2024-07-14 04:02:23.622495] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.759 qpair failed and we were unable to recover it. 00:30:04.759 [2024-07-14 04:02:23.622679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.759 [2024-07-14 04:02:23.622855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.759 [2024-07-14 04:02:23.622890] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.759 qpair failed and we were unable to recover it. 00:30:04.759 [2024-07-14 04:02:23.623048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.759 [2024-07-14 04:02:23.623248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.759 [2024-07-14 04:02:23.623273] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.759 qpair failed and we were unable to recover it. 00:30:04.759 [2024-07-14 04:02:23.623453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.759 [2024-07-14 04:02:23.623664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.759 [2024-07-14 04:02:23.623688] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.759 qpair failed and we were unable to recover it. 00:30:04.759 [2024-07-14 04:02:23.623895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.759 [2024-07-14 04:02:23.624045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.759 [2024-07-14 04:02:23.624069] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.759 qpair failed and we were unable to recover it. 00:30:04.759 [2024-07-14 04:02:23.624263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.759 [2024-07-14 04:02:23.624437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.759 [2024-07-14 04:02:23.624462] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.759 qpair failed and we were unable to recover it. 00:30:04.759 [2024-07-14 04:02:23.624645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.759 [2024-07-14 04:02:23.624826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.759 [2024-07-14 04:02:23.624851] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.759 qpair failed and we were unable to recover it. 
00:30:04.759 [2024-07-14 04:02:23.625033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.759 [2024-07-14 04:02:23.625204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.759 [2024-07-14 04:02:23.625229] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.759 qpair failed and we were unable to recover it. 00:30:04.759 [2024-07-14 04:02:23.625429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.759 [2024-07-14 04:02:23.625573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.759 [2024-07-14 04:02:23.625598] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.759 qpair failed and we were unable to recover it. 00:30:04.759 [2024-07-14 04:02:23.625770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.759 [2024-07-14 04:02:23.625942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.759 [2024-07-14 04:02:23.625967] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.759 qpair failed and we were unable to recover it. 00:30:04.759 [2024-07-14 04:02:23.626146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.759 [2024-07-14 04:02:23.626328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.759 [2024-07-14 04:02:23.626353] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.759 qpair failed and we were unable to recover it. 00:30:04.759 [2024-07-14 04:02:23.626527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.759 [2024-07-14 04:02:23.626702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.759 [2024-07-14 04:02:23.626726] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.759 qpair failed and we were unable to recover it. 00:30:04.759 [2024-07-14 04:02:23.626923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.759 [2024-07-14 04:02:23.627081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.759 [2024-07-14 04:02:23.627108] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.759 qpair failed and we were unable to recover it. 00:30:04.759 [2024-07-14 04:02:23.627268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.759 [2024-07-14 04:02:23.627448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.759 [2024-07-14 04:02:23.627473] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.759 qpair failed and we were unable to recover it. 
00:30:04.759 [2024-07-14 04:02:23.627678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.759 [2024-07-14 04:02:23.627826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.759 [2024-07-14 04:02:23.627850] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.759 qpair failed and we were unable to recover it. 00:30:04.759 [2024-07-14 04:02:23.628039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.759 [2024-07-14 04:02:23.628191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.759 [2024-07-14 04:02:23.628215] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.759 qpair failed and we were unable to recover it. 00:30:04.759 [2024-07-14 04:02:23.628397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.759 [2024-07-14 04:02:23.628603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.759 [2024-07-14 04:02:23.628635] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.759 qpair failed and we were unable to recover it. 00:30:04.759 [2024-07-14 04:02:23.628816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.759 [2024-07-14 04:02:23.628965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.759 [2024-07-14 04:02:23.628990] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.759 qpair failed and we were unable to recover it. 00:30:04.759 [2024-07-14 04:02:23.629169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.759 [2024-07-14 04:02:23.629354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.759 [2024-07-14 04:02:23.629379] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.759 qpair failed and we were unable to recover it. 00:30:04.759 [2024-07-14 04:02:23.629574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.759 [2024-07-14 04:02:23.629731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.759 [2024-07-14 04:02:23.629757] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.759 qpair failed and we were unable to recover it. 00:30:04.759 [2024-07-14 04:02:23.629965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.759 [2024-07-14 04:02:23.630170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.759 [2024-07-14 04:02:23.630195] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.759 qpair failed and we were unable to recover it. 
00:30:04.759 [2024-07-14 04:02:23.630344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.759 [2024-07-14 04:02:23.630528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.759 [2024-07-14 04:02:23.630567] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.759 qpair failed and we were unable to recover it. 00:30:04.759 [2024-07-14 04:02:23.630777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.759 [2024-07-14 04:02:23.630952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.759 [2024-07-14 04:02:23.630977] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.759 qpair failed and we were unable to recover it. 00:30:04.759 [2024-07-14 04:02:23.631154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.760 [2024-07-14 04:02:23.631301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.760 [2024-07-14 04:02:23.631325] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.760 qpair failed and we were unable to recover it. 00:30:04.760 [2024-07-14 04:02:23.631547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.760 [2024-07-14 04:02:23.631725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.760 [2024-07-14 04:02:23.631749] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.760 qpair failed and we were unable to recover it. 00:30:04.760 [2024-07-14 04:02:23.631928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.760 [2024-07-14 04:02:23.632136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.760 [2024-07-14 04:02:23.632160] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.760 qpair failed and we were unable to recover it. 00:30:04.760 [2024-07-14 04:02:23.632352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.760 [2024-07-14 04:02:23.632551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.760 [2024-07-14 04:02:23.632575] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.760 qpair failed and we were unable to recover it. 00:30:04.760 [2024-07-14 04:02:23.632781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.760 [2024-07-14 04:02:23.632979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.760 [2024-07-14 04:02:23.633004] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.760 qpair failed and we were unable to recover it. 
00:30:04.760 [2024-07-14 04:02:23.633190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.760 [2024-07-14 04:02:23.633368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.760 [2024-07-14 04:02:23.633392] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.760 qpair failed and we were unable to recover it. 00:30:04.760 [2024-07-14 04:02:23.633544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.760 [2024-07-14 04:02:23.633718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.760 [2024-07-14 04:02:23.633757] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.760 qpair failed and we were unable to recover it. 00:30:04.760 [2024-07-14 04:02:23.633976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.760 [2024-07-14 04:02:23.634157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.760 [2024-07-14 04:02:23.634182] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.760 qpair failed and we were unable to recover it. 00:30:04.760 [2024-07-14 04:02:23.634337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.760 [2024-07-14 04:02:23.634530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.760 [2024-07-14 04:02:23.634554] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.760 qpair failed and we were unable to recover it. 00:30:04.760 [2024-07-14 04:02:23.634769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.760 [2024-07-14 04:02:23.635016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.760 [2024-07-14 04:02:23.635056] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.760 qpair failed and we were unable to recover it. 00:30:04.760 [2024-07-14 04:02:23.635222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.760 [2024-07-14 04:02:23.635422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.760 [2024-07-14 04:02:23.635447] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.760 qpair failed and we were unable to recover it. 00:30:04.760 [2024-07-14 04:02:23.636449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.760 [2024-07-14 04:02:23.636645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.760 [2024-07-14 04:02:23.636670] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.760 qpair failed and we were unable to recover it. 
00:30:04.760 [2024-07-14 04:02:23.636877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.760 [2024-07-14 04:02:23.637060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.760 [2024-07-14 04:02:23.637086] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.760 qpair failed and we were unable to recover it. 00:30:04.760 [2024-07-14 04:02:23.637278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.760 [2024-07-14 04:02:23.637481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.760 [2024-07-14 04:02:23.637506] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.760 qpair failed and we were unable to recover it. 00:30:04.760 [2024-07-14 04:02:23.637692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.760 [2024-07-14 04:02:23.637894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.760 [2024-07-14 04:02:23.637928] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.760 qpair failed and we were unable to recover it. 00:30:04.760 [2024-07-14 04:02:23.638085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.760 [2024-07-14 04:02:23.638240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.760 [2024-07-14 04:02:23.638264] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.760 qpair failed and we were unable to recover it. 00:30:04.760 [2024-07-14 04:02:23.638451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.760 [2024-07-14 04:02:23.638628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.760 [2024-07-14 04:02:23.638653] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.760 qpair failed and we were unable to recover it. 00:30:04.760 [2024-07-14 04:02:23.638835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.760 [2024-07-14 04:02:23.638995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.760 [2024-07-14 04:02:23.639021] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.760 qpair failed and we were unable to recover it. 00:30:04.760 [2024-07-14 04:02:23.639200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.760 [2024-07-14 04:02:23.639350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.760 [2024-07-14 04:02:23.639382] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.760 qpair failed and we were unable to recover it. 
00:30:04.760 [2024-07-14 04:02:23.639546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.760 [2024-07-14 04:02:23.639749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.760 [2024-07-14 04:02:23.639773] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.760 qpair failed and we were unable to recover it. 00:30:04.760 [2024-07-14 04:02:23.639986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.760 [2024-07-14 04:02:23.640135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.760 [2024-07-14 04:02:23.640159] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.760 qpair failed and we were unable to recover it. 00:30:04.760 [2024-07-14 04:02:23.640330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.760 [2024-07-14 04:02:23.640471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.760 [2024-07-14 04:02:23.640495] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.760 qpair failed and we were unable to recover it. 00:30:04.760 [2024-07-14 04:02:23.640669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.760 [2024-07-14 04:02:23.640815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.760 [2024-07-14 04:02:23.640839] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.760 qpair failed and we were unable to recover it. 00:30:04.760 [2024-07-14 04:02:23.641000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.760 [2024-07-14 04:02:23.641176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.760 [2024-07-14 04:02:23.641200] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.760 qpair failed and we were unable to recover it. 00:30:04.760 [2024-07-14 04:02:23.641383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.760 [2024-07-14 04:02:23.641582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.760 [2024-07-14 04:02:23.641606] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.760 qpair failed and we were unable to recover it. 00:30:04.760 [2024-07-14 04:02:23.641813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.760 [2024-07-14 04:02:23.641957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.760 [2024-07-14 04:02:23.641982] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.760 qpair failed and we were unable to recover it. 
00:30:04.760 [2024-07-14 04:02:23.642137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.760 [2024-07-14 04:02:23.642287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.760 [2024-07-14 04:02:23.642312] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.760 qpair failed and we were unable to recover it. 00:30:04.760 [2024-07-14 04:02:23.642488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.760 [2024-07-14 04:02:23.642688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.760 [2024-07-14 04:02:23.642712] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.760 qpair failed and we were unable to recover it. 00:30:04.760 [2024-07-14 04:02:23.642886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.760 [2024-07-14 04:02:23.643024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.760 [2024-07-14 04:02:23.643048] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.760 qpair failed and we were unable to recover it. 00:30:04.760 [2024-07-14 04:02:23.643204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.760 [2024-07-14 04:02:23.643408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.760 [2024-07-14 04:02:23.643432] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.760 qpair failed and we were unable to recover it. 00:30:04.760 [2024-07-14 04:02:23.643630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.760 [2024-07-14 04:02:23.643796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.761 [2024-07-14 04:02:23.643820] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.761 qpair failed and we were unable to recover it. 00:30:04.761 [2024-07-14 04:02:23.644015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.761 [2024-07-14 04:02:23.644171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.761 [2024-07-14 04:02:23.644195] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.761 qpair failed and we were unable to recover it. 00:30:04.761 [2024-07-14 04:02:23.644373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.761 [2024-07-14 04:02:23.644512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.761 [2024-07-14 04:02:23.644536] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.761 qpair failed and we were unable to recover it. 
00:30:04.761 [2024-07-14 04:02:23.644718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.761 [2024-07-14 04:02:23.644921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.761 [2024-07-14 04:02:23.644945] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.761 qpair failed and we were unable to recover it. 00:30:04.761 [2024-07-14 04:02:23.645092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.761 [2024-07-14 04:02:23.645302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.761 [2024-07-14 04:02:23.645333] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.761 qpair failed and we were unable to recover it. 00:30:04.761 [2024-07-14 04:02:23.645518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.761 [2024-07-14 04:02:23.645692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.761 [2024-07-14 04:02:23.645716] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.761 qpair failed and we were unable to recover it. 00:30:04.761 [2024-07-14 04:02:23.645938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.761 [2024-07-14 04:02:23.646088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.761 [2024-07-14 04:02:23.646112] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.761 qpair failed and we were unable to recover it. 00:30:04.761 [2024-07-14 04:02:23.646253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.761 [2024-07-14 04:02:23.646426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.761 [2024-07-14 04:02:23.646450] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.761 qpair failed and we were unable to recover it. 00:30:04.761 [2024-07-14 04:02:23.646625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.761 [2024-07-14 04:02:23.646774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.761 [2024-07-14 04:02:23.646798] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.761 qpair failed and we were unable to recover it. 00:30:04.761 [2024-07-14 04:02:23.646967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.761 [2024-07-14 04:02:23.647114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.761 [2024-07-14 04:02:23.647138] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.761 qpair failed and we were unable to recover it. 
00:30:04.761 [2024-07-14 04:02:23.647313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.761 [2024-07-14 04:02:23.647469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.761 [2024-07-14 04:02:23.647493] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.761 qpair failed and we were unable to recover it. 00:30:04.761 [2024-07-14 04:02:23.647679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.761 [2024-07-14 04:02:23.647860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.761 [2024-07-14 04:02:23.647892] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.761 qpair failed and we were unable to recover it. 00:30:04.761 [2024-07-14 04:02:23.648066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.761 [2024-07-14 04:02:23.648219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.761 [2024-07-14 04:02:23.648243] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.761 qpair failed and we were unable to recover it. 00:30:04.761 [2024-07-14 04:02:23.648438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.761 [2024-07-14 04:02:23.648645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.761 [2024-07-14 04:02:23.648670] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.761 qpair failed and we were unable to recover it. 00:30:04.761 [2024-07-14 04:02:23.648857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.761 [2024-07-14 04:02:23.649045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.761 [2024-07-14 04:02:23.649073] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.761 qpair failed and we were unable to recover it. 00:30:04.761 [2024-07-14 04:02:23.649297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.761 [2024-07-14 04:02:23.649472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.761 [2024-07-14 04:02:23.649497] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.761 qpair failed and we were unable to recover it. 00:30:04.761 [2024-07-14 04:02:23.649651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.761 [2024-07-14 04:02:23.649822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.761 [2024-07-14 04:02:23.649846] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.761 qpair failed and we were unable to recover it. 
00:30:04.761 [2024-07-14 04:02:23.650022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.761 [2024-07-14 04:02:23.650172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.761 [2024-07-14 04:02:23.650196] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.761 qpair failed and we were unable to recover it. 00:30:04.761 [2024-07-14 04:02:23.650354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.761 [2024-07-14 04:02:23.650525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.761 [2024-07-14 04:02:23.650549] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.761 qpair failed and we were unable to recover it. 00:30:04.761 [2024-07-14 04:02:23.650721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.761 [2024-07-14 04:02:23.650896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.761 [2024-07-14 04:02:23.650922] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.761 qpair failed and we were unable to recover it. 00:30:04.761 [2024-07-14 04:02:23.651067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.761 [2024-07-14 04:02:23.651223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.761 [2024-07-14 04:02:23.651247] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.761 qpair failed and we were unable to recover it. 00:30:04.761 [2024-07-14 04:02:23.651420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.761 [2024-07-14 04:02:23.651562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.761 [2024-07-14 04:02:23.651587] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.761 qpair failed and we were unable to recover it. 00:30:04.761 [2024-07-14 04:02:23.651762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.761 [2024-07-14 04:02:23.651913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.761 [2024-07-14 04:02:23.651938] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.761 qpair failed and we were unable to recover it. 00:30:04.761 [2024-07-14 04:02:23.652120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.761 [2024-07-14 04:02:23.652292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.761 [2024-07-14 04:02:23.652316] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.761 qpair failed and we were unable to recover it. 
00:30:04.761 [2024-07-14 04:02:23.652460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.761 [2024-07-14 04:02:23.652661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.761 [2024-07-14 04:02:23.652685] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.761 qpair failed and we were unable to recover it. 00:30:04.761 [2024-07-14 04:02:23.652894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.761 [2024-07-14 04:02:23.653052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.761 [2024-07-14 04:02:23.653077] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.761 qpair failed and we were unable to recover it. 00:30:04.761 [2024-07-14 04:02:23.653293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.761 [2024-07-14 04:02:23.653443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.761 [2024-07-14 04:02:23.653467] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.761 qpair failed and we were unable to recover it. 00:30:04.761 [2024-07-14 04:02:23.653640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.761 [2024-07-14 04:02:23.653846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.761 [2024-07-14 04:02:23.653877] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.761 qpair failed and we were unable to recover it. 00:30:04.761 [2024-07-14 04:02:23.654028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.761 [2024-07-14 04:02:23.654177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.761 [2024-07-14 04:02:23.654202] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.761 qpair failed and we were unable to recover it. 00:30:04.761 [2024-07-14 04:02:23.654409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.761 [2024-07-14 04:02:23.654588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.761 [2024-07-14 04:02:23.654612] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.761 qpair failed and we were unable to recover it. 00:30:04.761 [2024-07-14 04:02:23.654796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.761 [2024-07-14 04:02:23.654972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.761 [2024-07-14 04:02:23.654997] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.761 qpair failed and we were unable to recover it. 
00:30:04.761 [2024-07-14 04:02:23.655138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.762 [2024-07-14 04:02:23.655294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.762 [2024-07-14 04:02:23.655317] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.762 qpair failed and we were unable to recover it. 00:30:04.762 [2024-07-14 04:02:23.655484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.762 [2024-07-14 04:02:23.655686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.762 [2024-07-14 04:02:23.655710] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.762 qpair failed and we were unable to recover it. 00:30:04.762 [2024-07-14 04:02:23.655918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.762 [2024-07-14 04:02:23.656089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.762 [2024-07-14 04:02:23.656113] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.762 qpair failed and we were unable to recover it. 00:30:04.762 [2024-07-14 04:02:23.656257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.762 [2024-07-14 04:02:23.656413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.762 [2024-07-14 04:02:23.656438] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.762 qpair failed and we were unable to recover it. 00:30:04.762 [2024-07-14 04:02:23.656670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.762 [2024-07-14 04:02:23.656844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.762 [2024-07-14 04:02:23.656889] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.762 qpair failed and we were unable to recover it. 00:30:04.762 [2024-07-14 04:02:23.657075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.762 [2024-07-14 04:02:23.657244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.762 [2024-07-14 04:02:23.657269] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.762 qpair failed and we were unable to recover it. 00:30:04.762 [2024-07-14 04:02:23.657572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.762 [2024-07-14 04:02:23.657719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.762 [2024-07-14 04:02:23.657743] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.762 qpair failed and we were unable to recover it. 
00:30:04.762 [2024-07-14 04:02:23.657960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.762 [2024-07-14 04:02:23.658114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.762 [2024-07-14 04:02:23.658139] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.762 qpair failed and we were unable to recover it. 00:30:04.762 [2024-07-14 04:02:23.658312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.762 [2024-07-14 04:02:23.658489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.762 [2024-07-14 04:02:23.658514] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.762 qpair failed and we were unable to recover it. 00:30:04.762 [2024-07-14 04:02:23.658669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.762 [2024-07-14 04:02:23.658844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.762 [2024-07-14 04:02:23.658875] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.762 qpair failed and we were unable to recover it. 00:30:04.762 [2024-07-14 04:02:23.659032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.762 [2024-07-14 04:02:23.659239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.762 [2024-07-14 04:02:23.659263] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.762 qpair failed and we were unable to recover it. 00:30:04.762 [2024-07-14 04:02:23.659413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.762 [2024-07-14 04:02:23.659598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.762 [2024-07-14 04:02:23.659636] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.762 qpair failed and we were unable to recover it. 00:30:04.762 [2024-07-14 04:02:23.659824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.762 [2024-07-14 04:02:23.659983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.762 [2024-07-14 04:02:23.660009] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.762 qpair failed and we were unable to recover it. 00:30:04.762 [2024-07-14 04:02:23.660161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.762 [2024-07-14 04:02:23.660338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.762 [2024-07-14 04:02:23.660363] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.762 qpair failed and we were unable to recover it. 
00:30:04.762 [2024-07-14 04:02:23.660545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.762 [2024-07-14 04:02:23.660731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.762 [2024-07-14 04:02:23.660755] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.762 qpair failed and we were unable to recover it. 00:30:04.762 [2024-07-14 04:02:23.660960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.762 [2024-07-14 04:02:23.661143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.762 [2024-07-14 04:02:23.661178] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.762 qpair failed and we were unable to recover it. 00:30:04.762 [2024-07-14 04:02:23.661355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.762 [2024-07-14 04:02:23.661504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.762 [2024-07-14 04:02:23.661528] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.762 qpair failed and we were unable to recover it. 00:30:04.762 [2024-07-14 04:02:23.661707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.762 [2024-07-14 04:02:23.661897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.762 [2024-07-14 04:02:23.661923] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.762 qpair failed and we were unable to recover it. 00:30:04.762 [2024-07-14 04:02:23.662133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.762 [2024-07-14 04:02:23.662305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.762 [2024-07-14 04:02:23.662329] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.762 qpair failed and we were unable to recover it. 00:30:04.762 [2024-07-14 04:02:23.662514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.762 [2024-07-14 04:02:23.662663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.762 [2024-07-14 04:02:23.662687] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.762 qpair failed and we were unable to recover it. 00:30:04.762 [2024-07-14 04:02:23.662890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.762 [2024-07-14 04:02:23.663083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.762 [2024-07-14 04:02:23.663109] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:04.762 qpair failed and we were unable to recover it. 
00:30:04.762 [2024-07-14 04:02:23.663314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.029 [2024-07-14 04:02:23.663494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.029 [2024-07-14 04:02:23.663519] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:05.029 qpair failed and we were unable to recover it. 00:30:05.029 [2024-07-14 04:02:23.663701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.029 [2024-07-14 04:02:23.663881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.029 [2024-07-14 04:02:23.663906] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:05.029 qpair failed and we were unable to recover it. 00:30:05.029 [2024-07-14 04:02:23.664088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.029 [2024-07-14 04:02:23.664261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.029 [2024-07-14 04:02:23.664285] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:05.029 qpair failed and we were unable to recover it. 00:30:05.029 [2024-07-14 04:02:23.664464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.029 [2024-07-14 04:02:23.664641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.029 [2024-07-14 04:02:23.664667] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:05.029 qpair failed and we were unable to recover it. 00:30:05.029 [2024-07-14 04:02:23.664825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.029 [2024-07-14 04:02:23.665050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.029 [2024-07-14 04:02:23.665076] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:05.029 qpair failed and we were unable to recover it. 00:30:05.029 [2024-07-14 04:02:23.665287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.029 [2024-07-14 04:02:23.665465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.029 [2024-07-14 04:02:23.665490] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:05.029 qpair failed and we were unable to recover it. 00:30:05.029 [2024-07-14 04:02:23.665657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.029 [2024-07-14 04:02:23.665838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.029 [2024-07-14 04:02:23.665864] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:05.029 qpair failed and we were unable to recover it. 
00:30:05.029 [2024-07-14 04:02:23.666056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.029 [2024-07-14 04:02:23.666208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.029 [2024-07-14 04:02:23.666233] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:05.029 qpair failed and we were unable to recover it. 00:30:05.029 [2024-07-14 04:02:23.666393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.029 [2024-07-14 04:02:23.666545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.029 [2024-07-14 04:02:23.666570] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:05.029 qpair failed and we were unable to recover it. 00:30:05.029 [2024-07-14 04:02:23.666776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.029 [2024-07-14 04:02:23.666964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.029 [2024-07-14 04:02:23.666989] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:05.029 qpair failed and we were unable to recover it. 00:30:05.029 [2024-07-14 04:02:23.667201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.029 [2024-07-14 04:02:23.667387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.029 [2024-07-14 04:02:23.667412] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:05.029 qpair failed and we were unable to recover it. 00:30:05.029 [2024-07-14 04:02:23.667589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.029 [2024-07-14 04:02:23.667759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.029 [2024-07-14 04:02:23.667783] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:05.029 qpair failed and we were unable to recover it. 00:30:05.029 [2024-07-14 04:02:23.667988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.029 [2024-07-14 04:02:23.668173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.029 [2024-07-14 04:02:23.668199] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:05.029 qpair failed and we were unable to recover it. 00:30:05.029 [2024-07-14 04:02:23.668345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.029 [2024-07-14 04:02:23.668547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.029 [2024-07-14 04:02:23.668575] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:05.029 qpair failed and we were unable to recover it. 
00:30:05.029 [2024-07-14 04:02:23.668756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.029 [2024-07-14 04:02:23.668896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.029 [2024-07-14 04:02:23.668921] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:05.029 qpair failed and we were unable to recover it. 00:30:05.029 [2024-07-14 04:02:23.669095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.029 [2024-07-14 04:02:23.669302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.029 [2024-07-14 04:02:23.669326] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:05.029 qpair failed and we were unable to recover it. 00:30:05.029 [2024-07-14 04:02:23.669470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.029 [2024-07-14 04:02:23.669682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.029 [2024-07-14 04:02:23.669706] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:05.029 qpair failed and we were unable to recover it. 00:30:05.029 [2024-07-14 04:02:23.669860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.029 [2024-07-14 04:02:23.670022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.029 [2024-07-14 04:02:23.670048] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:05.029 qpair failed and we were unable to recover it. 00:30:05.029 [2024-07-14 04:02:23.670227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.029 [2024-07-14 04:02:23.670429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.029 [2024-07-14 04:02:23.670453] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:05.029 qpair failed and we were unable to recover it. 00:30:05.029 [2024-07-14 04:02:23.670628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.029 [2024-07-14 04:02:23.670810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.029 [2024-07-14 04:02:23.670834] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:05.029 qpair failed and we were unable to recover it. 00:30:05.029 [2024-07-14 04:02:23.671028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.029 [2024-07-14 04:02:23.671229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.029 [2024-07-14 04:02:23.671254] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:05.029 qpair failed and we were unable to recover it. 
00:30:05.029 [2024-07-14 04:02:23.671437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.029 [2024-07-14 04:02:23.671589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.029 [2024-07-14 04:02:23.671613] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:05.029 qpair failed and we were unable to recover it. 00:30:05.029 [2024-07-14 04:02:23.671785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.029 [2024-07-14 04:02:23.672004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.029 [2024-07-14 04:02:23.672031] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:05.029 qpair failed and we were unable to recover it. 00:30:05.029 [2024-07-14 04:02:23.672186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.029 [2024-07-14 04:02:23.672364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.029 [2024-07-14 04:02:23.672388] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:05.029 qpair failed and we were unable to recover it. 00:30:05.029 [2024-07-14 04:02:23.672544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.029 [2024-07-14 04:02:23.672744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.029 [2024-07-14 04:02:23.672769] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:05.029 qpair failed and we were unable to recover it. 00:30:05.029 [2024-07-14 04:02:23.672950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.029 [2024-07-14 04:02:23.673158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.029 [2024-07-14 04:02:23.673184] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:05.029 qpair failed and we were unable to recover it. 00:30:05.029 [2024-07-14 04:02:23.673362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.029 [2024-07-14 04:02:23.673503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.029 [2024-07-14 04:02:23.673527] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:05.029 qpair failed and we were unable to recover it. 00:30:05.029 [2024-07-14 04:02:23.673703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.029 [2024-07-14 04:02:23.673886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.029 [2024-07-14 04:02:23.673912] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:05.029 qpair failed and we were unable to recover it. 
00:30:05.029 [2024-07-14 04:02:23.674071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.029 [2024-07-14 04:02:23.674248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.029 [2024-07-14 04:02:23.674273] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:05.029 qpair failed and we were unable to recover it. 00:30:05.029 [2024-07-14 04:02:23.674478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.029 [2024-07-14 04:02:23.674681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.029 [2024-07-14 04:02:23.674705] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:05.029 qpair failed and we were unable to recover it. 00:30:05.029 [2024-07-14 04:02:23.674915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.029 [2024-07-14 04:02:23.675064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.029 [2024-07-14 04:02:23.675088] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:05.029 qpair failed and we were unable to recover it. 00:30:05.029 [2024-07-14 04:02:23.675269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.029 [2024-07-14 04:02:23.675446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.029 [2024-07-14 04:02:23.675471] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:05.029 qpair failed and we were unable to recover it. 00:30:05.029 [2024-07-14 04:02:23.675650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.029 [2024-07-14 04:02:23.675826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.029 [2024-07-14 04:02:23.675851] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:05.029 qpair failed and we were unable to recover it. 00:30:05.029 [2024-07-14 04:02:23.676041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.029 [2024-07-14 04:02:23.676196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.029 [2024-07-14 04:02:23.676221] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:05.029 qpair failed and we were unable to recover it. 00:30:05.029 [2024-07-14 04:02:23.676404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.029 [2024-07-14 04:02:23.676585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.029 [2024-07-14 04:02:23.676609] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:05.029 qpair failed and we were unable to recover it. 
00:30:05.029 [2024-07-14 04:02:23.676791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.029 [2024-07-14 04:02:23.676949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.029 [2024-07-14 04:02:23.676975] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:05.029 qpair failed and we were unable to recover it. 00:30:05.029 [2024-07-14 04:02:23.677122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.029 [2024-07-14 04:02:23.677299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.029 [2024-07-14 04:02:23.677324] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:05.029 qpair failed and we were unable to recover it. 00:30:05.029 [2024-07-14 04:02:23.677526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.030 [2024-07-14 04:02:23.677708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.030 [2024-07-14 04:02:23.677732] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:05.030 qpair failed and we were unable to recover it. 00:30:05.030 [2024-07-14 04:02:23.677906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.030 [2024-07-14 04:02:23.678085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.030 [2024-07-14 04:02:23.678110] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:05.030 qpair failed and we were unable to recover it. 00:30:05.030 [2024-07-14 04:02:23.678249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.030 [2024-07-14 04:02:23.678422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.030 [2024-07-14 04:02:23.678446] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:05.030 qpair failed and we were unable to recover it. 00:30:05.030 [2024-07-14 04:02:23.678599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.030 [2024-07-14 04:02:23.678737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.030 [2024-07-14 04:02:23.678762] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:05.030 qpair failed and we were unable to recover it. 00:30:05.030 [2024-07-14 04:02:23.678971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.030 [2024-07-14 04:02:23.679152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.030 [2024-07-14 04:02:23.679178] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:05.030 qpair failed and we were unable to recover it. 
00:30:05.030 [2024-07-14 04:02:23.679337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.030 [2024-07-14 04:02:23.679484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.030 [2024-07-14 04:02:23.679509] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:05.030 qpair failed and we were unable to recover it. 00:30:05.030 [2024-07-14 04:02:23.679656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.030 [2024-07-14 04:02:23.679860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.030 [2024-07-14 04:02:23.679891] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:05.030 qpair failed and we were unable to recover it. 00:30:05.030 [2024-07-14 04:02:23.680070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.030 [2024-07-14 04:02:23.680212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.030 [2024-07-14 04:02:23.680237] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:05.030 qpair failed and we were unable to recover it. 00:30:05.030 [2024-07-14 04:02:23.680446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.030 [2024-07-14 04:02:23.680624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.030 [2024-07-14 04:02:23.680649] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:05.030 qpair failed and we were unable to recover it. 00:30:05.030 [2024-07-14 04:02:23.680799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.030 [2024-07-14 04:02:23.680950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.030 [2024-07-14 04:02:23.680976] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:05.030 qpair failed and we were unable to recover it. 00:30:05.030 [2024-07-14 04:02:23.681115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.030 [2024-07-14 04:02:23.681288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.030 [2024-07-14 04:02:23.681313] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:05.030 qpair failed and we were unable to recover it. 00:30:05.030 [2024-07-14 04:02:23.681462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.030 [2024-07-14 04:02:23.681640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.030 [2024-07-14 04:02:23.681664] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:05.030 qpair failed and we were unable to recover it. 
00:30:05.030 [2024-07-14 04:02:23.681848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.030 [2024-07-14 04:02:23.682002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.030 [2024-07-14 04:02:23.682028] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:05.030 qpair failed and we were unable to recover it. 00:30:05.030 [2024-07-14 04:02:23.682202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.030 [2024-07-14 04:02:23.682377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.030 [2024-07-14 04:02:23.682401] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:05.030 qpair failed and we were unable to recover it. 00:30:05.030 [2024-07-14 04:02:23.682571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.030 [2024-07-14 04:02:23.682776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.030 [2024-07-14 04:02:23.682800] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:05.030 qpair failed and we were unable to recover it. 00:30:05.030 [2024-07-14 04:02:23.682975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.030 [2024-07-14 04:02:23.683151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.030 [2024-07-14 04:02:23.683176] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:05.030 qpair failed and we were unable to recover it. 00:30:05.030 [2024-07-14 04:02:23.683365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.030 [2024-07-14 04:02:23.683542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.030 [2024-07-14 04:02:23.683567] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:05.030 qpair failed and we were unable to recover it. 00:30:05.030 [2024-07-14 04:02:23.683744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.030 [2024-07-14 04:02:23.683941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.030 [2024-07-14 04:02:23.683970] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:05.030 qpair failed and we were unable to recover it. 00:30:05.030 [2024-07-14 04:02:23.684141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.030 [2024-07-14 04:02:23.684316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.030 [2024-07-14 04:02:23.684341] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:05.030 qpair failed and we were unable to recover it. 
00:30:05.030 [2024-07-14 04:02:23.684521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.030 [2024-07-14 04:02:23.684726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.030 [2024-07-14 04:02:23.684751] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:05.030 qpair failed and we were unable to recover it. 00:30:05.030 [2024-07-14 04:02:23.684966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.030 [2024-07-14 04:02:23.685148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.030 [2024-07-14 04:02:23.685173] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:05.030 qpair failed and we were unable to recover it. 00:30:05.030 [2024-07-14 04:02:23.685376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.030 [2024-07-14 04:02:23.685529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.030 [2024-07-14 04:02:23.685553] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:05.030 qpair failed and we were unable to recover it. 00:30:05.030 [2024-07-14 04:02:23.685732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.030 [2024-07-14 04:02:23.685912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.030 [2024-07-14 04:02:23.685937] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:05.030 qpair failed and we were unable to recover it. 00:30:05.030 [2024-07-14 04:02:23.686116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.030 [2024-07-14 04:02:23.686289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.030 [2024-07-14 04:02:23.686314] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:05.030 qpair failed and we were unable to recover it. 00:30:05.030 [2024-07-14 04:02:23.686491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.030 [2024-07-14 04:02:23.686633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.030 [2024-07-14 04:02:23.686658] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:05.030 qpair failed and we were unable to recover it. 00:30:05.030 [2024-07-14 04:02:23.686848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.030 [2024-07-14 04:02:23.687005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.030 [2024-07-14 04:02:23.687031] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:05.030 qpair failed and we were unable to recover it. 
00:30:05.030 [2024-07-14 04:02:23.687233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.030 [2024-07-14 04:02:23.687371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.030 [2024-07-14 04:02:23.687395] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:05.030 qpair failed and we were unable to recover it. 00:30:05.030 [2024-07-14 04:02:23.687564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.030 [2024-07-14 04:02:23.687742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.030 [2024-07-14 04:02:23.687766] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:05.030 qpair failed and we were unable to recover it. 00:30:05.030 [2024-07-14 04:02:23.687951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.030 [2024-07-14 04:02:23.688134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.030 [2024-07-14 04:02:23.688159] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:05.030 qpair failed and we were unable to recover it. 00:30:05.030 [2024-07-14 04:02:23.688331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.030 [2024-07-14 04:02:23.688475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.030 [2024-07-14 04:02:23.688499] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:05.030 qpair failed and we were unable to recover it. 00:30:05.030 [2024-07-14 04:02:23.688646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.030 [2024-07-14 04:02:23.688823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.030 [2024-07-14 04:02:23.688848] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:05.030 qpair failed and we were unable to recover it. 00:30:05.030 [2024-07-14 04:02:23.689042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.030 [2024-07-14 04:02:23.689248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.030 [2024-07-14 04:02:23.689273] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:05.030 qpair failed and we were unable to recover it. 00:30:05.030 [2024-07-14 04:02:23.689457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.030 [2024-07-14 04:02:23.689663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.030 [2024-07-14 04:02:23.689688] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:05.030 qpair failed and we were unable to recover it. 
00:30:05.030 [2024-07-14 04:02:23.689873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.030 [2024-07-14 04:02:23.690072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.030 [2024-07-14 04:02:23.690097] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:05.030 qpair failed and we were unable to recover it. 00:30:05.030 [2024-07-14 04:02:23.690272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.030 [2024-07-14 04:02:23.690419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.030 [2024-07-14 04:02:23.690443] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:05.030 qpair failed and we were unable to recover it. 00:30:05.030 [2024-07-14 04:02:23.690617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.030 [2024-07-14 04:02:23.690786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.030 [2024-07-14 04:02:23.690811] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:05.030 qpair failed and we were unable to recover it. 00:30:05.030 [2024-07-14 04:02:23.690996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.030 [2024-07-14 04:02:23.691175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.030 [2024-07-14 04:02:23.691200] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:05.030 qpair failed and we were unable to recover it. 00:30:05.030 [2024-07-14 04:02:23.691407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.030 [2024-07-14 04:02:23.691609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.030 [2024-07-14 04:02:23.691634] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:05.030 qpair failed and we were unable to recover it. 00:30:05.030 [2024-07-14 04:02:23.691816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.030 [2024-07-14 04:02:23.691965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.030 [2024-07-14 04:02:23.691990] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:05.030 qpair failed and we were unable to recover it. 00:30:05.030 [2024-07-14 04:02:23.692188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.030 [2024-07-14 04:02:23.692394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.030 [2024-07-14 04:02:23.692419] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:05.030 qpair failed and we were unable to recover it. 
00:30:05.030 [2024-07-14 04:02:23.692574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.030 [2024-07-14 04:02:23.692722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.030 [2024-07-14 04:02:23.692747] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:05.030 qpair failed and we were unable to recover it. 00:30:05.030 [2024-07-14 04:02:23.692910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.030 [2024-07-14 04:02:23.693086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.030 [2024-07-14 04:02:23.693111] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:05.030 qpair failed and we were unable to recover it. 00:30:05.030 [2024-07-14 04:02:23.693257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.030 [2024-07-14 04:02:23.693435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.030 [2024-07-14 04:02:23.693460] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:05.030 qpair failed and we were unable to recover it. 00:30:05.030 [2024-07-14 04:02:23.693603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.030 [2024-07-14 04:02:23.693808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.030 [2024-07-14 04:02:23.693832] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:05.030 qpair failed and we were unable to recover it. 00:30:05.030 [2024-07-14 04:02:23.694019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.030 [2024-07-14 04:02:23.694191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.030 [2024-07-14 04:02:23.694216] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:05.030 qpair failed and we were unable to recover it. 00:30:05.030 [2024-07-14 04:02:23.694363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.030 [2024-07-14 04:02:23.694546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.030 [2024-07-14 04:02:23.694571] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:05.030 qpair failed and we were unable to recover it. 00:30:05.030 [2024-07-14 04:02:23.694723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.030 [2024-07-14 04:02:23.694880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.030 [2024-07-14 04:02:23.694906] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:05.030 qpair failed and we were unable to recover it. 
00:30:05.030 [2024-07-14 04:02:23.695078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.030 [2024-07-14 04:02:23.695263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.030 [2024-07-14 04:02:23.695287] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:05.030 qpair failed and we were unable to recover it. 00:30:05.030 [2024-07-14 04:02:23.695439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.030 [2024-07-14 04:02:23.695613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.030 [2024-07-14 04:02:23.695637] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:05.030 qpair failed and we were unable to recover it. 00:30:05.030 [2024-07-14 04:02:23.695817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.030 [2024-07-14 04:02:23.695967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.030 [2024-07-14 04:02:23.695992] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:05.030 qpair failed and we were unable to recover it. 00:30:05.030 [2024-07-14 04:02:23.696169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.030 [2024-07-14 04:02:23.696345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.030 [2024-07-14 04:02:23.696369] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:05.030 qpair failed and we were unable to recover it. 00:30:05.030 [2024-07-14 04:02:23.696546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.030 [2024-07-14 04:02:23.696752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.030 [2024-07-14 04:02:23.696777] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:05.030 qpair failed and we were unable to recover it. 00:30:05.030 [2024-07-14 04:02:23.696956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.030 [2024-07-14 04:02:23.697165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.030 [2024-07-14 04:02:23.697190] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:05.030 qpair failed and we were unable to recover it. 00:30:05.030 [2024-07-14 04:02:23.697343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.030 [2024-07-14 04:02:23.697545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.030 [2024-07-14 04:02:23.697570] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:05.030 qpair failed and we were unable to recover it. 
00:30:05.030 [2024-07-14 04:02:23.697748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.030 [2024-07-14 04:02:23.697901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.030 [2024-07-14 04:02:23.697926] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:05.030 qpair failed and we were unable to recover it. 00:30:05.030 [2024-07-14 04:02:23.698093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.030 [2024-07-14 04:02:23.698275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.030 [2024-07-14 04:02:23.698300] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:05.030 qpair failed and we were unable to recover it. 00:30:05.030 [2024-07-14 04:02:23.698473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.030 [2024-07-14 04:02:23.698646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.030 [2024-07-14 04:02:23.698670] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:05.030 qpair failed and we were unable to recover it. 00:30:05.030 [2024-07-14 04:02:23.698854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.030 [2024-07-14 04:02:23.699020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.030 [2024-07-14 04:02:23.699044] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:05.030 qpair failed and we were unable to recover it. 00:30:05.030 [2024-07-14 04:02:23.699220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.030 [2024-07-14 04:02:23.699400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.030 [2024-07-14 04:02:23.699424] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:05.030 qpair failed and we were unable to recover it. 00:30:05.030 [2024-07-14 04:02:23.699573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.030 [2024-07-14 04:02:23.699778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.030 [2024-07-14 04:02:23.699803] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:05.030 qpair failed and we were unable to recover it. 00:30:05.030 [2024-07-14 04:02:23.699955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.030 [2024-07-14 04:02:23.700154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.030 [2024-07-14 04:02:23.700179] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:05.030 qpair failed and we were unable to recover it. 
00:30:05.030 [2024-07-14 04:02:23.700357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.030 [2024-07-14 04:02:23.700530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.030 [2024-07-14 04:02:23.700554] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:05.030 qpair failed and we were unable to recover it. 00:30:05.030 [2024-07-14 04:02:23.700733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.031 [2024-07-14 04:02:23.700890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.031 [2024-07-14 04:02:23.700916] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:05.031 qpair failed and we were unable to recover it. 00:30:05.031 [2024-07-14 04:02:23.701127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.031 [2024-07-14 04:02:23.701267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.031 [2024-07-14 04:02:23.701292] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:05.031 qpair failed and we were unable to recover it. 00:30:05.031 [2024-07-14 04:02:23.701444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.031 [2024-07-14 04:02:23.701613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.031 [2024-07-14 04:02:23.701638] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:05.031 qpair failed and we were unable to recover it. 00:30:05.031 [2024-07-14 04:02:23.701788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.031 [2024-07-14 04:02:23.701996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.031 [2024-07-14 04:02:23.702025] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:05.031 qpair failed and we were unable to recover it. 00:30:05.031 [2024-07-14 04:02:23.702203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.031 [2024-07-14 04:02:23.702381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.031 [2024-07-14 04:02:23.702405] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:05.031 qpair failed and we were unable to recover it. 00:30:05.031 [2024-07-14 04:02:23.702585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.031 [2024-07-14 04:02:23.702739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.031 [2024-07-14 04:02:23.702763] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:05.031 qpair failed and we were unable to recover it. 
00:30:05.031 [2024-07-14 04:02:23.702949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.031 [2024-07-14 04:02:23.703106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.031 [2024-07-14 04:02:23.703136] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:05.031 qpair failed and we were unable to recover it. 00:30:05.031 [2024-07-14 04:02:23.703338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.031 [2024-07-14 04:02:23.703528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.031 [2024-07-14 04:02:23.703552] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:05.031 qpair failed and we were unable to recover it. 00:30:05.031 [2024-07-14 04:02:23.703713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.031 [2024-07-14 04:02:23.703860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.031 [2024-07-14 04:02:23.703898] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:05.031 qpair failed and we were unable to recover it. 00:30:05.031 [2024-07-14 04:02:23.704133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.031 [2024-07-14 04:02:23.704333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.031 [2024-07-14 04:02:23.704358] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:05.031 qpair failed and we were unable to recover it. 00:30:05.031 [2024-07-14 04:02:23.704537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.031 [2024-07-14 04:02:23.704690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.031 [2024-07-14 04:02:23.704717] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:05.031 qpair failed and we were unable to recover it. 00:30:05.031 [2024-07-14 04:02:23.704897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.031 [2024-07-14 04:02:23.705105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.031 [2024-07-14 04:02:23.705129] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:05.031 qpair failed and we were unable to recover it. 00:30:05.031 [2024-07-14 04:02:23.705312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.031 [2024-07-14 04:02:23.705466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.031 [2024-07-14 04:02:23.705490] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:05.031 qpair failed and we were unable to recover it. 
00:30:05.031 [2024-07-14 04:02:23.705646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.031 [2024-07-14 04:02:23.705828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.031 [2024-07-14 04:02:23.705853] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:05.031 qpair failed and we were unable to recover it. 00:30:05.031 [2024-07-14 04:02:23.706046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.031 [2024-07-14 04:02:23.706247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.031 [2024-07-14 04:02:23.706271] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:05.031 qpair failed and we were unable to recover it. 00:30:05.031 [2024-07-14 04:02:23.706459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.031 [2024-07-14 04:02:23.706661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.031 [2024-07-14 04:02:23.706685] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:05.031 qpair failed and we were unable to recover it. 00:30:05.031 [2024-07-14 04:02:23.706829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.031 [2024-07-14 04:02:23.707027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.031 [2024-07-14 04:02:23.707052] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:05.031 qpair failed and we were unable to recover it. 00:30:05.031 [2024-07-14 04:02:23.707232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.031 [2024-07-14 04:02:23.707386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.031 [2024-07-14 04:02:23.707412] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:05.031 qpair failed and we were unable to recover it. 00:30:05.031 [2024-07-14 04:02:23.707566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.031 [2024-07-14 04:02:23.707708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.031 [2024-07-14 04:02:23.707733] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:05.031 qpair failed and we were unable to recover it. 00:30:05.031 [2024-07-14 04:02:23.707955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.031 [2024-07-14 04:02:23.708109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.031 [2024-07-14 04:02:23.708138] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:05.031 qpair failed and we were unable to recover it. 
00:30:05.031 [2024-07-14 04:02:23.708359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.031 [2024-07-14 04:02:23.708508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.031 [2024-07-14 04:02:23.708533] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:05.031 qpair failed and we were unable to recover it. 00:30:05.031 [2024-07-14 04:02:23.708709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.031 [2024-07-14 04:02:23.708857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.031 [2024-07-14 04:02:23.708902] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:05.031 qpair failed and we were unable to recover it. 00:30:05.031 [2024-07-14 04:02:23.709100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.031 [2024-07-14 04:02:23.709304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.031 [2024-07-14 04:02:23.709328] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:05.031 qpair failed and we were unable to recover it. 00:30:05.031 [2024-07-14 04:02:23.709542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.031 [2024-07-14 04:02:23.709716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.031 [2024-07-14 04:02:23.709740] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:05.031 qpair failed and we were unable to recover it. 00:30:05.031 [2024-07-14 04:02:23.709907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.031 [2024-07-14 04:02:23.710054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.031 [2024-07-14 04:02:23.710078] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:05.031 qpair failed and we were unable to recover it. 00:30:05.031 [2024-07-14 04:02:23.710280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.031 [2024-07-14 04:02:23.710421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.031 [2024-07-14 04:02:23.710445] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:05.031 qpair failed and we were unable to recover it. 00:30:05.031 [2024-07-14 04:02:23.710603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.031 [2024-07-14 04:02:23.710806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.031 [2024-07-14 04:02:23.710830] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:05.031 qpair failed and we were unable to recover it. 
00:30:05.031 [2024-07-14 04:02:23.711023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.031 [2024-07-14 04:02:23.711202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.031 [2024-07-14 04:02:23.711227] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:05.031 qpair failed and we were unable to recover it. 00:30:05.031 [2024-07-14 04:02:23.711405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.031 [2024-07-14 04:02:23.711593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.031 [2024-07-14 04:02:23.711617] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:05.031 qpair failed and we were unable to recover it. 00:30:05.031 [2024-07-14 04:02:23.711769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.031 [2024-07-14 04:02:23.711915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.031 [2024-07-14 04:02:23.711940] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:05.031 qpair failed and we were unable to recover it. 00:30:05.031 [2024-07-14 04:02:23.712124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.031 [2024-07-14 04:02:23.712274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.031 [2024-07-14 04:02:23.712300] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:05.031 qpair failed and we were unable to recover it. 00:30:05.031 [2024-07-14 04:02:23.712512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.031 [2024-07-14 04:02:23.712657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.031 [2024-07-14 04:02:23.712681] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:05.031 qpair failed and we were unable to recover it. 00:30:05.031 [2024-07-14 04:02:23.712858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.031 [2024-07-14 04:02:23.713044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.031 [2024-07-14 04:02:23.713070] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:05.031 qpair failed and we were unable to recover it. 00:30:05.031 [2024-07-14 04:02:23.713282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.031 [2024-07-14 04:02:23.713429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.031 [2024-07-14 04:02:23.713453] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:05.031 qpair failed and we were unable to recover it. 
00:30:05.031 [2024-07-14 04:02:23.713633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.031 [2024-07-14 04:02:23.713782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.031 [2024-07-14 04:02:23.713806] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:05.031 qpair failed and we were unable to recover it. 00:30:05.031 [2024-07-14 04:02:23.713987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.031 [2024-07-14 04:02:23.714171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.031 [2024-07-14 04:02:23.714195] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:05.031 qpair failed and we were unable to recover it. 00:30:05.031 [2024-07-14 04:02:23.714344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.031 [2024-07-14 04:02:23.714525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.031 [2024-07-14 04:02:23.714549] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:05.031 qpair failed and we were unable to recover it. 00:30:05.031 [2024-07-14 04:02:23.714729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.031 [2024-07-14 04:02:23.714880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.031 [2024-07-14 04:02:23.714905] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:05.031 qpair failed and we were unable to recover it. 00:30:05.031 [2024-07-14 04:02:23.715109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.031 [2024-07-14 04:02:23.715257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.031 [2024-07-14 04:02:23.715281] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:05.031 qpair failed and we were unable to recover it. 00:30:05.031 [2024-07-14 04:02:23.715457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.031 [2024-07-14 04:02:23.715618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.031 [2024-07-14 04:02:23.715642] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:05.031 qpair failed and we were unable to recover it. 00:30:05.031 [2024-07-14 04:02:23.715817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.031 [2024-07-14 04:02:23.715974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.031 [2024-07-14 04:02:23.715999] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:05.031 qpair failed and we were unable to recover it. 
00:30:05.031 [2024-07-14 04:02:23.716185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.031 [2024-07-14 04:02:23.716360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.031 [2024-07-14 04:02:23.716384] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:05.031 qpair failed and we were unable to recover it. 00:30:05.031 [2024-07-14 04:02:23.716531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.031 [2024-07-14 04:02:23.716732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.031 [2024-07-14 04:02:23.716756] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:05.031 qpair failed and we were unable to recover it. 00:30:05.031 [2024-07-14 04:02:23.716909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.031 [2024-07-14 04:02:23.717114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.031 [2024-07-14 04:02:23.717139] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:05.031 qpair failed and we were unable to recover it. 00:30:05.031 [2024-07-14 04:02:23.717338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.031 [2024-07-14 04:02:23.717510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.031 [2024-07-14 04:02:23.717535] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:05.031 qpair failed and we were unable to recover it. 00:30:05.031 [2024-07-14 04:02:23.717684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.031 [2024-07-14 04:02:23.717854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.031 [2024-07-14 04:02:23.717887] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:05.031 qpair failed and we were unable to recover it. 00:30:05.031 [2024-07-14 04:02:23.718073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.031 [2024-07-14 04:02:23.718217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.031 [2024-07-14 04:02:23.718241] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:05.031 qpair failed and we were unable to recover it. 00:30:05.031 [2024-07-14 04:02:23.718401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.031 [2024-07-14 04:02:23.718554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.031 [2024-07-14 04:02:23.718579] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:05.031 qpair failed and we were unable to recover it. 
00:30:05.031 [2024-07-14 04:02:23.718753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.031 [2024-07-14 04:02:23.718893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.031 [2024-07-14 04:02:23.718920] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:05.031 qpair failed and we were unable to recover it. 00:30:05.031 [2024-07-14 04:02:23.719085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.031 [2024-07-14 04:02:23.719289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.031 [2024-07-14 04:02:23.719313] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:05.031 qpair failed and we were unable to recover it. 00:30:05.031 [2024-07-14 04:02:23.719498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.031 [2024-07-14 04:02:23.719644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.031 [2024-07-14 04:02:23.719668] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:05.031 qpair failed and we were unable to recover it. 00:30:05.031 [2024-07-14 04:02:23.719817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.031 [2024-07-14 04:02:23.720000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.031 [2024-07-14 04:02:23.720025] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:05.031 qpair failed and we were unable to recover it. 00:30:05.031 [2024-07-14 04:02:23.720204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.031 [2024-07-14 04:02:23.720409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.031 [2024-07-14 04:02:23.720433] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:05.031 qpair failed and we were unable to recover it. 00:30:05.031 [2024-07-14 04:02:23.720615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.031 [2024-07-14 04:02:23.720754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.031 [2024-07-14 04:02:23.720779] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:05.031 qpair failed and we were unable to recover it. 00:30:05.031 [2024-07-14 04:02:23.720989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.031 [2024-07-14 04:02:23.721168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.032 [2024-07-14 04:02:23.721192] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:05.032 qpair failed and we were unable to recover it. 
00:30:05.032 [2024-07-14 04:02:23.721373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.032 [2024-07-14 04:02:23.721516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.032 [2024-07-14 04:02:23.721540] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:05.032 qpair failed and we were unable to recover it. 00:30:05.032 [2024-07-14 04:02:23.721691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.032 [2024-07-14 04:02:23.721832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.032 [2024-07-14 04:02:23.721857] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:05.032 qpair failed and we were unable to recover it. 00:30:05.032 [2024-07-14 04:02:23.722036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.032 [2024-07-14 04:02:23.722176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.032 [2024-07-14 04:02:23.722205] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:05.032 qpair failed and we were unable to recover it. 00:30:05.032 [2024-07-14 04:02:23.722407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.032 [2024-07-14 04:02:23.722605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.032 [2024-07-14 04:02:23.722629] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:05.032 qpair failed and we were unable to recover it. 00:30:05.032 [2024-07-14 04:02:23.722778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.032 [2024-07-14 04:02:23.722961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.032 [2024-07-14 04:02:23.722986] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:05.032 qpair failed and we were unable to recover it. 00:30:05.032 [2024-07-14 04:02:23.723140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.032 [2024-07-14 04:02:23.723317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.032 [2024-07-14 04:02:23.723342] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:05.032 qpair failed and we were unable to recover it. 00:30:05.032 [2024-07-14 04:02:23.723550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.032 [2024-07-14 04:02:23.723723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.032 [2024-07-14 04:02:23.723747] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:05.032 qpair failed and we were unable to recover it. 
00:30:05.032 [2024-07-14 04:02:23.723898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.032 [2024-07-14 04:02:23.724101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.032 [2024-07-14 04:02:23.724126] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:05.032 qpair failed and we were unable to recover it. 00:30:05.032 [2024-07-14 04:02:23.724273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.032 [2024-07-14 04:02:23.724429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.032 [2024-07-14 04:02:23.724453] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:05.032 qpair failed and we were unable to recover it. 00:30:05.032 [2024-07-14 04:02:23.724622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.032 [2024-07-14 04:02:23.724790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.032 [2024-07-14 04:02:23.724814] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:05.032 qpair failed and we were unable to recover it. 00:30:05.032 [2024-07-14 04:02:23.725011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.032 [2024-07-14 04:02:23.725201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.032 [2024-07-14 04:02:23.725225] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:05.032 qpair failed and we were unable to recover it. 00:30:05.032 [2024-07-14 04:02:23.725410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.032 [2024-07-14 04:02:23.725558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.032 [2024-07-14 04:02:23.725582] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:05.032 qpair failed and we were unable to recover it. 00:30:05.032 [2024-07-14 04:02:23.725785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.032 [2024-07-14 04:02:23.725997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.032 [2024-07-14 04:02:23.726022] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:05.032 qpair failed and we were unable to recover it. 00:30:05.032 [2024-07-14 04:02:23.726204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.032 [2024-07-14 04:02:23.726346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.032 [2024-07-14 04:02:23.726370] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:05.032 qpair failed and we were unable to recover it. 
00:30:05.032 [2024-07-14 04:02:23.726521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.032 [2024-07-14 04:02:23.726691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.032 [2024-07-14 04:02:23.726715] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:05.032 qpair failed and we were unable to recover it. 00:30:05.032 [2024-07-14 04:02:23.726872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.032 [2024-07-14 04:02:23.727082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.032 [2024-07-14 04:02:23.727106] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:05.032 qpair failed and we were unable to recover it. 00:30:05.032 [2024-07-14 04:02:23.727258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.032 [2024-07-14 04:02:23.727433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.032 [2024-07-14 04:02:23.727457] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:05.032 qpair failed and we were unable to recover it. 00:30:05.032 [2024-07-14 04:02:23.727635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.032 [2024-07-14 04:02:23.727785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.032 [2024-07-14 04:02:23.727810] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:05.032 qpair failed and we were unable to recover it. 00:30:05.032 [2024-07-14 04:02:23.727964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.032 [2024-07-14 04:02:23.728143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.032 [2024-07-14 04:02:23.728168] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:05.032 qpair failed and we were unable to recover it. 00:30:05.032 [2024-07-14 04:02:23.728340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.032 [2024-07-14 04:02:23.728514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.032 [2024-07-14 04:02:23.728538] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:05.032 qpair failed and we were unable to recover it. 00:30:05.032 [2024-07-14 04:02:23.728709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.032 [2024-07-14 04:02:23.728880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.032 [2024-07-14 04:02:23.728905] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:05.032 qpair failed and we were unable to recover it. 
00:30:05.032 [2024-07-14 04:02:23.729109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.032 [2024-07-14 04:02:23.729261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.032 [2024-07-14 04:02:23.729285] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:05.032 qpair failed and we were unable to recover it. 00:30:05.032 [2024-07-14 04:02:23.729453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.032 [2024-07-14 04:02:23.729629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.032 [2024-07-14 04:02:23.729654] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:05.032 qpair failed and we were unable to recover it. 00:30:05.032 [2024-07-14 04:02:23.729812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.032 [2024-07-14 04:02:23.730004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.032 [2024-07-14 04:02:23.730029] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:05.032 qpair failed and we were unable to recover it. 00:30:05.032 [2024-07-14 04:02:23.730187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.032 [2024-07-14 04:02:23.730336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.032 [2024-07-14 04:02:23.730360] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:05.032 qpair failed and we were unable to recover it. 00:30:05.032 [2024-07-14 04:02:23.730533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.032 [2024-07-14 04:02:23.730734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.032 [2024-07-14 04:02:23.730758] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:05.032 qpair failed and we were unable to recover it. 00:30:05.032 [2024-07-14 04:02:23.730957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.032 [2024-07-14 04:02:23.731130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.032 [2024-07-14 04:02:23.731158] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:05.032 qpair failed and we were unable to recover it. 00:30:05.032 [2024-07-14 04:02:23.731325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.032 [2024-07-14 04:02:23.731502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.032 [2024-07-14 04:02:23.731525] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:05.032 qpair failed and we were unable to recover it. 
00:30:05.032 [2024-07-14 04:02:23.731731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.032 [2024-07-14 04:02:23.731910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.032 [2024-07-14 04:02:23.731936] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:05.032 qpair failed and we were unable to recover it. 00:30:05.032 [2024-07-14 04:02:23.732111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.032 [2024-07-14 04:02:23.732284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.032 [2024-07-14 04:02:23.732308] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:05.032 qpair failed and we were unable to recover it. 00:30:05.032 [2024-07-14 04:02:23.732490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.032 [2024-07-14 04:02:23.732695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.032 [2024-07-14 04:02:23.732719] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:05.032 qpair failed and we were unable to recover it. 00:30:05.032 [2024-07-14 04:02:23.732927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.032 [2024-07-14 04:02:23.733137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.032 [2024-07-14 04:02:23.733161] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:05.032 qpair failed and we were unable to recover it. 00:30:05.032 [2024-07-14 04:02:23.733309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.032 [2024-07-14 04:02:23.733457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.032 [2024-07-14 04:02:23.733481] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:05.032 qpair failed and we were unable to recover it. 00:30:05.032 [2024-07-14 04:02:23.733654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.032 [2024-07-14 04:02:23.733832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.032 [2024-07-14 04:02:23.733857] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:05.032 qpair failed and we were unable to recover it. 00:30:05.032 [2024-07-14 04:02:23.734076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.032 [2024-07-14 04:02:23.734253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.032 [2024-07-14 04:02:23.734277] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:05.032 qpair failed and we were unable to recover it. 
00:30:05.032 [2024-07-14 04:02:23.734431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.032 [2024-07-14 04:02:23.734633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.032 [2024-07-14 04:02:23.734657] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:05.032 qpair failed and we were unable to recover it. 00:30:05.032 [2024-07-14 04:02:23.734838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.032 [2024-07-14 04:02:23.735035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.032 [2024-07-14 04:02:23.735061] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:05.032 qpair failed and we were unable to recover it. 00:30:05.032 [2024-07-14 04:02:23.735236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.032 [2024-07-14 04:02:23.735412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.032 [2024-07-14 04:02:23.735437] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:05.032 qpair failed and we were unable to recover it. 00:30:05.032 [2024-07-14 04:02:23.735639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.032 [2024-07-14 04:02:23.735815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.032 [2024-07-14 04:02:23.735839] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:05.032 qpair failed and we were unable to recover it. 00:30:05.032 [2024-07-14 04:02:23.736027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.032 [2024-07-14 04:02:23.736201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.032 [2024-07-14 04:02:23.736225] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:05.032 qpair failed and we were unable to recover it. 00:30:05.032 [2024-07-14 04:02:23.736383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.032 [2024-07-14 04:02:23.736558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.032 [2024-07-14 04:02:23.736583] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:05.032 qpair failed and we were unable to recover it. 00:30:05.032 [2024-07-14 04:02:23.736762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.032 [2024-07-14 04:02:23.736963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.032 [2024-07-14 04:02:23.736989] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:05.032 qpair failed and we were unable to recover it. 
00:30:05.032 [2024-07-14 04:02:23.737145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.032 [2024-07-14 04:02:23.737315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.032 [2024-07-14 04:02:23.737339] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:05.032 qpair failed and we were unable to recover it. 00:30:05.032 [2024-07-14 04:02:23.737489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.032 [2024-07-14 04:02:23.737663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.032 [2024-07-14 04:02:23.737691] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:05.032 qpair failed and we were unable to recover it. 00:30:05.032 [2024-07-14 04:02:23.737876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.032 [2024-07-14 04:02:23.738053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.032 [2024-07-14 04:02:23.738077] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:05.032 qpair failed and we were unable to recover it. 00:30:05.032 [2024-07-14 04:02:23.738290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.032 [2024-07-14 04:02:23.738492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.032 [2024-07-14 04:02:23.738517] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:05.032 qpair failed and we were unable to recover it. 00:30:05.032 [2024-07-14 04:02:23.738700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.032 [2024-07-14 04:02:23.738875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.032 [2024-07-14 04:02:23.738901] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:05.032 qpair failed and we were unable to recover it. 00:30:05.032 [2024-07-14 04:02:23.739089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.032 [2024-07-14 04:02:23.739269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.032 [2024-07-14 04:02:23.739294] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:05.032 qpair failed and we were unable to recover it. 00:30:05.032 [2024-07-14 04:02:23.739497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.032 [2024-07-14 04:02:23.739678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.032 [2024-07-14 04:02:23.739702] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:05.032 qpair failed and we were unable to recover it. 
00:30:05.032 [2024-07-14 04:02:23.739846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.032 [2024-07-14 04:02:23.740060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.032 [2024-07-14 04:02:23.740084] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:05.032 qpair failed and we were unable to recover it. 00:30:05.032 [2024-07-14 04:02:23.740238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.032 [2024-07-14 04:02:23.740388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.032 [2024-07-14 04:02:23.740412] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:05.032 qpair failed and we were unable to recover it. 00:30:05.032 [2024-07-14 04:02:23.740585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.032 [2024-07-14 04:02:23.740756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.032 [2024-07-14 04:02:23.740780] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:05.032 qpair failed and we were unable to recover it. 00:30:05.032 [2024-07-14 04:02:23.740963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.032 [2024-07-14 04:02:23.741149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.032 [2024-07-14 04:02:23.741188] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:05.032 qpair failed and we were unable to recover it. 00:30:05.032 [2024-07-14 04:02:23.741370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.032 [2024-07-14 04:02:23.741578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.032 [2024-07-14 04:02:23.741602] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:05.032 qpair failed and we were unable to recover it. 00:30:05.032 [2024-07-14 04:02:23.741786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.032 [2024-07-14 04:02:23.741958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.032 [2024-07-14 04:02:23.741983] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:05.032 qpair failed and we were unable to recover it. 00:30:05.032 [2024-07-14 04:02:23.742140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.032 [2024-07-14 04:02:23.742302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.032 [2024-07-14 04:02:23.742327] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:05.032 qpair failed and we were unable to recover it. 
00:30:05.032 [2024-07-14 04:02:23.742501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.032 [2024-07-14 04:02:23.742679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.032 [2024-07-14 04:02:23.742703] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:05.032 qpair failed and we were unable to recover it. 00:30:05.032 [2024-07-14 04:02:23.742926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.032 [2024-07-14 04:02:23.743110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.032 [2024-07-14 04:02:23.743136] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:05.032 qpair failed and we were unable to recover it. 00:30:05.032 [2024-07-14 04:02:23.743320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.032 [2024-07-14 04:02:23.743473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.032 [2024-07-14 04:02:23.743497] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:05.032 qpair failed and we were unable to recover it. 00:30:05.032 [2024-07-14 04:02:23.743647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.032 [2024-07-14 04:02:23.743875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.032 [2024-07-14 04:02:23.743900] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:05.032 qpair failed and we were unable to recover it. 00:30:05.032 [2024-07-14 04:02:23.744048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.032 [2024-07-14 04:02:23.744199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.032 [2024-07-14 04:02:23.744223] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:05.032 qpair failed and we were unable to recover it. 00:30:05.032 [2024-07-14 04:02:23.744426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.032 [2024-07-14 04:02:23.744600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.032 [2024-07-14 04:02:23.744624] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:05.032 qpair failed and we were unable to recover it. 00:30:05.032 [2024-07-14 04:02:23.744776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.032 [2024-07-14 04:02:23.744933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.032 [2024-07-14 04:02:23.744958] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:05.032 qpair failed and we were unable to recover it. 
00:30:05.032 [2024-07-14 04:02:23.745135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.032 [2024-07-14 04:02:23.745335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.032 [2024-07-14 04:02:23.745359] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:05.032 qpair failed and we were unable to recover it. 00:30:05.032 [2024-07-14 04:02:23.745542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.032 [2024-07-14 04:02:23.745683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.032 [2024-07-14 04:02:23.745707] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:05.032 qpair failed and we were unable to recover it. 00:30:05.032 [2024-07-14 04:02:23.745895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.032 [2024-07-14 04:02:23.746075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.032 [2024-07-14 04:02:23.746101] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:05.032 qpair failed and we were unable to recover it. 00:30:05.032 [2024-07-14 04:02:23.746260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.032 [2024-07-14 04:02:23.746478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.033 [2024-07-14 04:02:23.746502] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:05.033 qpair failed and we were unable to recover it. 00:30:05.033 [2024-07-14 04:02:23.746685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.033 [2024-07-14 04:02:23.746889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.033 [2024-07-14 04:02:23.746913] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:05.033 qpair failed and we were unable to recover it. 00:30:05.033 [2024-07-14 04:02:23.747088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.033 [2024-07-14 04:02:23.747289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.033 [2024-07-14 04:02:23.747313] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:05.033 qpair failed and we were unable to recover it. 00:30:05.033 [2024-07-14 04:02:23.747456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.033 [2024-07-14 04:02:23.747602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.033 [2024-07-14 04:02:23.747626] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:05.033 qpair failed and we were unable to recover it. 
00:30:05.033 [2024-07-14 04:02:23.747808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.033 [2024-07-14 04:02:23.747983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.033 [2024-07-14 04:02:23.748008] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:05.033 qpair failed and we were unable to recover it. 00:30:05.033 [2024-07-14 04:02:23.748162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.033 [2024-07-14 04:02:23.748331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.033 [2024-07-14 04:02:23.748356] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:05.033 qpair failed and we were unable to recover it. 00:30:05.033 [2024-07-14 04:02:23.748556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.033 [2024-07-14 04:02:23.748730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.033 [2024-07-14 04:02:23.748755] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:05.033 qpair failed and we were unable to recover it. 00:30:05.033 [2024-07-14 04:02:23.748920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.033 [2024-07-14 04:02:23.749099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.033 [2024-07-14 04:02:23.749123] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea8610 with addr=10.0.0.2, port=4420 00:30:05.033 qpair failed and we were unable to recover it. 00:30:05.033 [2024-07-14 04:02:23.749299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.033 [2024-07-14 04:02:23.749517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.033 [2024-07-14 04:02:23.749544] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.033 qpair failed and we were unable to recover it. 00:30:05.033 [2024-07-14 04:02:23.749731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.033 [2024-07-14 04:02:23.749930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.033 [2024-07-14 04:02:23.749956] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.033 qpair failed and we were unable to recover it. 00:30:05.033 [2024-07-14 04:02:23.750135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.033 [2024-07-14 04:02:23.750312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.033 [2024-07-14 04:02:23.750336] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.033 qpair failed and we were unable to recover it. 
00:30:05.033 [2024-07-14 04:02:23.750518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.033 [2024-07-14 04:02:23.750705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.033 [2024-07-14 04:02:23.750730] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.033 qpair failed and we were unable to recover it. 00:30:05.033 [2024-07-14 04:02:23.750913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.033 [2024-07-14 04:02:23.751090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.033 [2024-07-14 04:02:23.751114] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.033 qpair failed and we were unable to recover it. 00:30:05.033 [2024-07-14 04:02:23.751294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.033 [2024-07-14 04:02:23.751478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.033 [2024-07-14 04:02:23.751504] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.033 qpair failed and we were unable to recover it. 00:30:05.033 [2024-07-14 04:02:23.751664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.033 [2024-07-14 04:02:23.751876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.033 [2024-07-14 04:02:23.751901] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.033 qpair failed and we were unable to recover it. 00:30:05.033 [2024-07-14 04:02:23.752074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.033 [2024-07-14 04:02:23.752242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.033 [2024-07-14 04:02:23.752267] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.033 qpair failed and we were unable to recover it. 00:30:05.033 [2024-07-14 04:02:23.752444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.033 [2024-07-14 04:02:23.752657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.033 [2024-07-14 04:02:23.752683] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.033 qpair failed and we were unable to recover it. 00:30:05.033 [2024-07-14 04:02:23.752835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.033 [2024-07-14 04:02:23.753050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.033 [2024-07-14 04:02:23.753075] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.033 qpair failed and we were unable to recover it. 
00:30:05.033 [2024-07-14 04:02:23.753259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.033 [2024-07-14 04:02:23.753440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.033 [2024-07-14 04:02:23.753465] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.033 qpair failed and we were unable to recover it. 00:30:05.033 [2024-07-14 04:02:23.753641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.033 [2024-07-14 04:02:23.753794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.033 [2024-07-14 04:02:23.753819] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.033 qpair failed and we were unable to recover it. 00:30:05.033 [2024-07-14 04:02:23.754021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.033 [2024-07-14 04:02:23.754181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.033 [2024-07-14 04:02:23.754208] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.033 qpair failed and we were unable to recover it. 00:30:05.033 [2024-07-14 04:02:23.754388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.033 [2024-07-14 04:02:23.754562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.033 [2024-07-14 04:02:23.754586] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.033 qpair failed and we were unable to recover it. 00:30:05.033 [2024-07-14 04:02:23.754776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.033 [2024-07-14 04:02:23.754951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.033 [2024-07-14 04:02:23.754976] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.033 qpair failed and we were unable to recover it. 00:30:05.033 [2024-07-14 04:02:23.755140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.033 [2024-07-14 04:02:23.755355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.033 [2024-07-14 04:02:23.755379] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.033 qpair failed and we were unable to recover it. 00:30:05.033 [2024-07-14 04:02:23.755565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.033 [2024-07-14 04:02:23.755771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.033 [2024-07-14 04:02:23.755796] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.033 qpair failed and we were unable to recover it. 
00:30:05.033 [2024-07-14 04:02:23.755973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.033 [2024-07-14 04:02:23.756158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.033 [2024-07-14 04:02:23.756182] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.033 qpair failed and we were unable to recover it. 00:30:05.033 [2024-07-14 04:02:23.756381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.033 [2024-07-14 04:02:23.756554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.033 [2024-07-14 04:02:23.756578] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.033 qpair failed and we were unable to recover it. 00:30:05.033 [2024-07-14 04:02:23.756762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.033 [2024-07-14 04:02:23.756935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.033 [2024-07-14 04:02:23.756960] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.033 qpair failed and we were unable to recover it. 00:30:05.033 [2024-07-14 04:02:23.757118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.033 [2024-07-14 04:02:23.757278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.033 [2024-07-14 04:02:23.757302] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.033 qpair failed and we were unable to recover it. 00:30:05.033 [2024-07-14 04:02:23.757489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.033 [2024-07-14 04:02:23.757636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.033 [2024-07-14 04:02:23.757660] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.033 qpair failed and we were unable to recover it. 00:30:05.033 [2024-07-14 04:02:23.757811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.033 [2024-07-14 04:02:23.757988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.033 [2024-07-14 04:02:23.758013] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.033 qpair failed and we were unable to recover it. 00:30:05.033 [2024-07-14 04:02:23.758162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.033 [2024-07-14 04:02:23.758339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.033 [2024-07-14 04:02:23.758364] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.033 qpair failed and we were unable to recover it. 
00:30:05.033 [2024-07-14 04:02:23.758520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.033 [2024-07-14 04:02:23.758726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.033 [2024-07-14 04:02:23.758751] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.033 qpair failed and we were unable to recover it. 00:30:05.033 [2024-07-14 04:02:23.758958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.033 [2024-07-14 04:02:23.759114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.033 [2024-07-14 04:02:23.759137] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.033 qpair failed and we were unable to recover it. 00:30:05.033 [2024-07-14 04:02:23.759288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.033 [2024-07-14 04:02:23.759468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.033 [2024-07-14 04:02:23.759492] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.033 qpair failed and we were unable to recover it. 00:30:05.033 [2024-07-14 04:02:23.759715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.033 [2024-07-14 04:02:23.759862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.033 [2024-07-14 04:02:23.759891] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.033 qpair failed and we were unable to recover it. 00:30:05.033 [2024-07-14 04:02:23.760071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.033 [2024-07-14 04:02:23.760222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.033 [2024-07-14 04:02:23.760247] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.033 qpair failed and we were unable to recover it. 00:30:05.033 [2024-07-14 04:02:23.760404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.033 [2024-07-14 04:02:23.760610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.033 [2024-07-14 04:02:23.760634] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.033 qpair failed and we were unable to recover it. 00:30:05.033 [2024-07-14 04:02:23.760785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.033 [2024-07-14 04:02:23.760972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.033 [2024-07-14 04:02:23.760996] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.033 qpair failed and we were unable to recover it. 
00:30:05.033 [2024-07-14 04:02:23.761203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.033 [2024-07-14 04:02:23.761379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.033 [2024-07-14 04:02:23.761406] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.033 qpair failed and we were unable to recover it. 00:30:05.033 [2024-07-14 04:02:23.761590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.033 [2024-07-14 04:02:23.761771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.033 [2024-07-14 04:02:23.761795] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.033 qpair failed and we were unable to recover it. 00:30:05.033 [2024-07-14 04:02:23.761985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.033 [2024-07-14 04:02:23.762154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.033 [2024-07-14 04:02:23.762179] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.033 qpair failed and we were unable to recover it. 00:30:05.033 [2024-07-14 04:02:23.762362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.033 [2024-07-14 04:02:23.762513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.033 [2024-07-14 04:02:23.762538] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.033 qpair failed and we were unable to recover it. 00:30:05.033 [2024-07-14 04:02:23.762742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.033 [2024-07-14 04:02:23.762916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.033 [2024-07-14 04:02:23.762942] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.033 qpair failed and we were unable to recover it. 00:30:05.033 [2024-07-14 04:02:23.763095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.033 [2024-07-14 04:02:23.763272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.033 [2024-07-14 04:02:23.763296] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.033 qpair failed and we were unable to recover it. 00:30:05.033 [2024-07-14 04:02:23.763476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.033 [2024-07-14 04:02:23.763633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.033 [2024-07-14 04:02:23.763659] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.033 qpair failed and we were unable to recover it. 
00:30:05.033 [2024-07-14 04:02:23.763860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.033 [2024-07-14 04:02:23.764022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.033 [2024-07-14 04:02:23.764046] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.033 qpair failed and we were unable to recover it. 00:30:05.033 [2024-07-14 04:02:23.764225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.033 [2024-07-14 04:02:23.764399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.033 [2024-07-14 04:02:23.764423] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.033 qpair failed and we were unable to recover it. 00:30:05.033 [2024-07-14 04:02:23.764628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.033 [2024-07-14 04:02:23.764778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.033 [2024-07-14 04:02:23.764803] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.033 qpair failed and we were unable to recover it. 00:30:05.033 [2024-07-14 04:02:23.764962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.033 [2024-07-14 04:02:23.765167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.033 [2024-07-14 04:02:23.765191] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.033 qpair failed and we were unable to recover it. 00:30:05.033 [2024-07-14 04:02:23.765348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.033 [2024-07-14 04:02:23.765523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.033 [2024-07-14 04:02:23.765547] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.033 qpair failed and we were unable to recover it. 00:30:05.033 [2024-07-14 04:02:23.765727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.033 [2024-07-14 04:02:23.765883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.034 [2024-07-14 04:02:23.765909] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.034 qpair failed and we were unable to recover it. 00:30:05.034 [2024-07-14 04:02:23.766116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.034 [2024-07-14 04:02:23.766267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.034 [2024-07-14 04:02:23.766292] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.034 qpair failed and we were unable to recover it. 
00:30:05.034 [2024-07-14 04:02:23.766498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.034 [2024-07-14 04:02:23.766654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.034 [2024-07-14 04:02:23.766678] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.034 qpair failed and we were unable to recover it. 00:30:05.034 [2024-07-14 04:02:23.766858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.034 [2024-07-14 04:02:23.767041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.034 [2024-07-14 04:02:23.767065] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.034 qpair failed and we were unable to recover it. 00:30:05.034 [2024-07-14 04:02:23.767273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.034 [2024-07-14 04:02:23.767451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.034 [2024-07-14 04:02:23.767475] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.034 qpair failed and we were unable to recover it. 00:30:05.034 [2024-07-14 04:02:23.767652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.034 [2024-07-14 04:02:23.767798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.034 [2024-07-14 04:02:23.767822] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.034 qpair failed and we were unable to recover it. 00:30:05.034 [2024-07-14 04:02:23.767982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.034 [2024-07-14 04:02:23.768132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.034 [2024-07-14 04:02:23.768156] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.034 qpair failed and we were unable to recover it. 00:30:05.034 [2024-07-14 04:02:23.768333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.034 [2024-07-14 04:02:23.768510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.034 [2024-07-14 04:02:23.768538] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.034 qpair failed and we were unable to recover it. 00:30:05.034 [2024-07-14 04:02:23.768717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.034 [2024-07-14 04:02:23.768896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.034 [2024-07-14 04:02:23.768921] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.034 qpair failed and we were unable to recover it. 
00:30:05.034 [2024-07-14 04:02:23.769072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.034 [2024-07-14 04:02:23.769252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.034 [2024-07-14 04:02:23.769276] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420
00:30:05.034 qpair failed and we were unable to recover it.
00:30:05.034 [2024-07-14 04:02:23.769457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.034 [2024-07-14 04:02:23.769613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.034 [2024-07-14 04:02:23.769638] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420
00:30:05.034 qpair failed and we were unable to recover it.
00:30:05.034 [2024-07-14 04:02:23.769823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.034 [2024-07-14 04:02:23.770033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.034 [2024-07-14 04:02:23.770057] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420
00:30:05.034 qpair failed and we were unable to recover it.
00:30:05.034 [2024-07-14 04:02:23.770252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.034 [2024-07-14 04:02:23.770426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.034 [2024-07-14 04:02:23.770450] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420
00:30:05.034 qpair failed and we were unable to recover it.
00:30:05.034 [2024-07-14 04:02:23.770613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.034 [2024-07-14 04:02:23.770794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.034 [2024-07-14 04:02:23.770817] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420
00:30:05.034 qpair failed and we were unable to recover it.
00:30:05.034 [2024-07-14 04:02:23.770977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.034 [2024-07-14 04:02:23.771159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.034 [2024-07-14 04:02:23.771185] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 44: 2510942 Killed "${NVMF_APP[@]}" "$@"
00:30:05.034 qpair failed and we were unable to recover it.
00:30:05.034 [2024-07-14 04:02:23.771342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.034 [2024-07-14 04:02:23.771520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.034 04:02:23 -- host/target_disconnect.sh@56 -- # disconnect_init 10.0.0.2 [2024-07-14 04:02:23.771545] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 qpair failed and we were unable to recover it.
00:30:05.034 04:02:23 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 [2024-07-14 04:02:23.771721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.034 04:02:23 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt [2024-07-14 04:02:23.771900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 [2024-07-14 04:02:23.771930] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420
00:30:05.034 04:02:23 -- common/autotest_common.sh@712 -- # xtrace_disable
00:30:05.034 qpair failed and we were unable to recover it.
00:30:05.034 04:02:23 -- common/autotest_common.sh@10 -- # set +x
00:30:05.034 [2024-07-14 04:02:23.772111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.034 [2024-07-14 04:02:23.772291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.034 [2024-07-14 04:02:23.772314] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420
00:30:05.034 qpair failed and we were unable to recover it.
00:30:05.034 [2024-07-14 04:02:23.772493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.034 [2024-07-14 04:02:23.772641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.034 [2024-07-14 04:02:23.772665] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420
00:30:05.034 qpair failed and we were unable to recover it.
00:30:05.034 [2024-07-14 04:02:23.772818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.034 [2024-07-14 04:02:23.773000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.034 [2024-07-14 04:02:23.773025] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420
00:30:05.034 qpair failed and we were unable to recover it.
00:30:05.034 [2024-07-14 04:02:23.773173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.034 [2024-07-14 04:02:23.773351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.034 [2024-07-14 04:02:23.773375] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420
00:30:05.034 qpair failed and we were unable to recover it.
00:30:05.034 [2024-07-14 04:02:23.773553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.034 [2024-07-14 04:02:23.773734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.034 [2024-07-14 04:02:23.773758] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420
00:30:05.034 qpair failed and we were unable to recover it.
00:30:05.034 [2024-07-14 04:02:23.773977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.034 [2024-07-14 04:02:23.774155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.034 [2024-07-14 04:02:23.774179] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420
00:30:05.034 qpair failed and we were unable to recover it.
00:30:05.034 [2024-07-14 04:02:23.774355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.034 [2024-07-14 04:02:23.774557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.034 [2024-07-14 04:02:23.774582] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420
00:30:05.034 qpair failed and we were unable to recover it.
00:30:05.034 [2024-07-14 04:02:23.774760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.034 [2024-07-14 04:02:23.774923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.034 [2024-07-14 04:02:23.774950] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420
00:30:05.034 qpair failed and we were unable to recover it.
00:30:05.034 [2024-07-14 04:02:23.775131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.034 [2024-07-14 04:02:23.775305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.034 [2024-07-14 04:02:23.775330] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420
00:30:05.034 qpair failed and we were unable to recover it.
00:30:05.034 [2024-07-14 04:02:23.775491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.034 [2024-07-14 04:02:23.775687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.034 [2024-07-14 04:02:23.775711] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420
00:30:05.034 qpair failed and we were unable to recover it.
00:30:05.034 [2024-07-14 04:02:23.775891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.034 04:02:23 -- nvmf/common.sh@469 -- # nvmfpid=2511639 [2024-07-14 04:02:23.776075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 [2024-07-14 04:02:23.776101] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420
00:30:05.034 qpair failed and we were unable to recover it.
00:30:05.034 04:02:23 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:30:05.034 04:02:23 -- nvmf/common.sh@470 -- # waitforlisten 2511639 [2024-07-14 04:02:23.776300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.034 04:02:23 -- common/autotest_common.sh@819 -- # '[' -z 2511639 ']' [2024-07-14 04:02:23.776480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 [2024-07-14 04:02:23.776507] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420
00:30:05.034 qpair failed and we were unable to recover it.
00:30:05.034 04:02:23 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock [2024-07-14 04:02:23.776712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.034 04:02:23 -- common/autotest_common.sh@824 -- # local max_retries=100 [2024-07-14 04:02:23.776917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 [2024-07-14 04:02:23.776942] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420
00:30:05.034 qpair failed and we were unable to recover it.
00:30:05.034 04:02:23 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:30:05.034 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:30:05.034 04:02:23 -- common/autotest_common.sh@828 -- # xtrace_disable [2024-07-14 04:02:23.777122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.034 04:02:23 -- common/autotest_common.sh@10 -- # set +x [2024-07-14 04:02:23.777301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 [2024-07-14 04:02:23.777328] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420
00:30:05.034 qpair failed and we were unable to recover it.
00:30:05.034 [2024-07-14 04:02:23.777536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.034 [2024-07-14 04:02:23.777701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.034 [2024-07-14 04:02:23.777724] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420
00:30:05.034 qpair failed and we were unable to recover it.
00:30:05.034 [2024-07-14 04:02:23.777880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.034 [2024-07-14 04:02:23.778030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.034 [2024-07-14 04:02:23.778054] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420
00:30:05.034 qpair failed and we were unable to recover it.
00:30:05.034 [2024-07-14 04:02:23.778235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.034 [2024-07-14 04:02:23.778435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.034 [2024-07-14 04:02:23.778460] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420
00:30:05.034 qpair failed and we were unable to recover it.
00:30:05.034 [2024-07-14 04:02:23.778645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.034 [2024-07-14 04:02:23.778829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.034 [2024-07-14 04:02:23.778856] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420
00:30:05.034 qpair failed and we were unable to recover it.
00:30:05.034 [2024-07-14 04:02:23.779027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.034 [2024-07-14 04:02:23.779184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.034 [2024-07-14 04:02:23.779209] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.034 qpair failed and we were unable to recover it. 00:30:05.034 [2024-07-14 04:02:23.779382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.034 [2024-07-14 04:02:23.779557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.034 [2024-07-14 04:02:23.779581] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.034 qpair failed and we were unable to recover it. 00:30:05.034 [2024-07-14 04:02:23.779785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.034 [2024-07-14 04:02:23.779944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.034 [2024-07-14 04:02:23.779971] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.034 qpair failed and we were unable to recover it. 00:30:05.034 [2024-07-14 04:02:23.780158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.034 [2024-07-14 04:02:23.780338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.034 [2024-07-14 04:02:23.780363] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.034 qpair failed and we were unable to recover it. 00:30:05.034 [2024-07-14 04:02:23.780518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.034 [2024-07-14 04:02:23.780697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.034 [2024-07-14 04:02:23.780723] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.034 qpair failed and we were unable to recover it. 00:30:05.034 [2024-07-14 04:02:23.780913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.034 [2024-07-14 04:02:23.781093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.034 [2024-07-14 04:02:23.781117] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.034 qpair failed and we were unable to recover it. 00:30:05.034 [2024-07-14 04:02:23.781320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.034 [2024-07-14 04:02:23.781497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.034 [2024-07-14 04:02:23.781522] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.034 qpair failed and we were unable to recover it. 
00:30:05.034 [2024-07-14 04:02:23.781702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.034 [2024-07-14 04:02:23.781877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.034 [2024-07-14 04:02:23.781901] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.034 qpair failed and we were unable to recover it. 00:30:05.034 [2024-07-14 04:02:23.782084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.034 [2024-07-14 04:02:23.782254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.034 [2024-07-14 04:02:23.782278] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.034 qpair failed and we were unable to recover it. 00:30:05.034 [2024-07-14 04:02:23.782470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.034 [2024-07-14 04:02:23.782618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.034 [2024-07-14 04:02:23.782647] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.034 qpair failed and we were unable to recover it. 00:30:05.034 [2024-07-14 04:02:23.782823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.034 [2024-07-14 04:02:23.783025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.034 [2024-07-14 04:02:23.783052] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.034 qpair failed and we were unable to recover it. 00:30:05.034 [2024-07-14 04:02:23.783236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.034 [2024-07-14 04:02:23.783440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.034 [2024-07-14 04:02:23.783464] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.034 qpair failed and we were unable to recover it. 00:30:05.034 [2024-07-14 04:02:23.783644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.034 [2024-07-14 04:02:23.783823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.034 [2024-07-14 04:02:23.783848] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.034 qpair failed and we were unable to recover it. 00:30:05.034 [2024-07-14 04:02:23.784039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.034 [2024-07-14 04:02:23.784226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.034 [2024-07-14 04:02:23.784250] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.034 qpair failed and we were unable to recover it. 
00:30:05.034 [2024-07-14 04:02:23.784399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.034 [2024-07-14 04:02:23.784553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.034 [2024-07-14 04:02:23.784577] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.034 qpair failed and we were unable to recover it. 00:30:05.034 [2024-07-14 04:02:23.784743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.034 [2024-07-14 04:02:23.784952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.034 [2024-07-14 04:02:23.784977] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.034 qpair failed and we were unable to recover it. 00:30:05.034 [2024-07-14 04:02:23.785160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.034 [2024-07-14 04:02:23.785313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.034 [2024-07-14 04:02:23.785337] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.034 qpair failed and we were unable to recover it. 00:30:05.034 [2024-07-14 04:02:23.785485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.034 [2024-07-14 04:02:23.785663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.034 [2024-07-14 04:02:23.785688] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.034 qpair failed and we were unable to recover it. 00:30:05.034 [2024-07-14 04:02:23.785876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.034 [2024-07-14 04:02:23.786057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.034 [2024-07-14 04:02:23.786081] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.034 qpair failed and we were unable to recover it. 00:30:05.034 [2024-07-14 04:02:23.786253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.034 [2024-07-14 04:02:23.786428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.034 [2024-07-14 04:02:23.786458] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.034 qpair failed and we were unable to recover it. 00:30:05.034 [2024-07-14 04:02:23.786638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.034 [2024-07-14 04:02:23.786813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.034 [2024-07-14 04:02:23.786839] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.034 qpair failed and we were unable to recover it. 
00:30:05.034 [2024-07-14 04:02:23.787054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.034 [2024-07-14 04:02:23.787230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.034 [2024-07-14 04:02:23.787254] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.034 qpair failed and we were unable to recover it. 00:30:05.034 [2024-07-14 04:02:23.787436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.034 [2024-07-14 04:02:23.787610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.034 [2024-07-14 04:02:23.787634] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.034 qpair failed and we were unable to recover it. 00:30:05.034 [2024-07-14 04:02:23.787813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.034 [2024-07-14 04:02:23.788016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.034 [2024-07-14 04:02:23.788041] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.034 qpair failed and we were unable to recover it. 00:30:05.034 [2024-07-14 04:02:23.788217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.034 [2024-07-14 04:02:23.788369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.034 [2024-07-14 04:02:23.788393] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.034 qpair failed and we were unable to recover it. 00:30:05.034 [2024-07-14 04:02:23.788571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.034 [2024-07-14 04:02:23.788748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.034 [2024-07-14 04:02:23.788773] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.034 qpair failed and we were unable to recover it. 00:30:05.034 [2024-07-14 04:02:23.788950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.034 [2024-07-14 04:02:23.789134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.034 [2024-07-14 04:02:23.789159] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.034 qpair failed and we were unable to recover it. 00:30:05.035 [2024-07-14 04:02:23.789319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.035 [2024-07-14 04:02:23.789490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.035 [2024-07-14 04:02:23.789516] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.035 qpair failed and we were unable to recover it. 
00:30:05.035 [2024-07-14 04:02:23.789692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.035 [2024-07-14 04:02:23.789874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.035 [2024-07-14 04:02:23.789900] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.035 qpair failed and we were unable to recover it. 00:30:05.035 [2024-07-14 04:02:23.790075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.035 [2024-07-14 04:02:23.790262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.035 [2024-07-14 04:02:23.790290] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.035 qpair failed and we were unable to recover it. 00:30:05.035 [2024-07-14 04:02:23.790494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.035 [2024-07-14 04:02:23.790669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.035 [2024-07-14 04:02:23.790694] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.035 qpair failed and we were unable to recover it. 00:30:05.035 [2024-07-14 04:02:23.790841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.035 [2024-07-14 04:02:23.791000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.035 [2024-07-14 04:02:23.791026] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.035 qpair failed and we were unable to recover it. 00:30:05.035 [2024-07-14 04:02:23.791203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.035 [2024-07-14 04:02:23.791356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.035 [2024-07-14 04:02:23.791380] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.035 qpair failed and we were unable to recover it. 00:30:05.035 [2024-07-14 04:02:23.791561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.035 [2024-07-14 04:02:23.791761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.035 [2024-07-14 04:02:23.791785] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.035 qpair failed and we were unable to recover it. 00:30:05.035 [2024-07-14 04:02:23.791961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.035 [2024-07-14 04:02:23.792138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.035 [2024-07-14 04:02:23.792162] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.035 qpair failed and we were unable to recover it. 
00:30:05.035 [2024-07-14 04:02:23.792310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.035 [2024-07-14 04:02:23.792482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.035 [2024-07-14 04:02:23.792507] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.035 qpair failed and we were unable to recover it. 00:30:05.035 [2024-07-14 04:02:23.792683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.035 [2024-07-14 04:02:23.792894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.035 [2024-07-14 04:02:23.792919] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.035 qpair failed and we were unable to recover it. 00:30:05.035 [2024-07-14 04:02:23.793077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.035 [2024-07-14 04:02:23.793249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.035 [2024-07-14 04:02:23.793274] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.035 qpair failed and we were unable to recover it. 00:30:05.035 [2024-07-14 04:02:23.793478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.035 [2024-07-14 04:02:23.793684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.035 [2024-07-14 04:02:23.793709] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.035 qpair failed and we were unable to recover it. 00:30:05.035 [2024-07-14 04:02:23.793887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.035 [2024-07-14 04:02:23.794072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.035 [2024-07-14 04:02:23.794097] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.035 qpair failed and we were unable to recover it. 00:30:05.035 [2024-07-14 04:02:23.794261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.035 [2024-07-14 04:02:23.794433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.035 [2024-07-14 04:02:23.794458] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.035 qpair failed and we were unable to recover it. 00:30:05.035 [2024-07-14 04:02:23.794639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.035 [2024-07-14 04:02:23.794788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.035 [2024-07-14 04:02:23.794813] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.035 qpair failed and we were unable to recover it. 
00:30:05.035 [2024-07-14 04:02:23.794973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.035 [2024-07-14 04:02:23.795148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.035 [2024-07-14 04:02:23.795173] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.035 qpair failed and we were unable to recover it. 00:30:05.035 [2024-07-14 04:02:23.795353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.035 [2024-07-14 04:02:23.795528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.035 [2024-07-14 04:02:23.795553] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.035 qpair failed and we were unable to recover it. 00:30:05.035 [2024-07-14 04:02:23.795707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.035 [2024-07-14 04:02:23.795886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.035 [2024-07-14 04:02:23.795911] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.035 qpair failed and we were unable to recover it. 00:30:05.035 [2024-07-14 04:02:23.796115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.035 [2024-07-14 04:02:23.796292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.035 [2024-07-14 04:02:23.796316] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.035 qpair failed and we were unable to recover it. 00:30:05.035 [2024-07-14 04:02:23.796528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.035 [2024-07-14 04:02:23.796705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.035 [2024-07-14 04:02:23.796729] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.035 qpair failed and we were unable to recover it. 00:30:05.035 [2024-07-14 04:02:23.796909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.035 [2024-07-14 04:02:23.797054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.035 [2024-07-14 04:02:23.797079] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.035 qpair failed and we were unable to recover it. 00:30:05.035 [2024-07-14 04:02:23.797227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.035 [2024-07-14 04:02:23.797413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.035 [2024-07-14 04:02:23.797437] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.035 qpair failed and we were unable to recover it. 
00:30:05.035 [2024-07-14 04:02:23.797653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.035 [2024-07-14 04:02:23.797853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.035 [2024-07-14 04:02:23.797885] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.035 qpair failed and we were unable to recover it. 00:30:05.035 [2024-07-14 04:02:23.798062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.035 [2024-07-14 04:02:23.798234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.035 [2024-07-14 04:02:23.798258] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.035 qpair failed and we were unable to recover it. 00:30:05.035 [2024-07-14 04:02:23.798470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.035 [2024-07-14 04:02:23.798645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.035 [2024-07-14 04:02:23.798669] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.035 qpair failed and we were unable to recover it. 00:30:05.035 [2024-07-14 04:02:23.798855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.035 [2024-07-14 04:02:23.799062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.035 [2024-07-14 04:02:23.799087] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.035 qpair failed and we were unable to recover it. 00:30:05.035 [2024-07-14 04:02:23.799273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.035 [2024-07-14 04:02:23.799454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.035 [2024-07-14 04:02:23.799479] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.035 qpair failed and we were unable to recover it. 00:30:05.035 [2024-07-14 04:02:23.799657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.035 [2024-07-14 04:02:23.799800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.035 [2024-07-14 04:02:23.799825] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.035 qpair failed and we were unable to recover it. 00:30:05.035 [2024-07-14 04:02:23.800053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.035 [2024-07-14 04:02:23.800224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.035 [2024-07-14 04:02:23.800248] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.035 qpair failed and we were unable to recover it. 
00:30:05.035 [2024-07-14 04:02:23.800452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.035 [2024-07-14 04:02:23.800632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.035 [2024-07-14 04:02:23.800656] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.035 qpair failed and we were unable to recover it. 00:30:05.035 [2024-07-14 04:02:23.800880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.035 [2024-07-14 04:02:23.801065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.035 [2024-07-14 04:02:23.801089] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.035 qpair failed and we were unable to recover it. 00:30:05.035 [2024-07-14 04:02:23.801267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.035 [2024-07-14 04:02:23.801446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.035 [2024-07-14 04:02:23.801472] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.035 qpair failed and we were unable to recover it. 00:30:05.035 [2024-07-14 04:02:23.801635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.035 [2024-07-14 04:02:23.801808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.035 [2024-07-14 04:02:23.801832] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.035 qpair failed and we were unable to recover it. 00:30:05.035 [2024-07-14 04:02:23.802051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.035 [2024-07-14 04:02:23.802204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.035 [2024-07-14 04:02:23.802229] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.035 qpair failed and we were unable to recover it. 00:30:05.035 [2024-07-14 04:02:23.802409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.035 [2024-07-14 04:02:23.802585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.035 [2024-07-14 04:02:23.802610] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.035 qpair failed and we were unable to recover it. 00:30:05.035 [2024-07-14 04:02:23.802787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.035 [2024-07-14 04:02:23.802967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.035 [2024-07-14 04:02:23.802993] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.035 qpair failed and we were unable to recover it. 
00:30:05.035 [2024-07-14 04:02:23.803152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.035 [2024-07-14 04:02:23.803302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.035 [2024-07-14 04:02:23.803326] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.035 qpair failed and we were unable to recover it. 00:30:05.035 [2024-07-14 04:02:23.803515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.035 [2024-07-14 04:02:23.803692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.035 [2024-07-14 04:02:23.803719] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.035 qpair failed and we were unable to recover it. 00:30:05.035 [2024-07-14 04:02:23.803882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.035 [2024-07-14 04:02:23.804055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.035 [2024-07-14 04:02:23.804079] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.035 qpair failed and we were unable to recover it. 00:30:05.035 [2024-07-14 04:02:23.804294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.035 [2024-07-14 04:02:23.804471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.035 [2024-07-14 04:02:23.804496] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.035 qpair failed and we were unable to recover it. 00:30:05.035 [2024-07-14 04:02:23.804675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.035 [2024-07-14 04:02:23.804848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.035 [2024-07-14 04:02:23.804879] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.035 qpair failed and we were unable to recover it. 00:30:05.035 [2024-07-14 04:02:23.805063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.035 [2024-07-14 04:02:23.805250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.035 [2024-07-14 04:02:23.805274] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.035 qpair failed and we were unable to recover it. 00:30:05.035 [2024-07-14 04:02:23.805474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.035 [2024-07-14 04:02:23.805613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.035 [2024-07-14 04:02:23.805637] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.035 qpair failed and we were unable to recover it. 
00:30:05.035 [2024-07-14 04:02:23.805824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.035 [2024-07-14 04:02:23.806007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.035 [2024-07-14 04:02:23.806032] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.035 qpair failed and we were unable to recover it. 00:30:05.035 [2024-07-14 04:02:23.806243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.035 [2024-07-14 04:02:23.806393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.035 [2024-07-14 04:02:23.806418] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.035 qpair failed and we were unable to recover it. 00:30:05.035 [2024-07-14 04:02:23.806596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.035 [2024-07-14 04:02:23.806769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.035 [2024-07-14 04:02:23.806793] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.035 qpair failed and we were unable to recover it. 00:30:05.035 [2024-07-14 04:02:23.806974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.035 [2024-07-14 04:02:23.807182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.035 [2024-07-14 04:02:23.807206] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.035 qpair failed and we were unable to recover it. 00:30:05.035 [2024-07-14 04:02:23.807355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.035 [2024-07-14 04:02:23.807531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.035 [2024-07-14 04:02:23.807555] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.035 qpair failed and we were unable to recover it. 00:30:05.035 [2024-07-14 04:02:23.807739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.035 [2024-07-14 04:02:23.807955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.035 [2024-07-14 04:02:23.807981] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.035 qpair failed and we were unable to recover it. 00:30:05.035 [2024-07-14 04:02:23.808147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.035 [2024-07-14 04:02:23.808302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.035 [2024-07-14 04:02:23.808327] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.035 qpair failed and we were unable to recover it. 
00:30:05.035 [2024-07-14 04:02:23.808508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.035 [2024-07-14 04:02:23.808681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.035 [2024-07-14 04:02:23.808706] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.035 qpair failed and we were unable to recover it. 00:30:05.035 [2024-07-14 04:02:23.808892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.035 [2024-07-14 04:02:23.809046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.035 [2024-07-14 04:02:23.809071] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.035 qpair failed and we were unable to recover it. 00:30:05.035 [2024-07-14 04:02:23.809289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.035 [2024-07-14 04:02:23.809466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.036 [2024-07-14 04:02:23.809492] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.036 qpair failed and we were unable to recover it. 00:30:05.036 [2024-07-14 04:02:23.809654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.036 [2024-07-14 04:02:23.809859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.036 [2024-07-14 04:02:23.809891] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.036 qpair failed and we were unable to recover it. 00:30:05.036 [2024-07-14 04:02:23.810098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.036 [2024-07-14 04:02:23.810280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.036 [2024-07-14 04:02:23.810306] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.036 qpair failed and we were unable to recover it. 00:30:05.036 [2024-07-14 04:02:23.810479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.036 [2024-07-14 04:02:23.810679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.036 [2024-07-14 04:02:23.810703] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.036 qpair failed and we were unable to recover it. 00:30:05.036 [2024-07-14 04:02:23.810874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.036 [2024-07-14 04:02:23.811084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.036 [2024-07-14 04:02:23.811109] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.036 qpair failed and we were unable to recover it. 
00:30:05.036 [2024-07-14 04:02:23.811337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.036 [2024-07-14 04:02:23.811513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.036 [2024-07-14 04:02:23.811537] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.036 qpair failed and we were unable to recover it. 00:30:05.036 [2024-07-14 04:02:23.811716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.036 [2024-07-14 04:02:23.811895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.036 [2024-07-14 04:02:23.811921] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.036 qpair failed and we were unable to recover it. 00:30:05.036 [2024-07-14 04:02:23.812128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.036 [2024-07-14 04:02:23.812271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.036 [2024-07-14 04:02:23.812296] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.036 qpair failed and we were unable to recover it. 00:30:05.036 [2024-07-14 04:02:23.812483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.036 [2024-07-14 04:02:23.812651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.036 [2024-07-14 04:02:23.812676] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.036 qpair failed and we were unable to recover it. 00:30:05.036 [2024-07-14 04:02:23.812828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.036 [2024-07-14 04:02:23.813026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.036 [2024-07-14 04:02:23.813052] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.036 qpair failed and we were unable to recover it. 00:30:05.036 [2024-07-14 04:02:23.813255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.036 [2024-07-14 04:02:23.813438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.036 [2024-07-14 04:02:23.813463] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.036 qpair failed and we were unable to recover it. 00:30:05.036 [2024-07-14 04:02:23.813671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.036 [2024-07-14 04:02:23.813853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.036 [2024-07-14 04:02:23.813885] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.036 qpair failed and we were unable to recover it. 
00:30:05.036 [2024-07-14 04:02:23.814095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.036 [2024-07-14 04:02:23.814252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.036 [2024-07-14 04:02:23.814277] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.036 qpair failed and we were unable to recover it. 00:30:05.036 [2024-07-14 04:02:23.814458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.036 [2024-07-14 04:02:23.814640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.036 [2024-07-14 04:02:23.814664] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.036 qpair failed and we were unable to recover it. 00:30:05.036 [2024-07-14 04:02:23.814824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.036 [2024-07-14 04:02:23.815015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.036 [2024-07-14 04:02:23.815040] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.036 qpair failed and we were unable to recover it. 00:30:05.036 [2024-07-14 04:02:23.815190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.036 [2024-07-14 04:02:23.815369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.036 [2024-07-14 04:02:23.815394] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.036 qpair failed and we were unable to recover it. 00:30:05.036 [2024-07-14 04:02:23.815574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.036 [2024-07-14 04:02:23.815720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.036 [2024-07-14 04:02:23.815744] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.036 qpair failed and we were unable to recover it. 00:30:05.036 [2024-07-14 04:02:23.815952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.036 [2024-07-14 04:02:23.816127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.036 [2024-07-14 04:02:23.816152] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.036 qpair failed and we were unable to recover it. 00:30:05.036 [2024-07-14 04:02:23.816338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.036 [2024-07-14 04:02:23.816496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.036 [2024-07-14 04:02:23.816523] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.036 qpair failed and we were unable to recover it. 
00:30:05.036 [2024-07-14 04:02:23.816730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.036 [2024-07-14 04:02:23.816884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.036 [2024-07-14 04:02:23.816909] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.036 qpair failed and we were unable to recover it. 00:30:05.036 [2024-07-14 04:02:23.817070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.036 [2024-07-14 04:02:23.817247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.036 [2024-07-14 04:02:23.817271] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.036 qpair failed and we were unable to recover it. 00:30:05.036 [2024-07-14 04:02:23.817432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.036 [2024-07-14 04:02:23.817605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.036 [2024-07-14 04:02:23.817630] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.036 qpair failed and we were unable to recover it. 00:30:05.036 [2024-07-14 04:02:23.817804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.036 [2024-07-14 04:02:23.817995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.036 [2024-07-14 04:02:23.818021] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.036 qpair failed and we were unable to recover it. 00:30:05.036 [2024-07-14 04:02:23.818174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.036 [2024-07-14 04:02:23.818351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.036 [2024-07-14 04:02:23.818375] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.036 qpair failed and we were unable to recover it. 00:30:05.036 [2024-07-14 04:02:23.818592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.036 [2024-07-14 04:02:23.818796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.036 [2024-07-14 04:02:23.818820] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.036 qpair failed and we were unable to recover it. 00:30:05.036 [2024-07-14 04:02:23.818986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.036 [2024-07-14 04:02:23.819167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.036 [2024-07-14 04:02:23.819192] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.036 qpair failed and we were unable to recover it. 
00:30:05.036 [2024-07-14 04:02:23.819394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.036 [2024-07-14 04:02:23.819568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.036 [2024-07-14 04:02:23.819595] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.036 qpair failed and we were unable to recover it. 00:30:05.036 [2024-07-14 04:02:23.819772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.036 [2024-07-14 04:02:23.819925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.036 [2024-07-14 04:02:23.819952] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.036 qpair failed and we were unable to recover it. 00:30:05.036 [2024-07-14 04:02:23.820133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.036 [2024-07-14 04:02:23.820310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.036 [2024-07-14 04:02:23.820334] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.036 qpair failed and we were unable to recover it. 00:30:05.036 [2024-07-14 04:02:23.820513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.036 [2024-07-14 04:02:23.820687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.036 [2024-07-14 04:02:23.820712] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.036 qpair failed and we were unable to recover it. 00:30:05.036 [2024-07-14 04:02:23.820886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.036 [2024-07-14 04:02:23.821069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.036 [2024-07-14 04:02:23.821093] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.036 qpair failed and we were unable to recover it. 00:30:05.036 [2024-07-14 04:02:23.821275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.036 [2024-07-14 04:02:23.821461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.036 [2024-07-14 04:02:23.821485] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.036 qpair failed and we were unable to recover it. 00:30:05.036 [2024-07-14 04:02:23.821639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.036 [2024-07-14 04:02:23.821814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.036 [2024-07-14 04:02:23.821838] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.036 qpair failed and we were unable to recover it. 00:30:05.036 [2024-07-14 04:02:23.821999] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:30:05.036 [2024-07-14 04:02:23.822036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.036 [2024-07-14 04:02:23.822072] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:05.036 [2024-07-14 04:02:23.822216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.036 [2024-07-14 04:02:23.822240] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.036 qpair failed and we were unable to recover it. 00:30:05.036 [2024-07-14 04:02:23.822401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.036 [2024-07-14 04:02:23.822578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.036 [2024-07-14 04:02:23.822601] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.036 qpair failed and we were unable to recover it. 00:30:05.036 [2024-07-14 04:02:23.822761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.036 [2024-07-14 04:02:23.822910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.036 [2024-07-14 04:02:23.822936] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.036 qpair failed and we were unable to recover it. 00:30:05.036 [2024-07-14 04:02:23.823144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.036 [2024-07-14 04:02:23.823330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.036 [2024-07-14 04:02:23.823355] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.036 qpair failed and we were unable to recover it. 00:30:05.036 [2024-07-14 04:02:23.823529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.036 [2024-07-14 04:02:23.823684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.036 [2024-07-14 04:02:23.823708] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.036 qpair failed and we were unable to recover it. 00:30:05.036 [2024-07-14 04:02:23.823912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.036 [2024-07-14 04:02:23.824087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.036 [2024-07-14 04:02:23.824111] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.036 qpair failed and we were unable to recover it. 00:30:05.036 [2024-07-14 04:02:23.824261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.036 [2024-07-14 04:02:23.824413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.036 [2024-07-14 04:02:23.824437] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.036 qpair failed and we were unable to recover it. 
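The "[ DPDK EAL parameters: ... ]" entry above records how the just-started nvmf_tgt initialized DPDK 23.11.0. It is restated below with annotations for readability; the per-flag notes are my reading of the standard SPDK/DPDK options, not text from the log.

# Target invocation as recorded earlier in the log:
#   -i 0       shared-memory instance id
#   -e 0xFFFF  tracepoint group mask
#   -m 0xF0    reactor core mask (cores 4-7)
ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
# EAL arguments derived from that, per the log line above:
#   -c 0xF0                          core mask passed through to EAL
#   --no-telemetry                   disable DPDK telemetry
#   --log-level=lib.eal:6 etc.       per-component log verbosity
#   --base-virtaddr=0x200000000000   fixed base virtual address for shared memory
#   --match-allocations              free hugepage memory back as originally allocated
#   --file-prefix=spdk0              namespace hugepage/shared files for this instance
#   --proc-type=auto                 auto-detect primary vs. secondary process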
00:30:05.036 [2024-07-14 04:02:23.824655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.036 [2024-07-14 04:02:23.824842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.036 [2024-07-14 04:02:23.824872] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.036 qpair failed and we were unable to recover it. 00:30:05.036 [2024-07-14 04:02:23.825026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.036 [2024-07-14 04:02:23.825176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.036 [2024-07-14 04:02:23.825202] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.036 qpair failed and we were unable to recover it. 00:30:05.036 [2024-07-14 04:02:23.825358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.036 [2024-07-14 04:02:23.825539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.036 [2024-07-14 04:02:23.825563] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.036 qpair failed and we were unable to recover it. 00:30:05.036 [2024-07-14 04:02:23.825767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.036 [2024-07-14 04:02:23.825917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.036 [2024-07-14 04:02:23.825943] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.036 qpair failed and we were unable to recover it. 00:30:05.036 [2024-07-14 04:02:23.826096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.036 [2024-07-14 04:02:23.826277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.036 [2024-07-14 04:02:23.826302] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.036 qpair failed and we were unable to recover it. 00:30:05.036 [2024-07-14 04:02:23.826484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.036 [2024-07-14 04:02:23.826664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.036 [2024-07-14 04:02:23.826689] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.036 qpair failed and we were unable to recover it. 00:30:05.036 [2024-07-14 04:02:23.826876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.036 [2024-07-14 04:02:23.827082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.036 [2024-07-14 04:02:23.827108] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.036 qpair failed and we were unable to recover it. 
00:30:05.036 [2024-07-14 04:02:23.827300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.036 [2024-07-14 04:02:23.827502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.036 [2024-07-14 04:02:23.827526] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.036 qpair failed and we were unable to recover it. 00:30:05.036 [2024-07-14 04:02:23.827708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.036 [2024-07-14 04:02:23.827883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.036 [2024-07-14 04:02:23.827908] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.036 qpair failed and we were unable to recover it. 00:30:05.036 [2024-07-14 04:02:23.828071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.036 [2024-07-14 04:02:23.828254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.036 [2024-07-14 04:02:23.828279] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.036 qpair failed and we were unable to recover it. 00:30:05.036 [2024-07-14 04:02:23.828462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.036 [2024-07-14 04:02:23.828618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.036 [2024-07-14 04:02:23.828642] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.036 qpair failed and we were unable to recover it. 00:30:05.036 [2024-07-14 04:02:23.828819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.036 [2024-07-14 04:02:23.829001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.036 [2024-07-14 04:02:23.829026] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.036 qpair failed and we were unable to recover it. 00:30:05.036 [2024-07-14 04:02:23.829214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.036 [2024-07-14 04:02:23.829397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.036 [2024-07-14 04:02:23.829421] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.036 qpair failed and we were unable to recover it. 00:30:05.036 [2024-07-14 04:02:23.829610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.036 [2024-07-14 04:02:23.829814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.036 [2024-07-14 04:02:23.829849] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.036 qpair failed and we were unable to recover it. 
00:30:05.036 [2024-07-14 04:02:23.830074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.036 [2024-07-14 04:02:23.830258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.036 [2024-07-14 04:02:23.830283] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.036 qpair failed and we were unable to recover it. 00:30:05.036 [2024-07-14 04:02:23.830462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.037 [2024-07-14 04:02:23.830646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.037 [2024-07-14 04:02:23.830671] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.037 qpair failed and we were unable to recover it. 00:30:05.037 [2024-07-14 04:02:23.830842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.037 [2024-07-14 04:02:23.831027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.037 [2024-07-14 04:02:23.831052] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.037 qpair failed and we were unable to recover it. 00:30:05.037 [2024-07-14 04:02:23.831248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.037 [2024-07-14 04:02:23.831422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.037 [2024-07-14 04:02:23.831447] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.037 qpair failed and we were unable to recover it. 00:30:05.037 [2024-07-14 04:02:23.831653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.037 [2024-07-14 04:02:23.831811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.037 [2024-07-14 04:02:23.831835] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.037 qpair failed and we were unable to recover it. 00:30:05.037 [2024-07-14 04:02:23.832019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.037 [2024-07-14 04:02:23.832173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.037 [2024-07-14 04:02:23.832199] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.037 qpair failed and we were unable to recover it. 00:30:05.037 [2024-07-14 04:02:23.832385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.037 [2024-07-14 04:02:23.832559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.037 [2024-07-14 04:02:23.832587] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.037 qpair failed and we were unable to recover it. 
00:30:05.037 [2024-07-14 04:02:23.832736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.037 [2024-07-14 04:02:23.832914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.037 [2024-07-14 04:02:23.832940] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.037 qpair failed and we were unable to recover it. 00:30:05.037 [2024-07-14 04:02:23.833125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.037 [2024-07-14 04:02:23.833299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.037 [2024-07-14 04:02:23.833324] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.037 qpair failed and we were unable to recover it. 00:30:05.037 [2024-07-14 04:02:23.833509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.037 [2024-07-14 04:02:23.833682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.037 [2024-07-14 04:02:23.833707] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.037 qpair failed and we were unable to recover it. 00:30:05.037 [2024-07-14 04:02:23.833873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.037 [2024-07-14 04:02:23.834054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.037 [2024-07-14 04:02:23.834079] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.037 qpair failed and we were unable to recover it. 00:30:05.037 [2024-07-14 04:02:23.834270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.037 [2024-07-14 04:02:23.834419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.037 [2024-07-14 04:02:23.834444] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.037 qpair failed and we were unable to recover it. 00:30:05.037 [2024-07-14 04:02:23.834653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.037 [2024-07-14 04:02:23.834832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.037 [2024-07-14 04:02:23.834858] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.037 qpair failed and we were unable to recover it. 00:30:05.037 [2024-07-14 04:02:23.835034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.037 [2024-07-14 04:02:23.835246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.037 [2024-07-14 04:02:23.835271] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.037 qpair failed and we were unable to recover it. 
00:30:05.037 [2024-07-14 04:02:23.835437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.037 [2024-07-14 04:02:23.835616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.037 [2024-07-14 04:02:23.835640] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.037 qpair failed and we were unable to recover it. 00:30:05.037 [2024-07-14 04:02:23.835798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.037 [2024-07-14 04:02:23.835959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.037 [2024-07-14 04:02:23.835984] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.037 qpair failed and we were unable to recover it. 00:30:05.037 [2024-07-14 04:02:23.836130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.037 [2024-07-14 04:02:23.836307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.037 [2024-07-14 04:02:23.836338] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.037 qpair failed and we were unable to recover it. 00:30:05.037 [2024-07-14 04:02:23.836544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.037 [2024-07-14 04:02:23.836695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.037 [2024-07-14 04:02:23.836721] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.037 qpair failed and we were unable to recover it. 00:30:05.037 [2024-07-14 04:02:23.836877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.037 [2024-07-14 04:02:23.837038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.037 [2024-07-14 04:02:23.837063] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.037 qpair failed and we were unable to recover it. 00:30:05.037 [2024-07-14 04:02:23.837264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.037 [2024-07-14 04:02:23.837451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.037 [2024-07-14 04:02:23.837476] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.037 qpair failed and we were unable to recover it. 00:30:05.037 [2024-07-14 04:02:23.837654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.037 [2024-07-14 04:02:23.837811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.037 [2024-07-14 04:02:23.837836] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.037 qpair failed and we were unable to recover it. 
00:30:05.037 [2024-07-14 04:02:23.838038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.037 [2024-07-14 04:02:23.838196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.037 [2024-07-14 04:02:23.838220] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.037 qpair failed and we were unable to recover it. 00:30:05.037 [2024-07-14 04:02:23.838406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.037 [2024-07-14 04:02:23.838596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.037 [2024-07-14 04:02:23.838620] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.037 qpair failed and we were unable to recover it. 00:30:05.037 [2024-07-14 04:02:23.838800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.037 [2024-07-14 04:02:23.838957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.037 [2024-07-14 04:02:23.838983] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.037 qpair failed and we were unable to recover it. 00:30:05.037 [2024-07-14 04:02:23.839178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.037 [2024-07-14 04:02:23.839327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.037 [2024-07-14 04:02:23.839351] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.037 qpair failed and we were unable to recover it. 00:30:05.037 [2024-07-14 04:02:23.839500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.037 [2024-07-14 04:02:23.839646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.037 [2024-07-14 04:02:23.839670] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.037 qpair failed and we were unable to recover it. 00:30:05.037 [2024-07-14 04:02:23.839820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.037 [2024-07-14 04:02:23.839998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.037 [2024-07-14 04:02:23.840028] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.037 qpair failed and we were unable to recover it. 00:30:05.037 [2024-07-14 04:02:23.840208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.037 [2024-07-14 04:02:23.840384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.037 [2024-07-14 04:02:23.840409] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.037 qpair failed and we were unable to recover it. 
00:30:05.037 [2024-07-14 04:02:23.840588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.037 [2024-07-14 04:02:23.840769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.037 [2024-07-14 04:02:23.840794] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.037 qpair failed and we were unable to recover it. 00:30:05.037 [2024-07-14 04:02:23.840984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.037 [2024-07-14 04:02:23.841139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.037 [2024-07-14 04:02:23.841163] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.037 qpair failed and we were unable to recover it. 00:30:05.037 [2024-07-14 04:02:23.841324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.037 [2024-07-14 04:02:23.841507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.037 [2024-07-14 04:02:23.841531] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.037 qpair failed and we were unable to recover it. 00:30:05.037 [2024-07-14 04:02:23.841722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.037 [2024-07-14 04:02:23.841930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.037 [2024-07-14 04:02:23.841956] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.037 qpair failed and we were unable to recover it. 00:30:05.037 [2024-07-14 04:02:23.842104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.037 [2024-07-14 04:02:23.842245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.037 [2024-07-14 04:02:23.842270] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.037 qpair failed and we were unable to recover it. 00:30:05.037 [2024-07-14 04:02:23.842421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.037 [2024-07-14 04:02:23.842592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.037 [2024-07-14 04:02:23.842617] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.037 qpair failed and we were unable to recover it. 00:30:05.037 [2024-07-14 04:02:23.842800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.037 [2024-07-14 04:02:23.842952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.037 [2024-07-14 04:02:23.842977] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.037 qpair failed and we were unable to recover it. 
00:30:05.037 [2024-07-14 04:02:23.843129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.037 [2024-07-14 04:02:23.843296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.037 [2024-07-14 04:02:23.843321] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.037 qpair failed and we were unable to recover it. 00:30:05.037 [2024-07-14 04:02:23.843490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.037 [2024-07-14 04:02:23.843669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.037 [2024-07-14 04:02:23.843697] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.037 qpair failed and we were unable to recover it. 00:30:05.037 [2024-07-14 04:02:23.843878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.037 [2024-07-14 04:02:23.844054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.037 [2024-07-14 04:02:23.844079] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.037 qpair failed and we were unable to recover it. 00:30:05.037 [2024-07-14 04:02:23.844234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.037 [2024-07-14 04:02:23.844445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.037 [2024-07-14 04:02:23.844470] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.037 qpair failed and we were unable to recover it. 00:30:05.037 [2024-07-14 04:02:23.844628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.037 [2024-07-14 04:02:23.844810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.037 [2024-07-14 04:02:23.844836] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.037 qpair failed and we were unable to recover it. 00:30:05.037 [2024-07-14 04:02:23.845031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.037 [2024-07-14 04:02:23.845181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.037 [2024-07-14 04:02:23.845205] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.037 qpair failed and we were unable to recover it. 00:30:05.037 [2024-07-14 04:02:23.845347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.037 [2024-07-14 04:02:23.845509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.037 [2024-07-14 04:02:23.845533] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.037 qpair failed and we were unable to recover it. 
00:30:05.037 [2024-07-14 04:02:23.845714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.037 [2024-07-14 04:02:23.845899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.037 [2024-07-14 04:02:23.845925] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.037 qpair failed and we were unable to recover it. 00:30:05.037 [2024-07-14 04:02:23.846077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.037 [2024-07-14 04:02:23.846286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.037 [2024-07-14 04:02:23.846310] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.037 qpair failed and we were unable to recover it. 00:30:05.037 [2024-07-14 04:02:23.846470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.037 [2024-07-14 04:02:23.846671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.037 [2024-07-14 04:02:23.846696] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.037 qpair failed and we were unable to recover it. 00:30:05.037 [2024-07-14 04:02:23.846884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.037 [2024-07-14 04:02:23.847073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.037 [2024-07-14 04:02:23.847098] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.037 qpair failed and we were unable to recover it. 00:30:05.037 [2024-07-14 04:02:23.847276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.037 [2024-07-14 04:02:23.847478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.037 [2024-07-14 04:02:23.847503] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.037 qpair failed and we were unable to recover it. 00:30:05.037 [2024-07-14 04:02:23.847681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.037 [2024-07-14 04:02:23.847864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.037 [2024-07-14 04:02:23.847905] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.037 qpair failed and we were unable to recover it. 00:30:05.037 [2024-07-14 04:02:23.848062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.037 [2024-07-14 04:02:23.848213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.037 [2024-07-14 04:02:23.848240] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.037 qpair failed and we were unable to recover it. 
00:30:05.037 [2024-07-14 04:02:23.848413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.037 [2024-07-14 04:02:23.848568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.037 [2024-07-14 04:02:23.848593] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.037 qpair failed and we were unable to recover it. 00:30:05.037 [2024-07-14 04:02:23.848773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.037 [2024-07-14 04:02:23.848959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.037 [2024-07-14 04:02:23.848985] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.037 qpair failed and we were unable to recover it. 00:30:05.037 [2024-07-14 04:02:23.849145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.037 [2024-07-14 04:02:23.849345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.037 [2024-07-14 04:02:23.849370] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.037 qpair failed and we were unable to recover it. 00:30:05.037 [2024-07-14 04:02:23.849549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.037 [2024-07-14 04:02:23.849701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.037 [2024-07-14 04:02:23.849726] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.037 qpair failed and we were unable to recover it. 00:30:05.037 [2024-07-14 04:02:23.849909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.037 [2024-07-14 04:02:23.850062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.037 [2024-07-14 04:02:23.850087] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.037 qpair failed and we were unable to recover it. 00:30:05.037 [2024-07-14 04:02:23.850236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.037 [2024-07-14 04:02:23.850422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.037 [2024-07-14 04:02:23.850447] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.037 qpair failed and we were unable to recover it. 00:30:05.037 [2024-07-14 04:02:23.850625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.037 [2024-07-14 04:02:23.850800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.037 [2024-07-14 04:02:23.850824] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.037 qpair failed and we were unable to recover it. 
00:30:05.037 [2024-07-14 04:02:23.850994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.037 [2024-07-14 04:02:23.851183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.037 [2024-07-14 04:02:23.851207] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.037 qpair failed and we were unable to recover it. 00:30:05.037 [2024-07-14 04:02:23.851394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.037 [2024-07-14 04:02:23.851575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.037 [2024-07-14 04:02:23.851600] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.037 qpair failed and we were unable to recover it. 00:30:05.037 [2024-07-14 04:02:23.851775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.037 [2024-07-14 04:02:23.851989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.037 [2024-07-14 04:02:23.852014] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.037 qpair failed and we were unable to recover it. 00:30:05.037 [2024-07-14 04:02:23.852170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.037 [2024-07-14 04:02:23.852347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.037 [2024-07-14 04:02:23.852373] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.037 qpair failed and we were unable to recover it. 00:30:05.037 [2024-07-14 04:02:23.852562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.037 [2024-07-14 04:02:23.852704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.037 [2024-07-14 04:02:23.852730] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.037 qpair failed and we were unable to recover it. 00:30:05.037 [2024-07-14 04:02:23.852912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.037 [2024-07-14 04:02:23.853070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.037 [2024-07-14 04:02:23.853094] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.037 qpair failed and we were unable to recover it. 00:30:05.037 [2024-07-14 04:02:23.853280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.037 [2024-07-14 04:02:23.853423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.038 [2024-07-14 04:02:23.853447] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.038 qpair failed and we were unable to recover it. 
00:30:05.038 [2024-07-14 04:02:23.853624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.038 [2024-07-14 04:02:23.853825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.038 [2024-07-14 04:02:23.853849] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.038 qpair failed and we were unable to recover it. 00:30:05.038 [2024-07-14 04:02:23.854035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.038 [2024-07-14 04:02:23.854206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.038 [2024-07-14 04:02:23.854230] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.038 qpair failed and we were unable to recover it. 00:30:05.038 [2024-07-14 04:02:23.854411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.038 [2024-07-14 04:02:23.854590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.038 [2024-07-14 04:02:23.854617] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.038 qpair failed and we were unable to recover it. 00:30:05.038 [2024-07-14 04:02:23.854780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.038 [2024-07-14 04:02:23.854945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.038 [2024-07-14 04:02:23.854971] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.038 qpair failed and we were unable to recover it. 00:30:05.038 [2024-07-14 04:02:23.855133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.038 [2024-07-14 04:02:23.855290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.038 [2024-07-14 04:02:23.855314] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.038 qpair failed and we were unable to recover it. 00:30:05.038 [2024-07-14 04:02:23.855520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.038 [2024-07-14 04:02:23.855669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.038 [2024-07-14 04:02:23.855695] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.038 qpair failed and we were unable to recover it. 00:30:05.038 [2024-07-14 04:02:23.855878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.038 [2024-07-14 04:02:23.856058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.038 [2024-07-14 04:02:23.856083] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.038 qpair failed and we were unable to recover it. 
00:30:05.038 [2024-07-14 04:02:23.856265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.038 [2024-07-14 04:02:23.856442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.038 [2024-07-14 04:02:23.856468] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.038 qpair failed and we were unable to recover it. 00:30:05.038 [2024-07-14 04:02:23.856662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.038 [2024-07-14 04:02:23.856840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.038 [2024-07-14 04:02:23.856873] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.038 qpair failed and we were unable to recover it. 00:30:05.038 [2024-07-14 04:02:23.857061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.038 [2024-07-14 04:02:23.857279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.038 [2024-07-14 04:02:23.857304] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.038 qpair failed and we were unable to recover it. 00:30:05.038 [2024-07-14 04:02:23.857489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.038 [2024-07-14 04:02:23.857665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.038 [2024-07-14 04:02:23.857691] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.038 qpair failed and we were unable to recover it. 00:30:05.038 [2024-07-14 04:02:23.857910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.038 [2024-07-14 04:02:23.858088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.038 [2024-07-14 04:02:23.858112] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.038 qpair failed and we were unable to recover it. 00:30:05.038 [2024-07-14 04:02:23.858321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.038 [2024-07-14 04:02:23.858527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.038 [2024-07-14 04:02:23.858552] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.038 qpair failed and we were unable to recover it. 00:30:05.038 [2024-07-14 04:02:23.858766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.038 [2024-07-14 04:02:23.858925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.038 [2024-07-14 04:02:23.858950] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.038 qpair failed and we were unable to recover it. 
00:30:05.038 [2024-07-14 04:02:23.859110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.038 [2024-07-14 04:02:23.859287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.038 [2024-07-14 04:02:23.859312] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.038 qpair failed and we were unable to recover it. 00:30:05.038 [2024-07-14 04:02:23.859517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.038 [2024-07-14 04:02:23.859720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.038 [2024-07-14 04:02:23.859744] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.038 qpair failed and we were unable to recover it. 00:30:05.038 [2024-07-14 04:02:23.859900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.038 EAL: No free 2048 kB hugepages reported on node 1 00:30:05.038 [2024-07-14 04:02:23.860054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.038 [2024-07-14 04:02:23.860081] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.038 qpair failed and we were unable to recover it. 00:30:05.038 [2024-07-14 04:02:23.860289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.038 [2024-07-14 04:02:23.860441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.038 [2024-07-14 04:02:23.860466] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.038 qpair failed and we were unable to recover it. 00:30:05.038 [2024-07-14 04:02:23.860648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.038 [2024-07-14 04:02:23.860854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.038 [2024-07-14 04:02:23.860884] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.038 qpair failed and we were unable to recover it. 00:30:05.038 [2024-07-14 04:02:23.861092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.038 [2024-07-14 04:02:23.861269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.038 [2024-07-14 04:02:23.861293] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.038 qpair failed and we were unable to recover it. 00:30:05.038 [2024-07-14 04:02:23.861439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.038 [2024-07-14 04:02:23.861617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.038 [2024-07-14 04:02:23.861641] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.038 qpair failed and we were unable to recover it. 
00:30:05.038 [2024-07-14 04:02:23.861792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.038 [2024-07-14 04:02:23.861946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.038 [2024-07-14 04:02:23.861973] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.038 qpair failed and we were unable to recover it. 00:30:05.038 [2024-07-14 04:02:23.862125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.038 [2024-07-14 04:02:23.862331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.038 [2024-07-14 04:02:23.862356] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.038 qpair failed and we were unable to recover it. 00:30:05.038 [2024-07-14 04:02:23.862534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.038 [2024-07-14 04:02:23.862722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.038 [2024-07-14 04:02:23.862747] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.038 qpair failed and we were unable to recover it. 00:30:05.038 [2024-07-14 04:02:23.862927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.038 [2024-07-14 04:02:23.863106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.038 [2024-07-14 04:02:23.863131] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.038 qpair failed and we were unable to recover it. 00:30:05.038 [2024-07-14 04:02:23.863321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.038 [2024-07-14 04:02:23.863498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.038 [2024-07-14 04:02:23.863523] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.038 qpair failed and we were unable to recover it. 00:30:05.038 [2024-07-14 04:02:23.863677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.038 [2024-07-14 04:02:23.863863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.038 [2024-07-14 04:02:23.863894] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.038 qpair failed and we were unable to recover it. 00:30:05.038 [2024-07-14 04:02:23.864052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.038 [2024-07-14 04:02:23.864204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.038 [2024-07-14 04:02:23.864228] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.038 qpair failed and we were unable to recover it. 
00:30:05.038 [2024-07-14 04:02:23.864409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.038 [2024-07-14 04:02:23.864556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.038 [2024-07-14 04:02:23.864586] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.038 qpair failed and we were unable to recover it. 00:30:05.038 [2024-07-14 04:02:23.864740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.038 [2024-07-14 04:02:23.864920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.038 [2024-07-14 04:02:23.864946] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.038 qpair failed and we were unable to recover it. 00:30:05.038 [2024-07-14 04:02:23.865137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.038 [2024-07-14 04:02:23.865309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.038 [2024-07-14 04:02:23.865333] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.038 qpair failed and we were unable to recover it. 00:30:05.038 [2024-07-14 04:02:23.865514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.038 [2024-07-14 04:02:23.865692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.038 [2024-07-14 04:02:23.865716] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.038 qpair failed and we were unable to recover it. 00:30:05.038 [2024-07-14 04:02:23.865928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.038 [2024-07-14 04:02:23.866102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.038 [2024-07-14 04:02:23.866126] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.038 qpair failed and we were unable to recover it. 00:30:05.038 [2024-07-14 04:02:23.866312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.038 [2024-07-14 04:02:23.866485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.038 [2024-07-14 04:02:23.866509] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.038 qpair failed and we were unable to recover it. 00:30:05.038 [2024-07-14 04:02:23.866665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.038 [2024-07-14 04:02:23.866842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.038 [2024-07-14 04:02:23.866871] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.038 qpair failed and we were unable to recover it. 
00:30:05.038 [2024-07-14 04:02:23.867023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.038 [2024-07-14 04:02:23.867170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.038 [2024-07-14 04:02:23.867195] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.038 qpair failed and we were unable to recover it. 00:30:05.038 [2024-07-14 04:02:23.867348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.038 [2024-07-14 04:02:23.867524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.038 [2024-07-14 04:02:23.867549] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.038 qpair failed and we were unable to recover it. 00:30:05.038 [2024-07-14 04:02:23.867750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.038 [2024-07-14 04:02:23.867930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.038 [2024-07-14 04:02:23.867955] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.038 qpair failed and we were unable to recover it. 00:30:05.038 [2024-07-14 04:02:23.868111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.038 [2024-07-14 04:02:23.868263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.038 [2024-07-14 04:02:23.868287] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.038 qpair failed and we were unable to recover it. 00:30:05.038 [2024-07-14 04:02:23.868440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.038 [2024-07-14 04:02:23.868618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.038 [2024-07-14 04:02:23.868642] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.038 qpair failed and we were unable to recover it. 00:30:05.038 [2024-07-14 04:02:23.868791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.038 [2024-07-14 04:02:23.868989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.038 [2024-07-14 04:02:23.869014] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.038 qpair failed and we were unable to recover it. 00:30:05.038 [2024-07-14 04:02:23.869218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.038 [2024-07-14 04:02:23.869366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.038 [2024-07-14 04:02:23.869390] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.038 qpair failed and we were unable to recover it. 
00:30:05.039 [2024-07-14 04:02:23.893021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.039 [2024-07-14 04:02:23.893173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.039 [2024-07-14 04:02:23.893197] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420
00:30:05.039 qpair failed and we were unable to recover it.
00:30:05.039 [2024-07-14 04:02:23.893377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.039 [2024-07-14 04:02:23.893552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.039 [2024-07-14 04:02:23.893577] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420
00:30:05.039 qpair failed and we were unable to recover it.
00:30:05.039 [2024-07-14 04:02:23.893782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.039 [2024-07-14 04:02:23.893960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.039 [2024-07-14 04:02:23.893986] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420
00:30:05.039 qpair failed and we were unable to recover it.
00:30:05.039 [2024-07-14 04:02:23.894166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.039 [2024-07-14 04:02:23.894313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.040 [2024-07-14 04:02:23.894338] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420
00:30:05.040 qpair failed and we were unable to recover it.
00:30:05.040 [2024-07-14 04:02:23.894519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.040 [2024-07-14 04:02:23.894563] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4
00:30:05.040 [2024-07-14 04:02:23.894727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.040 [2024-07-14 04:02:23.894752] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420
00:30:05.040 qpair failed and we were unable to recover it.
00:30:05.040 [2024-07-14 04:02:23.894912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.040 [2024-07-14 04:02:23.895088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.040 [2024-07-14 04:02:23.895113] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420
00:30:05.040 qpair failed and we were unable to recover it.
00:30:05.040 [2024-07-14 04:02:23.895315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.040 [2024-07-14 04:02:23.895466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.040 [2024-07-14 04:02:23.895492] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420
00:30:05.040 qpair failed and we were unable to recover it.
00:30:05.041 [2024-07-14 04:02:23.924217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.041 [2024-07-14 04:02:23.924376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.041 [2024-07-14 04:02:23.924400] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.041 qpair failed and we were unable to recover it. 00:30:05.041 [2024-07-14 04:02:23.924604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.041 [2024-07-14 04:02:23.924770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.041 [2024-07-14 04:02:23.924794] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.041 qpair failed and we were unable to recover it. 00:30:05.041 [2024-07-14 04:02:23.924954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.041 [2024-07-14 04:02:23.925138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.041 [2024-07-14 04:02:23.925164] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.041 qpair failed and we were unable to recover it. 00:30:05.041 [2024-07-14 04:02:23.925346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.041 [2024-07-14 04:02:23.925516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.041 [2024-07-14 04:02:23.925540] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.041 qpair failed and we were unable to recover it. 00:30:05.041 [2024-07-14 04:02:23.925698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.041 [2024-07-14 04:02:23.925887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.041 [2024-07-14 04:02:23.925914] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.041 qpair failed and we were unable to recover it. 00:30:05.041 [2024-07-14 04:02:23.926081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.041 [2024-07-14 04:02:23.926286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.041 [2024-07-14 04:02:23.926310] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.041 qpair failed and we were unable to recover it. 00:30:05.041 [2024-07-14 04:02:23.926501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.041 [2024-07-14 04:02:23.926711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.041 [2024-07-14 04:02:23.926736] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.041 qpair failed and we were unable to recover it. 
00:30:05.041 [2024-07-14 04:02:23.926889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.041 [2024-07-14 04:02:23.927041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.041 [2024-07-14 04:02:23.927066] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.041 qpair failed and we were unable to recover it. 00:30:05.041 [2024-07-14 04:02:23.927243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.041 [2024-07-14 04:02:23.927414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.041 [2024-07-14 04:02:23.927438] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.041 qpair failed and we were unable to recover it. 00:30:05.041 [2024-07-14 04:02:23.927581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.041 [2024-07-14 04:02:23.927726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.041 [2024-07-14 04:02:23.927753] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.041 qpair failed and we were unable to recover it. 00:30:05.041 [2024-07-14 04:02:23.927934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.041 [2024-07-14 04:02:23.928142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.041 [2024-07-14 04:02:23.928167] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.041 qpair failed and we were unable to recover it. 00:30:05.041 [2024-07-14 04:02:23.928326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.041 [2024-07-14 04:02:23.928516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.041 [2024-07-14 04:02:23.928540] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.041 qpair failed and we were unable to recover it. 00:30:05.041 [2024-07-14 04:02:23.928716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.041 [2024-07-14 04:02:23.928875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.041 [2024-07-14 04:02:23.928900] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.041 qpair failed and we were unable to recover it. 00:30:05.041 [2024-07-14 04:02:23.929084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.041 [2024-07-14 04:02:23.929256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.041 [2024-07-14 04:02:23.929281] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.041 qpair failed and we were unable to recover it. 
00:30:05.041 [2024-07-14 04:02:23.929456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.041 [2024-07-14 04:02:23.929633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.041 [2024-07-14 04:02:23.929657] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.041 qpair failed and we were unable to recover it. 00:30:05.041 [2024-07-14 04:02:23.929806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.041 [2024-07-14 04:02:23.930009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.041 [2024-07-14 04:02:23.930035] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.041 qpair failed and we were unable to recover it. 00:30:05.041 [2024-07-14 04:02:23.930218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.041 [2024-07-14 04:02:23.930394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.041 [2024-07-14 04:02:23.930419] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.041 qpair failed and we were unable to recover it. 00:30:05.041 [2024-07-14 04:02:23.930619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.041 [2024-07-14 04:02:23.930764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.041 [2024-07-14 04:02:23.930789] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.041 qpair failed and we were unable to recover it. 00:30:05.041 [2024-07-14 04:02:23.930968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.041 [2024-07-14 04:02:23.931144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.041 [2024-07-14 04:02:23.931171] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.041 qpair failed and we were unable to recover it. 00:30:05.041 [2024-07-14 04:02:23.931321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.041 [2024-07-14 04:02:23.931500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.041 [2024-07-14 04:02:23.931525] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.041 qpair failed and we were unable to recover it. 00:30:05.041 [2024-07-14 04:02:23.931712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.041 [2024-07-14 04:02:23.931889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.041 [2024-07-14 04:02:23.931932] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.041 qpair failed and we were unable to recover it. 
00:30:05.041 [2024-07-14 04:02:23.932089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.041 [2024-07-14 04:02:23.932297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.041 [2024-07-14 04:02:23.932322] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.041 qpair failed and we were unable to recover it. 00:30:05.041 [2024-07-14 04:02:23.932472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.041 [2024-07-14 04:02:23.932619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.041 [2024-07-14 04:02:23.932645] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.041 qpair failed and we were unable to recover it. 00:30:05.041 [2024-07-14 04:02:23.932839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.041 [2024-07-14 04:02:23.933024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.041 [2024-07-14 04:02:23.933049] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.041 qpair failed and we were unable to recover it. 00:30:05.041 [2024-07-14 04:02:23.933228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.041 [2024-07-14 04:02:23.933382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.041 [2024-07-14 04:02:23.933407] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.041 qpair failed and we were unable to recover it. 00:30:05.041 [2024-07-14 04:02:23.933583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.041 [2024-07-14 04:02:23.933728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.041 [2024-07-14 04:02:23.933752] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.041 qpair failed and we were unable to recover it. 00:30:05.041 [2024-07-14 04:02:23.933963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.041 [2024-07-14 04:02:23.934112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.041 [2024-07-14 04:02:23.934137] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.041 qpair failed and we were unable to recover it. 00:30:05.041 [2024-07-14 04:02:23.934318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.041 [2024-07-14 04:02:23.934492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.041 [2024-07-14 04:02:23.934519] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.041 qpair failed and we were unable to recover it. 
00:30:05.041 [2024-07-14 04:02:23.934701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.041 [2024-07-14 04:02:23.934876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.041 [2024-07-14 04:02:23.934902] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.041 qpair failed and we were unable to recover it. 00:30:05.041 [2024-07-14 04:02:23.935052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.041 [2024-07-14 04:02:23.935224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.041 [2024-07-14 04:02:23.935248] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.041 qpair failed and we were unable to recover it. 00:30:05.041 [2024-07-14 04:02:23.935424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.041 [2024-07-14 04:02:23.935574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.041 [2024-07-14 04:02:23.935600] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.041 qpair failed and we were unable to recover it. 00:30:05.041 [2024-07-14 04:02:23.935751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.041 [2024-07-14 04:02:23.935931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.041 [2024-07-14 04:02:23.935957] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.041 qpair failed and we were unable to recover it. 00:30:05.041 [2024-07-14 04:02:23.936138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.041 [2024-07-14 04:02:23.936286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.041 [2024-07-14 04:02:23.936310] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.041 qpair failed and we were unable to recover it. 00:30:05.041 [2024-07-14 04:02:23.936514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.041 [2024-07-14 04:02:23.936660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.041 [2024-07-14 04:02:23.936687] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.041 qpair failed and we were unable to recover it. 00:30:05.041 [2024-07-14 04:02:23.936874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.041 [2024-07-14 04:02:23.937024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.041 [2024-07-14 04:02:23.937049] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.041 qpair failed and we were unable to recover it. 
00:30:05.041 [2024-07-14 04:02:23.937225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.041 [2024-07-14 04:02:23.937404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.041 [2024-07-14 04:02:23.937428] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.041 qpair failed and we were unable to recover it. 00:30:05.041 [2024-07-14 04:02:23.937636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.041 [2024-07-14 04:02:23.937780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.042 [2024-07-14 04:02:23.937804] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.042 qpair failed and we were unable to recover it. 00:30:05.042 [2024-07-14 04:02:23.937998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.042 [2024-07-14 04:02:23.938174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.042 [2024-07-14 04:02:23.938199] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.042 qpair failed and we were unable to recover it. 00:30:05.042 [2024-07-14 04:02:23.938380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.042 [2024-07-14 04:02:23.938533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.042 [2024-07-14 04:02:23.938558] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.042 qpair failed and we were unable to recover it. 00:30:05.042 [2024-07-14 04:02:23.938708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.042 [2024-07-14 04:02:23.938898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.042 [2024-07-14 04:02:23.938924] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.042 qpair failed and we were unable to recover it. 00:30:05.042 [2024-07-14 04:02:23.939107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.042 [2024-07-14 04:02:23.939256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.042 [2024-07-14 04:02:23.939282] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.042 qpair failed and we were unable to recover it. 00:30:05.042 [2024-07-14 04:02:23.939480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.042 [2024-07-14 04:02:23.939683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.042 [2024-07-14 04:02:23.939707] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.042 qpair failed and we were unable to recover it. 
00:30:05.042 [2024-07-14 04:02:23.939857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.042 [2024-07-14 04:02:23.940014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.042 [2024-07-14 04:02:23.940038] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.042 qpair failed and we were unable to recover it. 00:30:05.042 [2024-07-14 04:02:23.940218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.042 [2024-07-14 04:02:23.940370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.042 [2024-07-14 04:02:23.940396] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.042 qpair failed and we were unable to recover it. 00:30:05.042 [2024-07-14 04:02:23.940603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.042 [2024-07-14 04:02:23.940777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.042 [2024-07-14 04:02:23.940801] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.042 qpair failed and we were unable to recover it. 00:30:05.042 [2024-07-14 04:02:23.941006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.042 [2024-07-14 04:02:23.941212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.042 [2024-07-14 04:02:23.941237] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.042 qpair failed and we were unable to recover it. 00:30:05.042 [2024-07-14 04:02:23.941397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.042 [2024-07-14 04:02:23.941572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.042 [2024-07-14 04:02:23.941597] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.042 qpair failed and we were unable to recover it. 00:30:05.042 [2024-07-14 04:02:23.941806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.042 [2024-07-14 04:02:23.941962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.042 [2024-07-14 04:02:23.941988] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.042 qpair failed and we were unable to recover it. 00:30:05.042 [2024-07-14 04:02:23.942168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.042 [2024-07-14 04:02:23.942346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.042 [2024-07-14 04:02:23.942370] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.042 qpair failed and we were unable to recover it. 
00:30:05.042 [2024-07-14 04:02:23.942569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.042 [2024-07-14 04:02:23.942720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.042 [2024-07-14 04:02:23.942744] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.042 qpair failed and we were unable to recover it. 00:30:05.042 [2024-07-14 04:02:23.942909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.042 [2024-07-14 04:02:23.943071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.042 [2024-07-14 04:02:23.943098] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.042 qpair failed and we were unable to recover it. 00:30:05.042 [2024-07-14 04:02:23.943276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.042 [2024-07-14 04:02:23.943428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.042 [2024-07-14 04:02:23.943452] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.042 qpair failed and we were unable to recover it. 00:30:05.042 [2024-07-14 04:02:23.943715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.042 [2024-07-14 04:02:23.943918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.042 [2024-07-14 04:02:23.943943] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.042 qpair failed and we were unable to recover it. 00:30:05.042 [2024-07-14 04:02:23.944121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.042 [2024-07-14 04:02:23.944271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.042 [2024-07-14 04:02:23.944297] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.042 qpair failed and we were unable to recover it. 00:30:05.042 [2024-07-14 04:02:23.944474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.042 [2024-07-14 04:02:23.944651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.042 [2024-07-14 04:02:23.944675] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.042 qpair failed and we were unable to recover it. 00:30:05.042 [2024-07-14 04:02:23.944850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.042 [2024-07-14 04:02:23.945045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.042 [2024-07-14 04:02:23.945071] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.042 qpair failed and we were unable to recover it. 
00:30:05.042 [2024-07-14 04:02:23.945257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.042 [2024-07-14 04:02:23.945408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.042 [2024-07-14 04:02:23.945433] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.042 qpair failed and we were unable to recover it. 00:30:05.042 [2024-07-14 04:02:23.945604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.042 [2024-07-14 04:02:23.945809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.042 [2024-07-14 04:02:23.945834] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.042 qpair failed and we were unable to recover it. 00:30:05.042 [2024-07-14 04:02:23.946000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.042 [2024-07-14 04:02:23.946159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.042 [2024-07-14 04:02:23.946183] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.042 qpair failed and we were unable to recover it. 00:30:05.042 [2024-07-14 04:02:23.946335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.042 [2024-07-14 04:02:23.946514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.042 [2024-07-14 04:02:23.946539] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.042 qpair failed and we were unable to recover it. 00:30:05.042 [2024-07-14 04:02:23.946714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.042 [2024-07-14 04:02:23.946896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.042 [2024-07-14 04:02:23.946921] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.042 qpair failed and we were unable to recover it. 00:30:05.042 [2024-07-14 04:02:23.947099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.042 [2024-07-14 04:02:23.947244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.042 [2024-07-14 04:02:23.947268] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.042 qpair failed and we were unable to recover it. 00:30:05.042 [2024-07-14 04:02:23.947425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.042 [2024-07-14 04:02:23.947598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.042 [2024-07-14 04:02:23.947623] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.042 qpair failed and we were unable to recover it. 
00:30:05.042 [2024-07-14 04:02:23.947773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.042 [2024-07-14 04:02:23.947923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.042 [2024-07-14 04:02:23.947948] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.042 qpair failed and we were unable to recover it. 00:30:05.042 [2024-07-14 04:02:23.948105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.042 [2024-07-14 04:02:23.948249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.042 [2024-07-14 04:02:23.948273] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.042 qpair failed and we were unable to recover it. 00:30:05.042 [2024-07-14 04:02:23.948447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.042 [2024-07-14 04:02:23.948608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.042 [2024-07-14 04:02:23.948632] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.042 qpair failed and we were unable to recover it. 00:30:05.042 [2024-07-14 04:02:23.948802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.042 [2024-07-14 04:02:23.948979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.042 [2024-07-14 04:02:23.949005] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.042 qpair failed and we were unable to recover it. 00:30:05.042 [2024-07-14 04:02:23.949155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.042 [2024-07-14 04:02:23.949304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.042 [2024-07-14 04:02:23.949328] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.042 qpair failed and we were unable to recover it. 00:30:05.042 [2024-07-14 04:02:23.949513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.042 [2024-07-14 04:02:23.949694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.042 [2024-07-14 04:02:23.949718] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.042 qpair failed and we were unable to recover it. 00:30:05.042 [2024-07-14 04:02:23.949902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.042 [2024-07-14 04:02:23.950092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.042 [2024-07-14 04:02:23.950117] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.042 qpair failed and we were unable to recover it. 
00:30:05.042 [2024-07-14 04:02:23.950295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.042 [2024-07-14 04:02:23.950470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.042 [2024-07-14 04:02:23.950494] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.042 qpair failed and we were unable to recover it. 00:30:05.042 [2024-07-14 04:02:23.950722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.042 [2024-07-14 04:02:23.950889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.042 [2024-07-14 04:02:23.950915] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.042 qpair failed and we were unable to recover it. 00:30:05.042 [2024-07-14 04:02:23.951098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.042 [2024-07-14 04:02:23.951273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.042 [2024-07-14 04:02:23.951297] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.042 qpair failed and we were unable to recover it. 00:30:05.042 [2024-07-14 04:02:23.951450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.042 [2024-07-14 04:02:23.951655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.042 [2024-07-14 04:02:23.951680] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.042 qpair failed and we were unable to recover it. 00:30:05.042 [2024-07-14 04:02:23.951878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.042 [2024-07-14 04:02:23.952027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.042 [2024-07-14 04:02:23.952051] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.042 qpair failed and we were unable to recover it. 00:30:05.042 [2024-07-14 04:02:23.952208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.042 [2024-07-14 04:02:23.952391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.042 [2024-07-14 04:02:23.952415] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.042 qpair failed and we were unable to recover it. 00:30:05.042 [2024-07-14 04:02:23.952589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.042 [2024-07-14 04:02:23.952766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.042 [2024-07-14 04:02:23.952789] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.042 qpair failed and we were unable to recover it. 
00:30:05.042 [2024-07-14 04:02:23.952967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.042 [2024-07-14 04:02:23.953147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.042 [2024-07-14 04:02:23.953172] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.042 qpair failed and we were unable to recover it. 00:30:05.042 [2024-07-14 04:02:23.953330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.042 [2024-07-14 04:02:23.953512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.042 [2024-07-14 04:02:23.953539] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.042 qpair failed and we were unable to recover it. 00:30:05.042 [2024-07-14 04:02:23.953693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.042 [2024-07-14 04:02:23.953846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.042 [2024-07-14 04:02:23.953878] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.042 qpair failed and we were unable to recover it. 00:30:05.042 [2024-07-14 04:02:23.954086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.042 [2024-07-14 04:02:23.954256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.042 [2024-07-14 04:02:23.954281] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.042 qpair failed and we were unable to recover it. 00:30:05.042 [2024-07-14 04:02:23.954463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.042 [2024-07-14 04:02:23.954672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.042 [2024-07-14 04:02:23.954697] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.042 qpair failed and we were unable to recover it. 00:30:05.042 [2024-07-14 04:02:23.954877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.042 [2024-07-14 04:02:23.955028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.042 [2024-07-14 04:02:23.955054] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.042 qpair failed and we were unable to recover it. 00:30:05.042 [2024-07-14 04:02:23.955260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.042 [2024-07-14 04:02:23.955416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.042 [2024-07-14 04:02:23.955441] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.042 qpair failed and we were unable to recover it. 
00:30:05.042 [2024-07-14 04:02:23.955597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.042 [2024-07-14 04:02:23.955744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.042 [2024-07-14 04:02:23.955769] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.042 qpair failed and we were unable to recover it. 00:30:05.042 [2024-07-14 04:02:23.955940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.042 [2024-07-14 04:02:23.956097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.042 [2024-07-14 04:02:23.956121] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.042 qpair failed and we were unable to recover it. 00:30:05.042 [2024-07-14 04:02:23.956269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.042 [2024-07-14 04:02:23.956441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.042 [2024-07-14 04:02:23.956467] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.042 qpair failed and we were unable to recover it. 00:30:05.042 [2024-07-14 04:02:23.956672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.042 [2024-07-14 04:02:23.956843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.042 [2024-07-14 04:02:23.956906] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.042 qpair failed and we were unable to recover it. 00:30:05.042 [2024-07-14 04:02:23.957121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.042 [2024-07-14 04:02:23.957308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.042 [2024-07-14 04:02:23.957332] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.042 qpair failed and we were unable to recover it. 00:30:05.042 [2024-07-14 04:02:23.957477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.042 [2024-07-14 04:02:23.957685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.042 [2024-07-14 04:02:23.957710] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.042 qpair failed and we were unable to recover it. 00:30:05.042 [2024-07-14 04:02:23.957886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.042 [2024-07-14 04:02:23.958060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.042 [2024-07-14 04:02:23.958084] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.042 qpair failed and we were unable to recover it. 
00:30:05.042 [2024-07-14 04:02:23.958272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.042 [2024-07-14 04:02:23.958451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.042 [2024-07-14 04:02:23.958484] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.042 qpair failed and we were unable to recover it. 00:30:05.042 [2024-07-14 04:02:23.958646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.042 [2024-07-14 04:02:23.958851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.042 [2024-07-14 04:02:23.958882] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.042 qpair failed and we were unable to recover it. 00:30:05.042 [2024-07-14 04:02:23.959068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.042 [2024-07-14 04:02:23.959252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.042 [2024-07-14 04:02:23.959278] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.042 qpair failed and we were unable to recover it. 00:30:05.042 [2024-07-14 04:02:23.959460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.042 [2024-07-14 04:02:23.959640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.042 [2024-07-14 04:02:23.959664] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.042 qpair failed and we were unable to recover it. 00:30:05.042 [2024-07-14 04:02:23.959840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.043 [2024-07-14 04:02:23.960029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.043 [2024-07-14 04:02:23.960056] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.043 qpair failed and we were unable to recover it. 00:30:05.043 [2024-07-14 04:02:23.960242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.043 [2024-07-14 04:02:23.960452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.043 [2024-07-14 04:02:23.960476] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.043 qpair failed and we were unable to recover it. 00:30:05.316 [2024-07-14 04:02:23.960654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.316 [2024-07-14 04:02:23.960833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.316 [2024-07-14 04:02:23.960859] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.316 qpair failed and we were unable to recover it. 
00:30:05.316 [2024-07-14 04:02:23.961047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.316 [2024-07-14 04:02:23.961233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.316 [2024-07-14 04:02:23.961258] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.316 qpair failed and we were unable to recover it. 00:30:05.316 [2024-07-14 04:02:23.961409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.316 [2024-07-14 04:02:23.961622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.316 [2024-07-14 04:02:23.961647] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.316 qpair failed and we were unable to recover it. 00:30:05.316 [2024-07-14 04:02:23.961827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.316 [2024-07-14 04:02:23.962003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.316 [2024-07-14 04:02:23.962028] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.316 qpair failed and we were unable to recover it. 00:30:05.316 [2024-07-14 04:02:23.962175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.316 [2024-07-14 04:02:23.962355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.316 [2024-07-14 04:02:23.962380] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.316 qpair failed and we were unable to recover it. 00:30:05.316 [2024-07-14 04:02:23.962585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.316 [2024-07-14 04:02:23.962732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.316 [2024-07-14 04:02:23.962757] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.316 qpair failed and we were unable to recover it. 00:30:05.316 [2024-07-14 04:02:23.962938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.316 [2024-07-14 04:02:23.963119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.316 [2024-07-14 04:02:23.963143] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.316 qpair failed and we were unable to recover it. 00:30:05.316 [2024-07-14 04:02:23.963345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.316 [2024-07-14 04:02:23.963505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.316 [2024-07-14 04:02:23.963529] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.316 qpair failed and we were unable to recover it. 
00:30:05.316 [2024-07-14 04:02:23.963709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.316 [2024-07-14 04:02:23.963890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.316 [2024-07-14 04:02:23.963914] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.316 qpair failed and we were unable to recover it. 00:30:05.316 [2024-07-14 04:02:23.964095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.316 [2024-07-14 04:02:23.964252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.316 [2024-07-14 04:02:23.964279] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.316 qpair failed and we were unable to recover it. 00:30:05.316 [2024-07-14 04:02:23.964434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.316 [2024-07-14 04:02:23.964638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.316 [2024-07-14 04:02:23.964662] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.316 qpair failed and we were unable to recover it. 00:30:05.316 [2024-07-14 04:02:23.964840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.316 [2024-07-14 04:02:23.965026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.316 [2024-07-14 04:02:23.965052] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.316 qpair failed and we were unable to recover it. 00:30:05.316 [2024-07-14 04:02:23.965233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.317 [2024-07-14 04:02:23.965409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.317 [2024-07-14 04:02:23.965434] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.317 qpair failed and we were unable to recover it. 00:30:05.317 [2024-07-14 04:02:23.965613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.317 [2024-07-14 04:02:23.965791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.317 [2024-07-14 04:02:23.965816] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.317 qpair failed and we were unable to recover it. 00:30:05.317 [2024-07-14 04:02:23.965993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.317 [2024-07-14 04:02:23.966172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.317 [2024-07-14 04:02:23.966196] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.317 qpair failed and we were unable to recover it. 
00:30:05.317 [2024-07-14 04:02:23.966400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.317 [2024-07-14 04:02:23.966550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.317 [2024-07-14 04:02:23.966574] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.317 qpair failed and we were unable to recover it. 00:30:05.317 [2024-07-14 04:02:23.966781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.317 [2024-07-14 04:02:23.966958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.317 [2024-07-14 04:02:23.966983] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.317 qpair failed and we were unable to recover it. 00:30:05.317 [2024-07-14 04:02:23.967165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.317 [2024-07-14 04:02:23.967320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.317 [2024-07-14 04:02:23.967348] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.317 qpair failed and we were unable to recover it. 00:30:05.317 [2024-07-14 04:02:23.967554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.317 [2024-07-14 04:02:23.967726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.317 [2024-07-14 04:02:23.967750] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.317 qpair failed and we were unable to recover it. 00:30:05.317 [2024-07-14 04:02:23.967928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.317 [2024-07-14 04:02:23.968109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.317 [2024-07-14 04:02:23.968134] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.317 qpair failed and we were unable to recover it. 00:30:05.317 [2024-07-14 04:02:23.968289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.317 [2024-07-14 04:02:23.968471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.317 [2024-07-14 04:02:23.968495] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.317 qpair failed and we were unable to recover it. 00:30:05.317 [2024-07-14 04:02:23.968674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.317 [2024-07-14 04:02:23.968877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.317 [2024-07-14 04:02:23.968904] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.317 qpair failed and we were unable to recover it. 
00:30:05.317 [2024-07-14 04:02:23.969065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.317 [2024-07-14 04:02:23.969243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.317 [2024-07-14 04:02:23.969267] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.317 qpair failed and we were unable to recover it. 00:30:05.317 [2024-07-14 04:02:23.969468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.317 [2024-07-14 04:02:23.969672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.317 [2024-07-14 04:02:23.969697] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.317 qpair failed and we were unable to recover it. 00:30:05.317 [2024-07-14 04:02:23.969847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.317 [2024-07-14 04:02:23.970063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.317 [2024-07-14 04:02:23.970089] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.317 qpair failed and we were unable to recover it. 00:30:05.317 [2024-07-14 04:02:23.970302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.317 [2024-07-14 04:02:23.970506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.317 [2024-07-14 04:02:23.970530] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.317 qpair failed and we were unable to recover it. 00:30:05.317 [2024-07-14 04:02:23.970711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.317 [2024-07-14 04:02:23.970916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.317 [2024-07-14 04:02:23.970942] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.317 qpair failed and we were unable to recover it. 00:30:05.317 [2024-07-14 04:02:23.971084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.317 [2024-07-14 04:02:23.971269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.317 [2024-07-14 04:02:23.971293] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.317 qpair failed and we were unable to recover it. 00:30:05.317 [2024-07-14 04:02:23.971496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.317 [2024-07-14 04:02:23.971701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.317 [2024-07-14 04:02:23.971726] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.317 qpair failed and we were unable to recover it. 
00:30:05.317 [2024-07-14 04:02:23.971937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.317 [2024-07-14 04:02:23.972122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.317 [2024-07-14 04:02:23.972152] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.317 qpair failed and we were unable to recover it. 00:30:05.317 [2024-07-14 04:02:23.972328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.317 [2024-07-14 04:02:23.972482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.317 [2024-07-14 04:02:23.972509] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.317 qpair failed and we were unable to recover it. 00:30:05.317 [2024-07-14 04:02:23.972662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.317 [2024-07-14 04:02:23.972834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.317 [2024-07-14 04:02:23.972860] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.317 qpair failed and we were unable to recover it. 00:30:05.317 [2024-07-14 04:02:23.973072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.317 [2024-07-14 04:02:23.973242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.317 [2024-07-14 04:02:23.973267] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.317 qpair failed and we were unable to recover it. 00:30:05.317 [2024-07-14 04:02:23.973419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.317 [2024-07-14 04:02:23.973578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.317 [2024-07-14 04:02:23.973603] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.317 qpair failed and we were unable to recover it. 00:30:05.317 [2024-07-14 04:02:23.973756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.317 [2024-07-14 04:02:23.973931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.317 [2024-07-14 04:02:23.973957] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.317 qpair failed and we were unable to recover it. 00:30:05.317 [2024-07-14 04:02:23.974132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.317 [2024-07-14 04:02:23.974307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.317 [2024-07-14 04:02:23.974331] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.317 qpair failed and we were unable to recover it. 
00:30:05.317 [2024-07-14 04:02:23.974511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.317 [2024-07-14 04:02:23.974716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.317 [2024-07-14 04:02:23.974741] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.317 qpair failed and we were unable to recover it. 00:30:05.317 [2024-07-14 04:02:23.974922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.317 [2024-07-14 04:02:23.975097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.317 [2024-07-14 04:02:23.975121] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.317 qpair failed and we were unable to recover it. 00:30:05.317 [2024-07-14 04:02:23.975303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.317 [2024-07-14 04:02:23.975451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.317 [2024-07-14 04:02:23.975477] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.317 qpair failed and we were unable to recover it. 00:30:05.317 [2024-07-14 04:02:23.975633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.317 [2024-07-14 04:02:23.975809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.317 [2024-07-14 04:02:23.975839] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.317 qpair failed and we were unable to recover it. 00:30:05.317 [2024-07-14 04:02:23.976059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.317 [2024-07-14 04:02:23.976207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.317 [2024-07-14 04:02:23.976232] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.317 qpair failed and we were unable to recover it. 00:30:05.317 [2024-07-14 04:02:23.976410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.317 [2024-07-14 04:02:23.976561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.317 [2024-07-14 04:02:23.976585] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.317 qpair failed and we were unable to recover it. 00:30:05.317 [2024-07-14 04:02:23.976766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.317 [2024-07-14 04:02:23.976917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.317 [2024-07-14 04:02:23.976943] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.317 qpair failed and we were unable to recover it. 
00:30:05.317 [2024-07-14 04:02:23.977094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.317 [2024-07-14 04:02:23.977271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.317 [2024-07-14 04:02:23.977296] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.317 qpair failed and we were unable to recover it. 00:30:05.317 [2024-07-14 04:02:23.977495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.317 [2024-07-14 04:02:23.977666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.317 [2024-07-14 04:02:23.977689] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.317 qpair failed and we were unable to recover it. 00:30:05.317 [2024-07-14 04:02:23.977844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.317 [2024-07-14 04:02:23.978005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.317 [2024-07-14 04:02:23.978030] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.317 qpair failed and we were unable to recover it. 00:30:05.317 [2024-07-14 04:02:23.978208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.317 [2024-07-14 04:02:23.978360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.317 [2024-07-14 04:02:23.978385] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.317 qpair failed and we were unable to recover it. 00:30:05.317 [2024-07-14 04:02:23.978535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.317 [2024-07-14 04:02:23.978712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.317 [2024-07-14 04:02:23.978738] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.317 qpair failed and we were unable to recover it. 00:30:05.317 [2024-07-14 04:02:23.978921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.317 [2024-07-14 04:02:23.979072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.317 [2024-07-14 04:02:23.979097] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.317 qpair failed and we were unable to recover it. 00:30:05.317 [2024-07-14 04:02:23.979250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.317 [2024-07-14 04:02:23.979402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.317 [2024-07-14 04:02:23.979431] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.317 qpair failed and we were unable to recover it. 
00:30:05.317 [2024-07-14 04:02:23.979582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.317 [2024-07-14 04:02:23.979757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.317 [2024-07-14 04:02:23.979781] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.317 qpair failed and we were unable to recover it. 00:30:05.317 [2024-07-14 04:02:23.979932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.317 [2024-07-14 04:02:23.980086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.317 [2024-07-14 04:02:23.980111] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.317 qpair failed and we were unable to recover it. 00:30:05.317 [2024-07-14 04:02:23.980264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.317 [2024-07-14 04:02:23.980436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.317 [2024-07-14 04:02:23.980461] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.317 qpair failed and we were unable to recover it. 00:30:05.317 [2024-07-14 04:02:23.980611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.317 [2024-07-14 04:02:23.980759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.317 [2024-07-14 04:02:23.980785] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.317 qpair failed and we were unable to recover it. 00:30:05.317 [2024-07-14 04:02:23.980937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.317 [2024-07-14 04:02:23.981114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.317 [2024-07-14 04:02:23.981139] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.317 qpair failed and we were unable to recover it. 00:30:05.317 [2024-07-14 04:02:23.981286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.317 [2024-07-14 04:02:23.981466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.317 [2024-07-14 04:02:23.981491] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.317 qpair failed and we were unable to recover it. 00:30:05.317 [2024-07-14 04:02:23.981644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.317 [2024-07-14 04:02:23.981804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.317 [2024-07-14 04:02:23.981828] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.317 qpair failed and we were unable to recover it. 
00:30:05.317 [2024-07-14 04:02:23.982015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.317 [2024-07-14 04:02:23.982192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.317 [2024-07-14 04:02:23.982216] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.317 qpair failed and we were unable to recover it. 00:30:05.317 [2024-07-14 04:02:23.982371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.317 [2024-07-14 04:02:23.982540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.317 [2024-07-14 04:02:23.982565] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.317 qpair failed and we were unable to recover it. 00:30:05.317 [2024-07-14 04:02:23.982744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.317 [2024-07-14 04:02:23.982953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.317 [2024-07-14 04:02:23.982982] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.317 qpair failed and we were unable to recover it. 00:30:05.317 [2024-07-14 04:02:23.983132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.317 [2024-07-14 04:02:23.983316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.317 [2024-07-14 04:02:23.983340] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.317 qpair failed and we were unable to recover it. 00:30:05.317 [2024-07-14 04:02:23.983520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.317 [2024-07-14 04:02:23.983694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.317 [2024-07-14 04:02:23.983719] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.317 qpair failed and we were unable to recover it. 00:30:05.317 [2024-07-14 04:02:23.983899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.317 [2024-07-14 04:02:23.984088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.317 [2024-07-14 04:02:23.984115] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.317 qpair failed and we were unable to recover it. 00:30:05.317 [2024-07-14 04:02:23.984284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.317 [2024-07-14 04:02:23.984428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.317 [2024-07-14 04:02:23.984452] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.317 qpair failed and we were unable to recover it. 
00:30:05.317 [2024-07-14 04:02:23.984612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.317 [2024-07-14 04:02:23.984747] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:30:05.317 [2024-07-14 04:02:23.984771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.317 [2024-07-14 04:02:23.984795] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.317 qpair failed and we were unable to recover it. 00:30:05.317 [2024-07-14 04:02:23.984886] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:05.317 [2024-07-14 04:02:23.984906] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:05.317 [2024-07-14 04:02:23.984918] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:05.317 [2024-07-14 04:02:23.984945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.317 [2024-07-14 04:02:23.984987] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:30:05.317 [2024-07-14 04:02:23.985104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.317 [2024-07-14 04:02:23.985029] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:30:05.317 [2024-07-14 04:02:23.985128] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.317 qpair failed and we were unable to recover it. 00:30:05.317 [2024-07-14 04:02:23.985131] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:30:05.317 [2024-07-14 04:02:23.985135] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:30:05.317 [2024-07-14 04:02:23.985308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.317 [2024-07-14 04:02:23.985475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.317 [2024-07-14 04:02:23.985499] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.317 qpair failed and we were unable to recover it. 00:30:05.317 [2024-07-14 04:02:23.985681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.317 [2024-07-14 04:02:23.985829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.317 [2024-07-14 04:02:23.985858] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.317 qpair failed and we were unable to recover it. 00:30:05.317 [2024-07-14 04:02:23.986049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.317 [2024-07-14 04:02:23.986227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.317 [2024-07-14 04:02:23.986251] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.317 qpair failed and we were unable to recover it. 
00:30:05.317 [2024-07-14 04:02:23.986440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.317 [2024-07-14 04:02:23.986610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.317 [2024-07-14 04:02:23.986634] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.317 qpair failed and we were unable to recover it. 00:30:05.318 [2024-07-14 04:02:23.986798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.318 [2024-07-14 04:02:23.986963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.318 [2024-07-14 04:02:23.986988] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.318 qpair failed and we were unable to recover it. 00:30:05.318 [2024-07-14 04:02:23.987165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.318 [2024-07-14 04:02:23.987314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.318 [2024-07-14 04:02:23.987337] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.318 qpair failed and we were unable to recover it. 00:30:05.318 [2024-07-14 04:02:23.987519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.318 [2024-07-14 04:02:23.987674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.318 [2024-07-14 04:02:23.987699] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.318 qpair failed and we were unable to recover it. 00:30:05.318 [2024-07-14 04:02:23.987870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.318 [2024-07-14 04:02:23.988013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.318 [2024-07-14 04:02:23.988038] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.318 qpair failed and we were unable to recover it. 00:30:05.318 [2024-07-14 04:02:23.988196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.318 [2024-07-14 04:02:23.988373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.318 [2024-07-14 04:02:23.988397] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.318 qpair failed and we were unable to recover it. 00:30:05.318 [2024-07-14 04:02:23.988579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.318 [2024-07-14 04:02:23.988723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.318 [2024-07-14 04:02:23.988748] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.318 qpair failed and we were unable to recover it. 
00:30:05.318 [2024-07-14 04:02:23.988927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.318 [2024-07-14 04:02:23.989107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.318 [2024-07-14 04:02:23.989132] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.318 qpair failed and we were unable to recover it. 00:30:05.318 [2024-07-14 04:02:23.989285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.318 [2024-07-14 04:02:23.989442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.318 [2024-07-14 04:02:23.989471] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.318 qpair failed and we were unable to recover it. 00:30:05.318 [2024-07-14 04:02:23.989651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.318 [2024-07-14 04:02:23.989795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.318 [2024-07-14 04:02:23.989820] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.318 qpair failed and we were unable to recover it. 00:30:05.318 [2024-07-14 04:02:23.989980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.318 [2024-07-14 04:02:23.990283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.318 [2024-07-14 04:02:23.990308] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.318 qpair failed and we were unable to recover it. 00:30:05.318 [2024-07-14 04:02:23.990482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.318 [2024-07-14 04:02:23.990627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.318 [2024-07-14 04:02:23.990654] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.318 qpair failed and we were unable to recover it. 00:30:05.318 [2024-07-14 04:02:23.990829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.318 [2024-07-14 04:02:23.991010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.318 [2024-07-14 04:02:23.991037] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.318 qpair failed and we were unable to recover it. 00:30:05.318 [2024-07-14 04:02:23.991190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.318 [2024-07-14 04:02:23.991330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.318 [2024-07-14 04:02:23.991354] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.318 qpair failed and we were unable to recover it. 
00:30:05.318 [2024-07-14 04:02:23.991556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.318 [2024-07-14 04:02:23.991702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.318 [2024-07-14 04:02:23.991727] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.318 qpair failed and we were unable to recover it. 00:30:05.318 [2024-07-14 04:02:23.991888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.318 [2024-07-14 04:02:23.992044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.318 [2024-07-14 04:02:23.992069] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.318 qpair failed and we were unable to recover it. 00:30:05.318 [2024-07-14 04:02:23.992223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.318 [2024-07-14 04:02:23.992400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.318 [2024-07-14 04:02:23.992425] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.318 qpair failed and we were unable to recover it. 00:30:05.318 [2024-07-14 04:02:23.992605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.318 [2024-07-14 04:02:23.992782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.318 [2024-07-14 04:02:23.992807] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.318 qpair failed and we were unable to recover it. 00:30:05.318 [2024-07-14 04:02:23.992974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.318 [2024-07-14 04:02:23.993132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.318 [2024-07-14 04:02:23.993156] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.318 qpair failed and we were unable to recover it. 00:30:05.318 [2024-07-14 04:02:23.993368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.318 [2024-07-14 04:02:23.993516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.318 [2024-07-14 04:02:23.993540] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.318 qpair failed and we were unable to recover it. 00:30:05.318 [2024-07-14 04:02:23.993686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.318 [2024-07-14 04:02:23.993891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.318 [2024-07-14 04:02:23.993917] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.318 qpair failed and we were unable to recover it. 
00:30:05.318 [2024-07-14 04:02:23.994182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.318 [2024-07-14 04:02:23.994323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.318 [2024-07-14 04:02:23.994348] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.318 qpair failed and we were unable to recover it. 00:30:05.318 [2024-07-14 04:02:23.994532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.318 [2024-07-14 04:02:23.994676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.318 [2024-07-14 04:02:23.994702] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.318 qpair failed and we were unable to recover it. 00:30:05.318 [2024-07-14 04:02:23.994884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.318 [2024-07-14 04:02:23.995065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.318 [2024-07-14 04:02:23.995090] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.318 qpair failed and we were unable to recover it. 00:30:05.318 [2024-07-14 04:02:23.995237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.318 [2024-07-14 04:02:23.995385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.318 [2024-07-14 04:02:23.995409] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.318 qpair failed and we were unable to recover it. 00:30:05.318 [2024-07-14 04:02:23.995572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.318 [2024-07-14 04:02:23.995770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.318 [2024-07-14 04:02:23.995794] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.318 qpair failed and we were unable to recover it. 00:30:05.318 [2024-07-14 04:02:23.995950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.318 [2024-07-14 04:02:23.996125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.318 [2024-07-14 04:02:23.996150] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.318 qpair failed and we were unable to recover it. 00:30:05.318 [2024-07-14 04:02:23.996290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.318 [2024-07-14 04:02:23.996440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.318 [2024-07-14 04:02:23.996465] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.318 qpair failed and we were unable to recover it. 
00:30:05.318 [2024-07-14 04:02:23.996638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.318 [2024-07-14 04:02:23.996787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.318 [2024-07-14 04:02:23.996812] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.318 qpair failed and we were unable to recover it. 00:30:05.318 [2024-07-14 04:02:23.996979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.318 [2024-07-14 04:02:23.997123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.318 [2024-07-14 04:02:23.997148] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.318 qpair failed and we were unable to recover it. 00:30:05.318 [2024-07-14 04:02:23.997328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.318 [2024-07-14 04:02:23.997533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.318 [2024-07-14 04:02:23.997557] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.318 qpair failed and we were unable to recover it. 00:30:05.318 [2024-07-14 04:02:23.997698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.318 [2024-07-14 04:02:23.997835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.318 [2024-07-14 04:02:23.997859] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.318 qpair failed and we were unable to recover it. 00:30:05.318 [2024-07-14 04:02:23.998039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.318 [2024-07-14 04:02:23.998202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.318 [2024-07-14 04:02:23.998229] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.318 qpair failed and we were unable to recover it. 00:30:05.318 [2024-07-14 04:02:23.998400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.318 [2024-07-14 04:02:23.998583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.318 [2024-07-14 04:02:23.998608] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.318 qpair failed and we were unable to recover it. 00:30:05.318 [2024-07-14 04:02:23.998750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.318 [2024-07-14 04:02:23.998915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.318 [2024-07-14 04:02:23.998941] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.318 qpair failed and we were unable to recover it. 
00:30:05.318 [2024-07-14 04:02:23.999142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.318 [2024-07-14 04:02:23.999320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.318 [2024-07-14 04:02:23.999344] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.318 qpair failed and we were unable to recover it. 00:30:05.318 [2024-07-14 04:02:23.999520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.318 [2024-07-14 04:02:23.999700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.318 [2024-07-14 04:02:23.999726] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.318 qpair failed and we were unable to recover it. 00:30:05.318 [2024-07-14 04:02:23.999909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.318 [2024-07-14 04:02:24.000064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.318 [2024-07-14 04:02:24.000091] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.318 qpair failed and we were unable to recover it. 00:30:05.318 [2024-07-14 04:02:24.000240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.318 [2024-07-14 04:02:24.000387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.318 [2024-07-14 04:02:24.000412] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.318 qpair failed and we were unable to recover it. 00:30:05.318 [2024-07-14 04:02:24.000589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.318 [2024-07-14 04:02:24.000760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.318 [2024-07-14 04:02:24.000784] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.318 qpair failed and we were unable to recover it. 00:30:05.318 [2024-07-14 04:02:24.000935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.318 [2024-07-14 04:02:24.001080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.318 [2024-07-14 04:02:24.001104] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.318 qpair failed and we were unable to recover it. 00:30:05.318 [2024-07-14 04:02:24.001245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.318 [2024-07-14 04:02:24.001386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.318 [2024-07-14 04:02:24.001411] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.318 qpair failed and we were unable to recover it. 
00:30:05.318 [2024-07-14 04:02:24.001557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.318 [2024-07-14 04:02:24.001737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.318 [2024-07-14 04:02:24.001762] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.318 qpair failed and we were unable to recover it. 00:30:05.318 [2024-07-14 04:02:24.001946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.318 [2024-07-14 04:02:24.002101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.318 [2024-07-14 04:02:24.002126] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.318 qpair failed and we were unable to recover it. 00:30:05.318 [2024-07-14 04:02:24.002289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.318 [2024-07-14 04:02:24.002434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.318 [2024-07-14 04:02:24.002459] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.318 qpair failed and we were unable to recover it. 00:30:05.318 [2024-07-14 04:02:24.002605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.318 [2024-07-14 04:02:24.002743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.318 [2024-07-14 04:02:24.002768] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.318 qpair failed and we were unable to recover it. 00:30:05.318 [2024-07-14 04:02:24.002961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.318 [2024-07-14 04:02:24.003116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.318 [2024-07-14 04:02:24.003140] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.318 qpair failed and we were unable to recover it. 00:30:05.318 [2024-07-14 04:02:24.003427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.318 [2024-07-14 04:02:24.003580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.318 [2024-07-14 04:02:24.003604] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.318 qpair failed and we were unable to recover it. 00:30:05.318 [2024-07-14 04:02:24.003769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.318 [2024-07-14 04:02:24.003937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.318 [2024-07-14 04:02:24.003962] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.318 qpair failed and we were unable to recover it. 
00:30:05.318 [2024-07-14 04:02:24.004142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.318 [2024-07-14 04:02:24.004282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.318 [2024-07-14 04:02:24.004307] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.318 qpair failed and we were unable to recover it. 00:30:05.318 [2024-07-14 04:02:24.004461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.318 [2024-07-14 04:02:24.004637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.318 [2024-07-14 04:02:24.004662] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.318 qpair failed and we were unable to recover it. 00:30:05.318 [2024-07-14 04:02:24.004906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.318 [2024-07-14 04:02:24.005060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.318 [2024-07-14 04:02:24.005086] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.318 qpair failed and we were unable to recover it. 00:30:05.318 [2024-07-14 04:02:24.005245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.318 [2024-07-14 04:02:24.005398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.318 [2024-07-14 04:02:24.005422] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.318 qpair failed and we were unable to recover it. 00:30:05.318 [2024-07-14 04:02:24.005579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.318 [2024-07-14 04:02:24.005726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.318 [2024-07-14 04:02:24.005751] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.318 qpair failed and we were unable to recover it. 00:30:05.318 [2024-07-14 04:02:24.005895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.318 [2024-07-14 04:02:24.006076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.318 [2024-07-14 04:02:24.006101] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.318 qpair failed and we were unable to recover it. 00:30:05.318 [2024-07-14 04:02:24.006262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.318 [2024-07-14 04:02:24.006407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.318 [2024-07-14 04:02:24.006430] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.318 qpair failed and we were unable to recover it. 
00:30:05.318 [2024-07-14 04:02:24.006618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.319 [2024-07-14 04:02:24.006763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.319 [2024-07-14 04:02:24.006788] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.319 qpair failed and we were unable to recover it. 00:30:05.319 [2024-07-14 04:02:24.006952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.319 [2024-07-14 04:02:24.007104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.319 [2024-07-14 04:02:24.007128] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.319 qpair failed and we were unable to recover it. 00:30:05.319 [2024-07-14 04:02:24.007312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.319 [2024-07-14 04:02:24.007509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.319 [2024-07-14 04:02:24.007533] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.319 qpair failed and we were unable to recover it. 00:30:05.319 [2024-07-14 04:02:24.007709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.319 [2024-07-14 04:02:24.007923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.319 [2024-07-14 04:02:24.007949] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.319 qpair failed and we were unable to recover it. 00:30:05.319 [2024-07-14 04:02:24.008107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.319 [2024-07-14 04:02:24.008274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.319 [2024-07-14 04:02:24.008299] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.319 qpair failed and we were unable to recover it. 00:30:05.319 [2024-07-14 04:02:24.008479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.319 [2024-07-14 04:02:24.008669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.319 [2024-07-14 04:02:24.008692] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.319 qpair failed and we were unable to recover it. 00:30:05.319 [2024-07-14 04:02:24.008852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.319 [2024-07-14 04:02:24.009018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.319 [2024-07-14 04:02:24.009043] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.319 qpair failed and we were unable to recover it. 
00:30:05.319 [2024-07-14 04:02:24.009200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.319 [2024-07-14 04:02:24.009352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.319 [2024-07-14 04:02:24.009377] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.319 qpair failed and we were unable to recover it. 00:30:05.319 [2024-07-14 04:02:24.009530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.319 [2024-07-14 04:02:24.009677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.319 [2024-07-14 04:02:24.009703] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.319 qpair failed and we were unable to recover it. 00:30:05.319 [2024-07-14 04:02:24.009960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.319 [2024-07-14 04:02:24.010110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.319 [2024-07-14 04:02:24.010134] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.319 qpair failed and we were unable to recover it. 00:30:05.319 [2024-07-14 04:02:24.010314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.319 [2024-07-14 04:02:24.010464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.319 [2024-07-14 04:02:24.010487] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.319 qpair failed and we were unable to recover it. 00:30:05.319 [2024-07-14 04:02:24.010664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.319 [2024-07-14 04:02:24.010808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.319 [2024-07-14 04:02:24.010832] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.319 qpair failed and we were unable to recover it. 00:30:05.319 [2024-07-14 04:02:24.010986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.319 [2024-07-14 04:02:24.011123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.319 [2024-07-14 04:02:24.011147] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.319 qpair failed and we were unable to recover it. 00:30:05.319 [2024-07-14 04:02:24.011332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.319 [2024-07-14 04:02:24.011509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.319 [2024-07-14 04:02:24.011532] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.319 qpair failed and we were unable to recover it. 
00:30:05.319 [2024-07-14 04:02:24.011839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.319 [2024-07-14 04:02:24.012021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.319 [2024-07-14 04:02:24.012047] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.319 qpair failed and we were unable to recover it. 00:30:05.319 [2024-07-14 04:02:24.012234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.319 [2024-07-14 04:02:24.012410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.319 [2024-07-14 04:02:24.012434] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.319 qpair failed and we were unable to recover it. 00:30:05.319 [2024-07-14 04:02:24.012607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.319 [2024-07-14 04:02:24.012758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.319 [2024-07-14 04:02:24.012783] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.319 qpair failed and we were unable to recover it. 00:30:05.319 [2024-07-14 04:02:24.012943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.319 [2024-07-14 04:02:24.013139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.319 [2024-07-14 04:02:24.013168] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.319 qpair failed and we were unable to recover it. 00:30:05.319 [2024-07-14 04:02:24.013350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.319 [2024-07-14 04:02:24.013499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.319 [2024-07-14 04:02:24.013522] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.319 qpair failed and we were unable to recover it. 00:30:05.319 [2024-07-14 04:02:24.013667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.319 [2024-07-14 04:02:24.013945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.319 [2024-07-14 04:02:24.013971] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.319 qpair failed and we were unable to recover it. 00:30:05.319 [2024-07-14 04:02:24.014142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.319 [2024-07-14 04:02:24.014433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.319 [2024-07-14 04:02:24.014458] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.319 qpair failed and we were unable to recover it. 
00:30:05.319 [2024-07-14 04:02:24.014642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.319 [2024-07-14 04:02:24.014820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.319 [2024-07-14 04:02:24.014843] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.319 qpair failed and we were unable to recover it. 00:30:05.319 [2024-07-14 04:02:24.015035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.319 [2024-07-14 04:02:24.015188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.319 [2024-07-14 04:02:24.015212] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.319 qpair failed and we were unable to recover it. 00:30:05.319 [2024-07-14 04:02:24.015453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.319 [2024-07-14 04:02:24.015633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.319 [2024-07-14 04:02:24.015657] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.319 qpair failed and we were unable to recover it. 00:30:05.319 [2024-07-14 04:02:24.015813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.319 [2024-07-14 04:02:24.015965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.319 [2024-07-14 04:02:24.015990] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.319 qpair failed and we were unable to recover it. 00:30:05.319 [2024-07-14 04:02:24.016168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.319 [2024-07-14 04:02:24.016440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.319 [2024-07-14 04:02:24.016466] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.319 qpair failed and we were unable to recover it. 00:30:05.319 [2024-07-14 04:02:24.016632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.319 [2024-07-14 04:02:24.016822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.319 [2024-07-14 04:02:24.016846] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.319 qpair failed and we were unable to recover it. 00:30:05.319 [2024-07-14 04:02:24.017035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.319 [2024-07-14 04:02:24.017220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.319 [2024-07-14 04:02:24.017245] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.319 qpair failed and we were unable to recover it. 
00:30:05.319 [2024-07-14 04:02:24.017398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.319 [2024-07-14 04:02:24.017549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.319 [2024-07-14 04:02:24.017573] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.319 qpair failed and we were unable to recover it. 00:30:05.319 [2024-07-14 04:02:24.017738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.319 [2024-07-14 04:02:24.017914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.319 [2024-07-14 04:02:24.017940] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.319 qpair failed and we were unable to recover it. 00:30:05.319 [2024-07-14 04:02:24.018086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.319 [2024-07-14 04:02:24.018246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.319 [2024-07-14 04:02:24.018270] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.319 qpair failed and we were unable to recover it. 00:30:05.319 [2024-07-14 04:02:24.018420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.319 [2024-07-14 04:02:24.018581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.319 [2024-07-14 04:02:24.018607] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.319 qpair failed and we were unable to recover it. 00:30:05.319 [2024-07-14 04:02:24.018769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.319 [2024-07-14 04:02:24.018919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.319 [2024-07-14 04:02:24.018944] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.319 qpair failed and we were unable to recover it. 00:30:05.319 [2024-07-14 04:02:24.019102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.319 [2024-07-14 04:02:24.019257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.319 [2024-07-14 04:02:24.019282] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.319 qpair failed and we were unable to recover it. 00:30:05.319 [2024-07-14 04:02:24.019457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.319 [2024-07-14 04:02:24.019611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.319 [2024-07-14 04:02:24.019636] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.319 qpair failed and we were unable to recover it. 
00:30:05.319 [2024-07-14 04:02:24.019791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.319 [2024-07-14 04:02:24.019956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.319 [2024-07-14 04:02:24.019980] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.319 qpair failed and we were unable to recover it. 00:30:05.319 [2024-07-14 04:02:24.020128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.319 [2024-07-14 04:02:24.020303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.319 [2024-07-14 04:02:24.020327] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.319 qpair failed and we were unable to recover it. 00:30:05.319 [2024-07-14 04:02:24.020479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.319 [2024-07-14 04:02:24.020645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.319 [2024-07-14 04:02:24.020669] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.319 qpair failed and we were unable to recover it. 00:30:05.319 [2024-07-14 04:02:24.020849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.319 [2024-07-14 04:02:24.021011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.319 [2024-07-14 04:02:24.021037] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.319 qpair failed and we were unable to recover it. 00:30:05.319 [2024-07-14 04:02:24.021193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.319 [2024-07-14 04:02:24.021377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.319 [2024-07-14 04:02:24.021402] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.319 qpair failed and we were unable to recover it. 00:30:05.319 [2024-07-14 04:02:24.021583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.319 [2024-07-14 04:02:24.021735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.319 [2024-07-14 04:02:24.021759] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.319 qpair failed and we were unable to recover it. 00:30:05.319 [2024-07-14 04:02:24.021909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.319 [2024-07-14 04:02:24.022064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.319 [2024-07-14 04:02:24.022087] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.319 qpair failed and we were unable to recover it. 
00:30:05.319 [2024-07-14 04:02:24.022245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.319 [2024-07-14 04:02:24.022390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.319 [2024-07-14 04:02:24.022415] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.319 qpair failed and we were unable to recover it. 00:30:05.319 [2024-07-14 04:02:24.022586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.319 [2024-07-14 04:02:24.022742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.319 [2024-07-14 04:02:24.022766] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.319 qpair failed and we were unable to recover it. 00:30:05.319 [2024-07-14 04:02:24.022913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.319 [2024-07-14 04:02:24.023060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.319 [2024-07-14 04:02:24.023084] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.319 qpair failed and we were unable to recover it. 00:30:05.319 [2024-07-14 04:02:24.023235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.319 [2024-07-14 04:02:24.023385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.319 [2024-07-14 04:02:24.023409] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.319 qpair failed and we were unable to recover it. 00:30:05.319 [2024-07-14 04:02:24.023586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.319 [2024-07-14 04:02:24.023727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.319 [2024-07-14 04:02:24.023752] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.319 qpair failed and we were unable to recover it. 00:30:05.319 [2024-07-14 04:02:24.023951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.319 [2024-07-14 04:02:24.024093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.319 [2024-07-14 04:02:24.024118] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.319 qpair failed and we were unable to recover it. 00:30:05.319 [2024-07-14 04:02:24.024285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.319 [2024-07-14 04:02:24.024427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.319 [2024-07-14 04:02:24.024452] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.319 qpair failed and we were unable to recover it. 
00:30:05.319 [2024-07-14 04:02:24.024635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.319 [2024-07-14 04:02:24.024777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.319 [2024-07-14 04:02:24.024802] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.319 qpair failed and we were unable to recover it. 00:30:05.319 [2024-07-14 04:02:24.024993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.319 [2024-07-14 04:02:24.025147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.319 [2024-07-14 04:02:24.025171] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.320 qpair failed and we were unable to recover it. 00:30:05.320 [2024-07-14 04:02:24.025319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.320 [2024-07-14 04:02:24.025493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.320 [2024-07-14 04:02:24.025517] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.320 qpair failed and we were unable to recover it. 00:30:05.320 [2024-07-14 04:02:24.025660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.320 [2024-07-14 04:02:24.025815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.320 [2024-07-14 04:02:24.025839] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.320 qpair failed and we were unable to recover it. 00:30:05.320 [2024-07-14 04:02:24.026034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.320 [2024-07-14 04:02:24.026216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.320 [2024-07-14 04:02:24.026240] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.320 qpair failed and we were unable to recover it. 00:30:05.320 [2024-07-14 04:02:24.026416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.320 [2024-07-14 04:02:24.026573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.320 [2024-07-14 04:02:24.026600] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.320 qpair failed and we were unable to recover it. 00:30:05.320 [2024-07-14 04:02:24.026754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.320 [2024-07-14 04:02:24.026914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.320 [2024-07-14 04:02:24.026939] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.320 qpair failed and we were unable to recover it. 
00:30:05.320 [2024-07-14 04:02:24.027093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.320 [2024-07-14 04:02:24.027243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.320 [2024-07-14 04:02:24.027269] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.320 qpair failed and we were unable to recover it. 00:30:05.320 [2024-07-14 04:02:24.027450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.320 [2024-07-14 04:02:24.027607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.320 [2024-07-14 04:02:24.027631] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.320 qpair failed and we were unable to recover it. 00:30:05.320 [2024-07-14 04:02:24.027810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.320 [2024-07-14 04:02:24.027987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.320 [2024-07-14 04:02:24.028011] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.320 qpair failed and we were unable to recover it. 00:30:05.320 [2024-07-14 04:02:24.028223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.320 [2024-07-14 04:02:24.028395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.320 [2024-07-14 04:02:24.028420] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.320 qpair failed and we were unable to recover it. 00:30:05.320 [2024-07-14 04:02:24.028572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.320 [2024-07-14 04:02:24.028726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.320 [2024-07-14 04:02:24.028750] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.320 qpair failed and we were unable to recover it. 00:30:05.320 [2024-07-14 04:02:24.028932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.320 [2024-07-14 04:02:24.029112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.320 [2024-07-14 04:02:24.029137] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.320 qpair failed and we were unable to recover it. 00:30:05.320 [2024-07-14 04:02:24.029288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.320 [2024-07-14 04:02:24.029447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.320 [2024-07-14 04:02:24.029471] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.320 qpair failed and we were unable to recover it. 
00:30:05.320 [2024-07-14 04:02:24.029751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.320 [2024-07-14 04:02:24.029914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.320 [2024-07-14 04:02:24.029939] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.320 qpair failed and we were unable to recover it. 00:30:05.320 [2024-07-14 04:02:24.030104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.320 [2024-07-14 04:02:24.030262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.320 [2024-07-14 04:02:24.030286] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.320 qpair failed and we were unable to recover it. 00:30:05.320 [2024-07-14 04:02:24.030486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.320 [2024-07-14 04:02:24.030630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.320 [2024-07-14 04:02:24.030654] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.320 qpair failed and we were unable to recover it. 00:30:05.320 [2024-07-14 04:02:24.030805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.320 [2024-07-14 04:02:24.030956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.320 [2024-07-14 04:02:24.030982] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.320 qpair failed and we were unable to recover it. 00:30:05.320 [2024-07-14 04:02:24.031128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.320 [2024-07-14 04:02:24.031318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.320 [2024-07-14 04:02:24.031344] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.320 qpair failed and we were unable to recover it. 00:30:05.320 [2024-07-14 04:02:24.031510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.320 [2024-07-14 04:02:24.031673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.320 [2024-07-14 04:02:24.031698] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.320 qpair failed and we were unable to recover it. 00:30:05.320 [2024-07-14 04:02:24.031864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.320 [2024-07-14 04:02:24.032025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.320 [2024-07-14 04:02:24.032051] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.320 qpair failed and we were unable to recover it. 
00:30:05.320 [2024-07-14 04:02:24.032254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.320 [2024-07-14 04:02:24.032402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.320 [2024-07-14 04:02:24.032426] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.320 qpair failed and we were unable to recover it. 00:30:05.320 [2024-07-14 04:02:24.032570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.320 [2024-07-14 04:02:24.032746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.320 [2024-07-14 04:02:24.032770] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.320 qpair failed and we were unable to recover it. 00:30:05.320 [2024-07-14 04:02:24.032928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.320 [2024-07-14 04:02:24.033074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.320 [2024-07-14 04:02:24.033099] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.320 qpair failed and we were unable to recover it. 00:30:05.320 [2024-07-14 04:02:24.033294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.320 [2024-07-14 04:02:24.033478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.320 [2024-07-14 04:02:24.033504] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.320 qpair failed and we were unable to recover it. 00:30:05.320 [2024-07-14 04:02:24.033654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.320 [2024-07-14 04:02:24.033801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.320 [2024-07-14 04:02:24.033826] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.320 qpair failed and we were unable to recover it. 00:30:05.320 [2024-07-14 04:02:24.033989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.320 [2024-07-14 04:02:24.034193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.320 [2024-07-14 04:02:24.034218] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.320 qpair failed and we were unable to recover it. 00:30:05.320 [2024-07-14 04:02:24.034367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.320 [2024-07-14 04:02:24.034519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.320 [2024-07-14 04:02:24.034543] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.320 qpair failed and we were unable to recover it. 
00:30:05.320 [2024-07-14 04:02:24.034702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.320 [2024-07-14 04:02:24.034855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.320 [2024-07-14 04:02:24.034889] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.320 qpair failed and we were unable to recover it. 00:30:05.320 [2024-07-14 04:02:24.035095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.320 [2024-07-14 04:02:24.035259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.320 [2024-07-14 04:02:24.035284] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.320 qpair failed and we were unable to recover it. 00:30:05.320 [2024-07-14 04:02:24.035428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.320 [2024-07-14 04:02:24.035606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.320 [2024-07-14 04:02:24.035630] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.320 qpair failed and we were unable to recover it. 00:30:05.320 [2024-07-14 04:02:24.035782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.320 [2024-07-14 04:02:24.035956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.320 [2024-07-14 04:02:24.035999] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.320 qpair failed and we were unable to recover it. 00:30:05.320 [2024-07-14 04:02:24.036154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.320 [2024-07-14 04:02:24.036304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.320 [2024-07-14 04:02:24.036329] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.320 qpair failed and we were unable to recover it. 00:30:05.320 [2024-07-14 04:02:24.036476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.320 [2024-07-14 04:02:24.036620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.320 [2024-07-14 04:02:24.036646] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.320 qpair failed and we were unable to recover it. 00:30:05.320 [2024-07-14 04:02:24.036838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.320 [2024-07-14 04:02:24.037006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.320 [2024-07-14 04:02:24.037035] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.320 qpair failed and we were unable to recover it. 
00:30:05.320 [2024-07-14 04:02:24.037206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.320 [2024-07-14 04:02:24.037413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.320 [2024-07-14 04:02:24.037438] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.320 qpair failed and we were unable to recover it. 00:30:05.320 [2024-07-14 04:02:24.037592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.320 [2024-07-14 04:02:24.037775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.320 [2024-07-14 04:02:24.037799] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.320 qpair failed and we were unable to recover it. 00:30:05.320 [2024-07-14 04:02:24.037995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.320 [2024-07-14 04:02:24.038140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.320 [2024-07-14 04:02:24.038171] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.320 qpair failed and we were unable to recover it. 00:30:05.320 [2024-07-14 04:02:24.038339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.320 [2024-07-14 04:02:24.038519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.320 [2024-07-14 04:02:24.038546] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.320 qpair failed and we were unable to recover it. 00:30:05.320 [2024-07-14 04:02:24.038730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.320 [2024-07-14 04:02:24.038900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.320 [2024-07-14 04:02:24.038925] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.320 qpair failed and we were unable to recover it. 00:30:05.320 [2024-07-14 04:02:24.039069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.320 [2024-07-14 04:02:24.039224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.320 [2024-07-14 04:02:24.039249] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.320 qpair failed and we were unable to recover it. 00:30:05.320 [2024-07-14 04:02:24.039426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.320 [2024-07-14 04:02:24.039570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.320 [2024-07-14 04:02:24.039594] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.320 qpair failed and we were unable to recover it. 
00:30:05.320 [2024-07-14 04:02:24.039735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.320 [2024-07-14 04:02:24.039887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.320 [2024-07-14 04:02:24.039914] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.320 qpair failed and we were unable to recover it. 00:30:05.320 [2024-07-14 04:02:24.040073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.320 [2024-07-14 04:02:24.040217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.320 [2024-07-14 04:02:24.040242] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.320 qpair failed and we were unable to recover it. 00:30:05.320 [2024-07-14 04:02:24.040385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.320 [2024-07-14 04:02:24.040564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.320 [2024-07-14 04:02:24.040593] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.320 qpair failed and we were unable to recover it. 00:30:05.320 [2024-07-14 04:02:24.040774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.320 [2024-07-14 04:02:24.040948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.320 [2024-07-14 04:02:24.040973] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.320 qpair failed and we were unable to recover it. 00:30:05.320 [2024-07-14 04:02:24.041143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.320 [2024-07-14 04:02:24.041315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.320 [2024-07-14 04:02:24.041340] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.320 qpair failed and we were unable to recover it. 00:30:05.320 [2024-07-14 04:02:24.041489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.320 [2024-07-14 04:02:24.041640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.320 [2024-07-14 04:02:24.041665] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.320 qpair failed and we were unable to recover it. 00:30:05.320 [2024-07-14 04:02:24.041812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.320 [2024-07-14 04:02:24.041960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.320 [2024-07-14 04:02:24.041985] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.320 qpair failed and we were unable to recover it. 
00:30:05.320 [2024-07-14 04:02:24.042148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.320 [2024-07-14 04:02:24.042319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.320 [2024-07-14 04:02:24.042344] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.320 qpair failed and we were unable to recover it. 00:30:05.320 [2024-07-14 04:02:24.042516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.320 [2024-07-14 04:02:24.042686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.320 [2024-07-14 04:02:24.042710] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.320 qpair failed and we were unable to recover it. 00:30:05.320 [2024-07-14 04:02:24.042864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.320 [2024-07-14 04:02:24.043022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.320 [2024-07-14 04:02:24.043046] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.320 qpair failed and we were unable to recover it. 00:30:05.320 [2024-07-14 04:02:24.043213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.320 [2024-07-14 04:02:24.043377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.320 [2024-07-14 04:02:24.043401] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.320 qpair failed and we were unable to recover it. 00:30:05.320 [2024-07-14 04:02:24.043553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.320 [2024-07-14 04:02:24.043732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.320 [2024-07-14 04:02:24.043756] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.320 qpair failed and we were unable to recover it. 00:30:05.320 [2024-07-14 04:02:24.043991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.320 [2024-07-14 04:02:24.044172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.320 [2024-07-14 04:02:24.044200] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.320 qpair failed and we were unable to recover it. 00:30:05.320 [2024-07-14 04:02:24.044377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.320 [2024-07-14 04:02:24.044519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.320 [2024-07-14 04:02:24.044545] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.320 qpair failed and we were unable to recover it. 
00:30:05.320 [2024-07-14 04:02:24.044695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.320 [2024-07-14 04:02:24.044845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.320 [2024-07-14 04:02:24.044885] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.320 qpair failed and we were unable to recover it. 00:30:05.320 [2024-07-14 04:02:24.045062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.320 [2024-07-14 04:02:24.045211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.320 [2024-07-14 04:02:24.045237] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.320 qpair failed and we were unable to recover it. 00:30:05.320 [2024-07-14 04:02:24.045388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.320 [2024-07-14 04:02:24.045537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.320 [2024-07-14 04:02:24.045561] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.320 qpair failed and we were unable to recover it. 00:30:05.320 [2024-07-14 04:02:24.045718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.320 [2024-07-14 04:02:24.045916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.320 [2024-07-14 04:02:24.045942] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.321 qpair failed and we were unable to recover it. 00:30:05.321 [2024-07-14 04:02:24.046096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.321 [2024-07-14 04:02:24.046237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.321 [2024-07-14 04:02:24.046261] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.321 qpair failed and we were unable to recover it. 00:30:05.321 [2024-07-14 04:02:24.046436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.321 [2024-07-14 04:02:24.046579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.321 [2024-07-14 04:02:24.046603] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.321 qpair failed and we were unable to recover it. 00:30:05.321 [2024-07-14 04:02:24.046752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.321 [2024-07-14 04:02:24.046997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.321 [2024-07-14 04:02:24.047023] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.321 qpair failed and we were unable to recover it. 
00:30:05.321 [2024-07-14 04:02:24.047229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.321 [2024-07-14 04:02:24.047378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.321 [2024-07-14 04:02:24.047402] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.321 qpair failed and we were unable to recover it. 00:30:05.321 [2024-07-14 04:02:24.047550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.321 [2024-07-14 04:02:24.047753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.321 [2024-07-14 04:02:24.047778] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.321 qpair failed and we were unable to recover it. 00:30:05.321 [2024-07-14 04:02:24.047926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.321 [2024-07-14 04:02:24.048078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.321 [2024-07-14 04:02:24.048104] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.321 qpair failed and we were unable to recover it. 00:30:05.321 [2024-07-14 04:02:24.048255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.321 [2024-07-14 04:02:24.048417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.321 [2024-07-14 04:02:24.048440] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.321 qpair failed and we were unable to recover it. 00:30:05.321 [2024-07-14 04:02:24.048650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.321 [2024-07-14 04:02:24.048820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.321 [2024-07-14 04:02:24.048845] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.321 qpair failed and we were unable to recover it. 00:30:05.321 [2024-07-14 04:02:24.049005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.321 [2024-07-14 04:02:24.049184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.321 [2024-07-14 04:02:24.049208] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.321 qpair failed and we were unable to recover it. 00:30:05.321 [2024-07-14 04:02:24.049376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.321 [2024-07-14 04:02:24.049521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.321 [2024-07-14 04:02:24.049545] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.321 qpair failed and we were unable to recover it. 
00:30:05.321 [2024-07-14 04:02:24.049698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.321 [2024-07-14 04:02:24.049875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.321 [2024-07-14 04:02:24.049901] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.321 qpair failed and we were unable to recover it. 00:30:05.321 [2024-07-14 04:02:24.050048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.321 [2024-07-14 04:02:24.050219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.321 [2024-07-14 04:02:24.050244] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.321 qpair failed and we were unable to recover it. 00:30:05.321 [2024-07-14 04:02:24.050417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.321 [2024-07-14 04:02:24.050581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.321 [2024-07-14 04:02:24.050605] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.321 qpair failed and we were unable to recover it. 00:30:05.321 [2024-07-14 04:02:24.050753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.321 [2024-07-14 04:02:24.050906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.321 [2024-07-14 04:02:24.050932] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.321 qpair failed and we were unable to recover it. 00:30:05.321 [2024-07-14 04:02:24.051109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.321 [2024-07-14 04:02:24.051258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.321 [2024-07-14 04:02:24.051282] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.321 qpair failed and we were unable to recover it. 00:30:05.321 [2024-07-14 04:02:24.051436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.321 [2024-07-14 04:02:24.051613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.321 [2024-07-14 04:02:24.051637] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.321 qpair failed and we were unable to recover it. 00:30:05.321 [2024-07-14 04:02:24.051787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.321 [2024-07-14 04:02:24.051964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.321 [2024-07-14 04:02:24.051989] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.321 qpair failed and we were unable to recover it. 
00:30:05.321 [2024-07-14 04:02:24.052160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.321 [2024-07-14 04:02:24.052333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.321 [2024-07-14 04:02:24.052358] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.321 qpair failed and we were unable to recover it. 00:30:05.321 [2024-07-14 04:02:24.052510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.321 [2024-07-14 04:02:24.052684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.321 [2024-07-14 04:02:24.052708] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.321 qpair failed and we were unable to recover it. 00:30:05.321 [2024-07-14 04:02:24.052890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.321 [2024-07-14 04:02:24.053034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.321 [2024-07-14 04:02:24.053059] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.321 qpair failed and we were unable to recover it. 00:30:05.321 [2024-07-14 04:02:24.053214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.321 [2024-07-14 04:02:24.053391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.321 [2024-07-14 04:02:24.053415] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.321 qpair failed and we were unable to recover it. 00:30:05.321 [2024-07-14 04:02:24.053621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.321 [2024-07-14 04:02:24.053787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.321 [2024-07-14 04:02:24.053811] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.321 qpair failed and we were unable to recover it. 00:30:05.321 [2024-07-14 04:02:24.053976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.321 [2024-07-14 04:02:24.054153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.321 [2024-07-14 04:02:24.054177] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.321 qpair failed and we were unable to recover it. 00:30:05.321 [2024-07-14 04:02:24.054340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.321 [2024-07-14 04:02:24.054488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.321 [2024-07-14 04:02:24.054513] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.321 qpair failed and we were unable to recover it. 
00:30:05.321 [2024-07-14 04:02:24.054660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.321 [2024-07-14 04:02:24.054807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.321 [2024-07-14 04:02:24.054831] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.321 qpair failed and we were unable to recover it. 00:30:05.321 [2024-07-14 04:02:24.055004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.321 [2024-07-14 04:02:24.055185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.321 [2024-07-14 04:02:24.055208] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.321 qpair failed and we were unable to recover it. 00:30:05.321 [2024-07-14 04:02:24.055365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.321 [2024-07-14 04:02:24.055543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.321 [2024-07-14 04:02:24.055567] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.321 qpair failed and we were unable to recover it. 00:30:05.321 [2024-07-14 04:02:24.055706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.321 [2024-07-14 04:02:24.055905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.321 [2024-07-14 04:02:24.055931] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.321 qpair failed and we were unable to recover it. 00:30:05.321 [2024-07-14 04:02:24.056081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.321 [2024-07-14 04:02:24.056230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.321 [2024-07-14 04:02:24.056255] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.321 qpair failed and we were unable to recover it. 00:30:05.321 [2024-07-14 04:02:24.056434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.321 [2024-07-14 04:02:24.056612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.321 [2024-07-14 04:02:24.056637] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.321 qpair failed and we were unable to recover it. 00:30:05.321 [2024-07-14 04:02:24.056816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.321 [2024-07-14 04:02:24.056989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.321 [2024-07-14 04:02:24.057014] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.321 qpair failed and we were unable to recover it. 
00:30:05.321 [2024-07-14 04:02:24.057173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.321 [2024-07-14 04:02:24.057352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.321 [2024-07-14 04:02:24.057377] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.321 qpair failed and we were unable to recover it. 00:30:05.321 [2024-07-14 04:02:24.057549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.321 [2024-07-14 04:02:24.057718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.321 [2024-07-14 04:02:24.057741] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.321 qpair failed and we were unable to recover it. 00:30:05.321 [2024-07-14 04:02:24.057904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.321 [2024-07-14 04:02:24.058062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.321 [2024-07-14 04:02:24.058086] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.321 qpair failed and we were unable to recover it. 00:30:05.321 [2024-07-14 04:02:24.058266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.321 [2024-07-14 04:02:24.058412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.321 [2024-07-14 04:02:24.058436] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.321 qpair failed and we were unable to recover it. 00:30:05.321 [2024-07-14 04:02:24.058582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.321 [2024-07-14 04:02:24.058730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.321 [2024-07-14 04:02:24.058753] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.321 qpair failed and we were unable to recover it. 00:30:05.321 [2024-07-14 04:02:24.058906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.321 [2024-07-14 04:02:24.059051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.321 [2024-07-14 04:02:24.059075] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.321 qpair failed and we were unable to recover it. 00:30:05.321 [2024-07-14 04:02:24.059227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.321 [2024-07-14 04:02:24.059410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.321 [2024-07-14 04:02:24.059434] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.321 qpair failed and we were unable to recover it. 
00:30:05.321 [2024-07-14 04:02:24.059624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.321 [2024-07-14 04:02:24.059766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.321 [2024-07-14 04:02:24.059791] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.321 qpair failed and we were unable to recover it. 00:30:05.321 [2024-07-14 04:02:24.059953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.321 [2024-07-14 04:02:24.060113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.321 [2024-07-14 04:02:24.060137] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.321 qpair failed and we were unable to recover it. 00:30:05.321 [2024-07-14 04:02:24.060332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.321 [2024-07-14 04:02:24.060508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.321 [2024-07-14 04:02:24.060533] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.321 qpair failed and we were unable to recover it. 00:30:05.321 [2024-07-14 04:02:24.060691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.321 [2024-07-14 04:02:24.060842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.321 [2024-07-14 04:02:24.060873] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.321 qpair failed and we were unable to recover it. 00:30:05.321 [2024-07-14 04:02:24.061022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.321 [2024-07-14 04:02:24.061212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.321 [2024-07-14 04:02:24.061237] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.321 qpair failed and we were unable to recover it. 00:30:05.321 [2024-07-14 04:02:24.061390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.321 [2024-07-14 04:02:24.061561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.321 [2024-07-14 04:02:24.061586] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.321 qpair failed and we were unable to recover it. 00:30:05.321 [2024-07-14 04:02:24.061738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.321 [2024-07-14 04:02:24.061889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.321 [2024-07-14 04:02:24.061914] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.321 qpair failed and we were unable to recover it. 
00:30:05.321 [2024-07-14 04:02:24.062077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.321 [2024-07-14 04:02:24.062392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.321 [2024-07-14 04:02:24.062416] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.321 qpair failed and we were unable to recover it. 00:30:05.321 [2024-07-14 04:02:24.062566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.321 [2024-07-14 04:02:24.062703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.321 [2024-07-14 04:02:24.062727] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.321 qpair failed and we were unable to recover it. 00:30:05.321 [2024-07-14 04:02:24.062904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.321 [2024-07-14 04:02:24.063069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.321 [2024-07-14 04:02:24.063095] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.321 qpair failed and we were unable to recover it. 00:30:05.321 [2024-07-14 04:02:24.063275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.321 [2024-07-14 04:02:24.063430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.321 [2024-07-14 04:02:24.063455] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.321 qpair failed and we were unable to recover it. 00:30:05.321 [2024-07-14 04:02:24.063639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.321 [2024-07-14 04:02:24.063816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.321 [2024-07-14 04:02:24.063841] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.321 qpair failed and we were unable to recover it. 00:30:05.321 [2024-07-14 04:02:24.064004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.321 [2024-07-14 04:02:24.064150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.321 [2024-07-14 04:02:24.064174] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.321 qpair failed and we were unable to recover it. 00:30:05.321 [2024-07-14 04:02:24.064356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.321 [2024-07-14 04:02:24.064502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.321 [2024-07-14 04:02:24.064527] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.321 qpair failed and we were unable to recover it. 
00:30:05.321 [2024-07-14 04:02:24.064721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.321 [2024-07-14 04:02:24.064913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.321 [2024-07-14 04:02:24.064938] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.321 qpair failed and we were unable to recover it. 00:30:05.321 [2024-07-14 04:02:24.065113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.321 [2024-07-14 04:02:24.065286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.321 [2024-07-14 04:02:24.065311] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.321 qpair failed and we were unable to recover it. 00:30:05.321 [2024-07-14 04:02:24.065494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.321 [2024-07-14 04:02:24.065661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.321 [2024-07-14 04:02:24.065686] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.321 qpair failed and we were unable to recover it. 00:30:05.321 [2024-07-14 04:02:24.065849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.321 [2024-07-14 04:02:24.066037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.321 [2024-07-14 04:02:24.066062] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.322 qpair failed and we were unable to recover it. 00:30:05.322 [2024-07-14 04:02:24.066213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.322 [2024-07-14 04:02:24.066391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.322 [2024-07-14 04:02:24.066414] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.322 qpair failed and we were unable to recover it. 00:30:05.322 [2024-07-14 04:02:24.066566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.322 [2024-07-14 04:02:24.066730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.322 [2024-07-14 04:02:24.066754] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.322 qpair failed and we were unable to recover it. 00:30:05.322 [2024-07-14 04:02:24.066947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.322 [2024-07-14 04:02:24.067099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.322 [2024-07-14 04:02:24.067125] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.322 qpair failed and we were unable to recover it. 
00:30:05.322 [2024-07-14 04:02:24.067285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.322 [2024-07-14 04:02:24.067480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.322 [2024-07-14 04:02:24.067504] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.322 qpair failed and we were unable to recover it. 00:30:05.322 [2024-07-14 04:02:24.067711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.322 [2024-07-14 04:02:24.067861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.322 [2024-07-14 04:02:24.067915] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.322 qpair failed and we were unable to recover it. 00:30:05.322 [2024-07-14 04:02:24.068097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.322 [2024-07-14 04:02:24.068245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.322 [2024-07-14 04:02:24.068270] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.322 qpair failed and we were unable to recover it. 00:30:05.322 [2024-07-14 04:02:24.068417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.322 [2024-07-14 04:02:24.068597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.322 [2024-07-14 04:02:24.068621] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.322 qpair failed and we were unable to recover it. 00:30:05.322 [2024-07-14 04:02:24.068791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.322 [2024-07-14 04:02:24.068970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.322 [2024-07-14 04:02:24.068995] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.322 qpair failed and we were unable to recover it. 00:30:05.322 [2024-07-14 04:02:24.069138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.322 [2024-07-14 04:02:24.069286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.322 [2024-07-14 04:02:24.069309] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.322 qpair failed and we were unable to recover it. 00:30:05.322 [2024-07-14 04:02:24.069471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.322 [2024-07-14 04:02:24.069642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.322 [2024-07-14 04:02:24.069667] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.322 qpair failed and we were unable to recover it. 
00:30:05.322 [2024-07-14 04:02:24.069858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.322 [2024-07-14 04:02:24.070042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.322 [2024-07-14 04:02:24.070066] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.322 qpair failed and we were unable to recover it. 00:30:05.322 [2024-07-14 04:02:24.070232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.322 [2024-07-14 04:02:24.070391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.322 [2024-07-14 04:02:24.070415] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.322 qpair failed and we were unable to recover it. 00:30:05.322 [2024-07-14 04:02:24.070584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.322 [2024-07-14 04:02:24.070761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.322 [2024-07-14 04:02:24.070785] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.322 qpair failed and we were unable to recover it. 00:30:05.322 [2024-07-14 04:02:24.070938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.322 [2024-07-14 04:02:24.071098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.322 [2024-07-14 04:02:24.071122] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.322 qpair failed and we were unable to recover it. 00:30:05.322 [2024-07-14 04:02:24.071303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.322 [2024-07-14 04:02:24.071456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.322 [2024-07-14 04:02:24.071483] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.322 qpair failed and we were unable to recover it. 00:30:05.322 [2024-07-14 04:02:24.071652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.322 [2024-07-14 04:02:24.071813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.322 [2024-07-14 04:02:24.071837] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.322 qpair failed and we were unable to recover it. 00:30:05.322 [2024-07-14 04:02:24.072048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.322 [2024-07-14 04:02:24.072196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.322 [2024-07-14 04:02:24.072220] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.322 qpair failed and we were unable to recover it. 
00:30:05.322 [2024-07-14 04:02:24.072367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.322 [2024-07-14 04:02:24.072515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.322 [2024-07-14 04:02:24.072539] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.322 qpair failed and we were unable to recover it. 00:30:05.322 [2024-07-14 04:02:24.072698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.322 [2024-07-14 04:02:24.072861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.322 [2024-07-14 04:02:24.072910] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.322 qpair failed and we were unable to recover it. 00:30:05.322 [2024-07-14 04:02:24.073128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.322 [2024-07-14 04:02:24.073276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.322 [2024-07-14 04:02:24.073301] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.322 qpair failed and we were unable to recover it. 00:30:05.322 [2024-07-14 04:02:24.073478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.322 [2024-07-14 04:02:24.073620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.322 [2024-07-14 04:02:24.073644] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.322 qpair failed and we were unable to recover it. 00:30:05.322 [2024-07-14 04:02:24.073821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.322 [2024-07-14 04:02:24.074008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.322 [2024-07-14 04:02:24.074033] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.322 qpair failed and we were unable to recover it. 00:30:05.322 [2024-07-14 04:02:24.074189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.322 [2024-07-14 04:02:24.074348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.322 [2024-07-14 04:02:24.074372] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.322 qpair failed and we were unable to recover it. 00:30:05.322 [2024-07-14 04:02:24.074525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.322 [2024-07-14 04:02:24.074669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.322 [2024-07-14 04:02:24.074693] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.322 qpair failed and we were unable to recover it. 
00:30:05.322 [2024-07-14 04:02:24.074874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.322 [2024-07-14 04:02:24.075018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.322 [2024-07-14 04:02:24.075042] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.322 qpair failed and we were unable to recover it. 00:30:05.322 [2024-07-14 04:02:24.075217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.322 [2024-07-14 04:02:24.075387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.322 [2024-07-14 04:02:24.075411] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.322 qpair failed and we were unable to recover it. 00:30:05.322 [2024-07-14 04:02:24.075584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.322 [2024-07-14 04:02:24.075752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.322 [2024-07-14 04:02:24.075776] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.322 qpair failed and we were unable to recover it. 00:30:05.322 [2024-07-14 04:02:24.075930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.322 [2024-07-14 04:02:24.076114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.322 [2024-07-14 04:02:24.076138] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.322 qpair failed and we were unable to recover it. 00:30:05.322 [2024-07-14 04:02:24.076316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.322 [2024-07-14 04:02:24.076492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.322 [2024-07-14 04:02:24.076516] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.322 qpair failed and we were unable to recover it. 00:30:05.322 [2024-07-14 04:02:24.076684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.322 [2024-07-14 04:02:24.076830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.322 [2024-07-14 04:02:24.076855] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.322 qpair failed and we were unable to recover it. 00:30:05.322 [2024-07-14 04:02:24.077027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.322 [2024-07-14 04:02:24.077229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.322 [2024-07-14 04:02:24.077253] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.322 qpair failed and we were unable to recover it. 
00:30:05.322 [2024-07-14 04:02:24.077437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.322 [2024-07-14 04:02:24.077636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.322 [2024-07-14 04:02:24.077660] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.322 qpair failed and we were unable to recover it. 00:30:05.322 [2024-07-14 04:02:24.077853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.322 [2024-07-14 04:02:24.078030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.322 [2024-07-14 04:02:24.078055] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.322 qpair failed and we were unable to recover it. 00:30:05.322 [2024-07-14 04:02:24.078202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.322 [2024-07-14 04:02:24.078382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.322 [2024-07-14 04:02:24.078407] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.322 qpair failed and we were unable to recover it. 00:30:05.322 [2024-07-14 04:02:24.078588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.322 [2024-07-14 04:02:24.078735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.322 [2024-07-14 04:02:24.078761] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.322 qpair failed and we were unable to recover it. 00:30:05.322 [2024-07-14 04:02:24.078915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.322 [2024-07-14 04:02:24.079068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.322 [2024-07-14 04:02:24.079093] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.322 qpair failed and we were unable to recover it. 00:30:05.322 [2024-07-14 04:02:24.079239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.322 [2024-07-14 04:02:24.079431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.322 [2024-07-14 04:02:24.079455] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.322 qpair failed and we were unable to recover it. 00:30:05.322 [2024-07-14 04:02:24.079651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.322 [2024-07-14 04:02:24.079858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.322 [2024-07-14 04:02:24.079891] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.322 qpair failed and we were unable to recover it. 
00:30:05.322 [2024-07-14 04:02:24.080058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.322 [2024-07-14 04:02:24.080202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.322 [2024-07-14 04:02:24.080228] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.322 qpair failed and we were unable to recover it. 00:30:05.322 [2024-07-14 04:02:24.080384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.322 [2024-07-14 04:02:24.080570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.322 [2024-07-14 04:02:24.080596] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.322 qpair failed and we were unable to recover it. 00:30:05.322 [2024-07-14 04:02:24.080767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.322 [2024-07-14 04:02:24.080918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.322 [2024-07-14 04:02:24.080944] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.322 qpair failed and we were unable to recover it. 00:30:05.322 [2024-07-14 04:02:24.081144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.322 [2024-07-14 04:02:24.081293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.322 [2024-07-14 04:02:24.081317] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.322 qpair failed and we were unable to recover it. 00:30:05.322 [2024-07-14 04:02:24.081459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.322 [2024-07-14 04:02:24.081636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.322 [2024-07-14 04:02:24.081661] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.322 qpair failed and we were unable to recover it. 00:30:05.322 [2024-07-14 04:02:24.081819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.322 [2024-07-14 04:02:24.081971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.322 [2024-07-14 04:02:24.081997] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.322 qpair failed and we were unable to recover it. 00:30:05.322 [2024-07-14 04:02:24.082145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.322 [2024-07-14 04:02:24.082315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.322 [2024-07-14 04:02:24.082340] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.322 qpair failed and we were unable to recover it. 
00:30:05.322 [2024-07-14 04:02:24.082485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.322 [2024-07-14 04:02:24.082658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.322 [2024-07-14 04:02:24.082682] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.322 qpair failed and we were unable to recover it. 00:30:05.322 [2024-07-14 04:02:24.082823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.322 [2024-07-14 04:02:24.082995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.322 [2024-07-14 04:02:24.083020] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.322 qpair failed and we were unable to recover it. 00:30:05.322 [2024-07-14 04:02:24.083168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.322 [2024-07-14 04:02:24.083348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.322 [2024-07-14 04:02:24.083372] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.322 qpair failed and we were unable to recover it. 00:30:05.322 [2024-07-14 04:02:24.083550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.322 [2024-07-14 04:02:24.083690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.322 [2024-07-14 04:02:24.083714] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.322 qpair failed and we were unable to recover it. 00:30:05.322 [2024-07-14 04:02:24.083857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.322 [2024-07-14 04:02:24.084018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.322 [2024-07-14 04:02:24.084043] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.322 qpair failed and we were unable to recover it. 00:30:05.322 [2024-07-14 04:02:24.084222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.322 [2024-07-14 04:02:24.084372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.322 [2024-07-14 04:02:24.084396] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.322 qpair failed and we were unable to recover it. 00:30:05.322 [2024-07-14 04:02:24.084562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.322 [2024-07-14 04:02:24.084713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.322 [2024-07-14 04:02:24.084737] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.322 qpair failed and we were unable to recover it. 
00:30:05.322 [2024-07-14 04:02:24.084891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.322 [2024-07-14 04:02:24.085078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.322 [2024-07-14 04:02:24.085103] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.322 qpair failed and we were unable to recover it. 00:30:05.322 [2024-07-14 04:02:24.085268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.322 [2024-07-14 04:02:24.085447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.322 [2024-07-14 04:02:24.085472] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.322 qpair failed and we were unable to recover it. 00:30:05.322 [2024-07-14 04:02:24.085617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.322 [2024-07-14 04:02:24.085794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.322 [2024-07-14 04:02:24.085820] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.322 qpair failed and we were unable to recover it. 00:30:05.322 [2024-07-14 04:02:24.085978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.322 [2024-07-14 04:02:24.086124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.322 [2024-07-14 04:02:24.086148] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.322 qpair failed and we were unable to recover it. 00:30:05.322 [2024-07-14 04:02:24.086329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.322 [2024-07-14 04:02:24.086510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.322 [2024-07-14 04:02:24.086534] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.322 qpair failed and we were unable to recover it. 00:30:05.322 [2024-07-14 04:02:24.086688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.322 [2024-07-14 04:02:24.086835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.322 [2024-07-14 04:02:24.086859] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.322 qpair failed and we were unable to recover it. 00:30:05.322 [2024-07-14 04:02:24.087013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.322 [2024-07-14 04:02:24.087190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.322 [2024-07-14 04:02:24.087215] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.322 qpair failed and we were unable to recover it. 
00:30:05.322 [2024-07-14 04:02:24.087392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.322 [2024-07-14 04:02:24.087581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.322 [2024-07-14 04:02:24.087605] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.322 qpair failed and we were unable to recover it. 00:30:05.322 [2024-07-14 04:02:24.087770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.322 [2024-07-14 04:02:24.087917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.322 [2024-07-14 04:02:24.087942] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.322 qpair failed and we were unable to recover it. 00:30:05.322 [2024-07-14 04:02:24.088122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.322 [2024-07-14 04:02:24.088296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.322 [2024-07-14 04:02:24.088321] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.322 qpair failed and we were unable to recover it. 00:30:05.322 [2024-07-14 04:02:24.088502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.322 [2024-07-14 04:02:24.088667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.322 [2024-07-14 04:02:24.088692] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.322 qpair failed and we were unable to recover it. 00:30:05.322 [2024-07-14 04:02:24.088896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.322 [2024-07-14 04:02:24.089052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.322 [2024-07-14 04:02:24.089078] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.323 qpair failed and we were unable to recover it. 00:30:05.323 [2024-07-14 04:02:24.089226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.323 [2024-07-14 04:02:24.089395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.323 [2024-07-14 04:02:24.089419] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.323 qpair failed and we were unable to recover it. 00:30:05.323 [2024-07-14 04:02:24.089563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.323 [2024-07-14 04:02:24.089767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.323 [2024-07-14 04:02:24.089792] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.323 qpair failed and we were unable to recover it. 
00:30:05.323 [2024-07-14 04:02:24.089964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.323 [2024-07-14 04:02:24.090120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.323 [2024-07-14 04:02:24.090144] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.323 qpair failed and we were unable to recover it. 00:30:05.323 [2024-07-14 04:02:24.090316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.323 [2024-07-14 04:02:24.090467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.323 [2024-07-14 04:02:24.090491] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.323 qpair failed and we were unable to recover it. 00:30:05.323 [2024-07-14 04:02:24.090642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.323 [2024-07-14 04:02:24.090816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.323 [2024-07-14 04:02:24.090841] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.323 qpair failed and we were unable to recover it. 00:30:05.323 [2024-07-14 04:02:24.091000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.323 [2024-07-14 04:02:24.091160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.323 [2024-07-14 04:02:24.091189] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.323 qpair failed and we were unable to recover it. 00:30:05.323 [2024-07-14 04:02:24.091359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.323 [2024-07-14 04:02:24.091504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.323 [2024-07-14 04:02:24.091528] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.323 qpair failed and we were unable to recover it. 00:30:05.323 [2024-07-14 04:02:24.091673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.323 [2024-07-14 04:02:24.091809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.323 [2024-07-14 04:02:24.091832] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.323 qpair failed and we were unable to recover it. 00:30:05.323 [2024-07-14 04:02:24.092065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.323 [2024-07-14 04:02:24.092209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.323 [2024-07-14 04:02:24.092234] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.323 qpair failed and we were unable to recover it. 
00:30:05.323 [2024-07-14 04:02:24.092415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.323 [2024-07-14 04:02:24.092562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.323 [2024-07-14 04:02:24.092586] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.323 qpair failed and we were unable to recover it. 00:30:05.323 [2024-07-14 04:02:24.092735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.323 [2024-07-14 04:02:24.092937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.323 [2024-07-14 04:02:24.092962] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.323 qpair failed and we were unable to recover it. 00:30:05.323 [2024-07-14 04:02:24.093117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.323 [2024-07-14 04:02:24.093295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.323 [2024-07-14 04:02:24.093319] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.323 qpair failed and we were unable to recover it. 00:30:05.323 [2024-07-14 04:02:24.093467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.323 [2024-07-14 04:02:24.093660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.323 [2024-07-14 04:02:24.093685] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.323 qpair failed and we were unable to recover it. 00:30:05.323 [2024-07-14 04:02:24.093835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.323 [2024-07-14 04:02:24.093989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.323 [2024-07-14 04:02:24.094016] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.323 qpair failed and we were unable to recover it. 00:30:05.323 [2024-07-14 04:02:24.094158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.323 [2024-07-14 04:02:24.094299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.323 [2024-07-14 04:02:24.094323] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.323 qpair failed and we were unable to recover it. 00:30:05.323 [2024-07-14 04:02:24.094469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.323 [2024-07-14 04:02:24.094608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.323 [2024-07-14 04:02:24.094637] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.323 qpair failed and we were unable to recover it. 
00:30:05.323 [2024-07-14 04:02:24.094801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.323 [2024-07-14 04:02:24.094954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.323 [2024-07-14 04:02:24.094979] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.323 qpair failed and we were unable to recover it. 00:30:05.323 [2024-07-14 04:02:24.095127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.323 [2024-07-14 04:02:24.095274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.323 [2024-07-14 04:02:24.095299] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.323 qpair failed and we were unable to recover it. 00:30:05.323 [2024-07-14 04:02:24.095464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.323 [2024-07-14 04:02:24.095620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.323 [2024-07-14 04:02:24.095644] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.323 qpair failed and we were unable to recover it. 00:30:05.323 [2024-07-14 04:02:24.095809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.323 [2024-07-14 04:02:24.095970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.323 [2024-07-14 04:02:24.095994] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.323 qpair failed and we were unable to recover it. 00:30:05.323 [2024-07-14 04:02:24.096156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.323 [2024-07-14 04:02:24.096348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.323 [2024-07-14 04:02:24.096372] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.323 qpair failed and we were unable to recover it. 00:30:05.323 [2024-07-14 04:02:24.096524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.323 [2024-07-14 04:02:24.096675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.323 [2024-07-14 04:02:24.096699] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.323 qpair failed and we were unable to recover it. 00:30:05.323 [2024-07-14 04:02:24.096852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.323 [2024-07-14 04:02:24.097040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.323 [2024-07-14 04:02:24.097064] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.323 qpair failed and we were unable to recover it. 
00:30:05.323 [2024-07-14 04:02:24.097238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.323 [2024-07-14 04:02:24.097387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.323 [2024-07-14 04:02:24.097411] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.323 qpair failed and we were unable to recover it. 00:30:05.323 [2024-07-14 04:02:24.097560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.323 [2024-07-14 04:02:24.097732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.323 [2024-07-14 04:02:24.097756] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.323 qpair failed and we were unable to recover it. 00:30:05.323 [2024-07-14 04:02:24.097911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.323 [2024-07-14 04:02:24.098056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.323 [2024-07-14 04:02:24.098085] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.323 qpair failed and we were unable to recover it. 00:30:05.323 [2024-07-14 04:02:24.098260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.323 [2024-07-14 04:02:24.098423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.323 [2024-07-14 04:02:24.098447] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.323 qpair failed and we were unable to recover it. 00:30:05.323 [2024-07-14 04:02:24.098600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.323 [2024-07-14 04:02:24.098762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.323 [2024-07-14 04:02:24.098787] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.323 qpair failed and we were unable to recover it. 00:30:05.323 [2024-07-14 04:02:24.098959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.323 [2024-07-14 04:02:24.099133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.323 [2024-07-14 04:02:24.099157] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.323 qpair failed and we were unable to recover it. 00:30:05.323 [2024-07-14 04:02:24.099328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.323 [2024-07-14 04:02:24.099480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.323 [2024-07-14 04:02:24.099504] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.323 qpair failed and we were unable to recover it. 
00:30:05.323 [2024-07-14 04:02:24.099655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.323 [2024-07-14 04:02:24.099812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.323 [2024-07-14 04:02:24.099836] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.323 qpair failed and we were unable to recover it. 00:30:05.323 [2024-07-14 04:02:24.100047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.323 [2024-07-14 04:02:24.100208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.323 [2024-07-14 04:02:24.100232] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.323 qpair failed and we were unable to recover it. 00:30:05.323 [2024-07-14 04:02:24.100382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.323 [2024-07-14 04:02:24.100558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.323 [2024-07-14 04:02:24.100583] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.323 qpair failed and we were unable to recover it. 00:30:05.323 [2024-07-14 04:02:24.100725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.323 [2024-07-14 04:02:24.100896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.323 [2024-07-14 04:02:24.100922] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.323 qpair failed and we were unable to recover it. 00:30:05.323 [2024-07-14 04:02:24.101064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.323 [2024-07-14 04:02:24.101236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.323 [2024-07-14 04:02:24.101260] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.323 qpair failed and we were unable to recover it. 00:30:05.323 [2024-07-14 04:02:24.101414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.323 [2024-07-14 04:02:24.101570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.323 [2024-07-14 04:02:24.101599] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.323 qpair failed and we were unable to recover it. 00:30:05.323 [2024-07-14 04:02:24.101745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.323 [2024-07-14 04:02:24.101915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.323 [2024-07-14 04:02:24.101941] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.323 qpair failed and we were unable to recover it. 
00:30:05.323 [2024-07-14 04:02:24.102093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.323 [2024-07-14 04:02:24.102280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.323 [2024-07-14 04:02:24.102304] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.323 qpair failed and we were unable to recover it. 00:30:05.323 [2024-07-14 04:02:24.102478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.323 [2024-07-14 04:02:24.102658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.323 [2024-07-14 04:02:24.102683] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.323 qpair failed and we were unable to recover it. 00:30:05.323 [2024-07-14 04:02:24.102857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.323 [2024-07-14 04:02:24.103019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.323 [2024-07-14 04:02:24.103043] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.323 qpair failed and we were unable to recover it. 00:30:05.323 [2024-07-14 04:02:24.103191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.323 [2024-07-14 04:02:24.103335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.323 [2024-07-14 04:02:24.103359] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.323 qpair failed and we were unable to recover it. 00:30:05.323 [2024-07-14 04:02:24.103510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.323 [2024-07-14 04:02:24.103650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.323 [2024-07-14 04:02:24.103674] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.323 qpair failed and we were unable to recover it. 00:30:05.323 [2024-07-14 04:02:24.103826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.323 [2024-07-14 04:02:24.104019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.323 [2024-07-14 04:02:24.104044] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.323 qpair failed and we were unable to recover it. 00:30:05.323 [2024-07-14 04:02:24.104247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.323 [2024-07-14 04:02:24.104415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.323 [2024-07-14 04:02:24.104438] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.323 qpair failed and we were unable to recover it. 
00:30:05.323 [2024-07-14 04:02:24.104620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.323 [2024-07-14 04:02:24.104767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.323 [2024-07-14 04:02:24.104793] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.323 qpair failed and we were unable to recover it. 00:30:05.323 [2024-07-14 04:02:24.104943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.323 [2024-07-14 04:02:24.105086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.323 [2024-07-14 04:02:24.105111] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.323 qpair failed and we were unable to recover it. 00:30:05.323 [2024-07-14 04:02:24.105314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.323 [2024-07-14 04:02:24.105484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.323 [2024-07-14 04:02:24.105509] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.323 qpair failed and we were unable to recover it. 00:30:05.323 [2024-07-14 04:02:24.105674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.323 [2024-07-14 04:02:24.105825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.323 [2024-07-14 04:02:24.105850] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.323 qpair failed and we were unable to recover it. 00:30:05.323 [2024-07-14 04:02:24.106066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.323 [2024-07-14 04:02:24.106243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.323 [2024-07-14 04:02:24.106268] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.323 qpair failed and we were unable to recover it. 00:30:05.323 [2024-07-14 04:02:24.106420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.323 [2024-07-14 04:02:24.106596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.323 [2024-07-14 04:02:24.106621] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.323 qpair failed and we were unable to recover it. 00:30:05.323 [2024-07-14 04:02:24.106802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.323 [2024-07-14 04:02:24.106956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.323 [2024-07-14 04:02:24.106981] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.323 qpair failed and we were unable to recover it. 
00:30:05.323 [2024-07-14 04:02:24.107123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.323 [2024-07-14 04:02:24.107282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.323 [2024-07-14 04:02:24.107306] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.323 qpair failed and we were unable to recover it. 00:30:05.323 [2024-07-14 04:02:24.107496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.323 [2024-07-14 04:02:24.107646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.323 [2024-07-14 04:02:24.107670] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.323 qpair failed and we were unable to recover it. 00:30:05.323 [2024-07-14 04:02:24.107814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.323 [2024-07-14 04:02:24.107986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.323 [2024-07-14 04:02:24.108010] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.323 qpair failed and we were unable to recover it. 00:30:05.323 [2024-07-14 04:02:24.108182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.323 [2024-07-14 04:02:24.108376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.323 [2024-07-14 04:02:24.108401] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.323 qpair failed and we were unable to recover it. 00:30:05.323 [2024-07-14 04:02:24.108553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.323 [2024-07-14 04:02:24.108702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.323 [2024-07-14 04:02:24.108727] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.323 qpair failed and we were unable to recover it. 00:30:05.323 [2024-07-14 04:02:24.108880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.324 [2024-07-14 04:02:24.109045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.324 [2024-07-14 04:02:24.109070] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.324 qpair failed and we were unable to recover it. 00:30:05.324 [2024-07-14 04:02:24.109227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.324 [2024-07-14 04:02:24.109431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.324 [2024-07-14 04:02:24.109456] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.324 qpair failed and we were unable to recover it. 
00:30:05.324 [2024-07-14 04:02:24.109598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.324 [2024-07-14 04:02:24.109750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.324 [2024-07-14 04:02:24.109775] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.324 qpair failed and we were unable to recover it. 00:30:05.324 [2024-07-14 04:02:24.109925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.324 [2024-07-14 04:02:24.110064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.324 [2024-07-14 04:02:24.110088] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.324 qpair failed and we were unable to recover it. 00:30:05.324 [2024-07-14 04:02:24.110230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.324 [2024-07-14 04:02:24.110374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.324 [2024-07-14 04:02:24.110399] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.324 qpair failed and we were unable to recover it. 00:30:05.324 [2024-07-14 04:02:24.110577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.324 [2024-07-14 04:02:24.110759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.324 [2024-07-14 04:02:24.110785] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.324 qpair failed and we were unable to recover it. 00:30:05.324 [2024-07-14 04:02:24.110936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.324 [2024-07-14 04:02:24.111108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.324 [2024-07-14 04:02:24.111132] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.324 qpair failed and we were unable to recover it. 00:30:05.324 [2024-07-14 04:02:24.111293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.324 [2024-07-14 04:02:24.111450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.324 [2024-07-14 04:02:24.111475] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.324 qpair failed and we were unable to recover it. 00:30:05.324 [2024-07-14 04:02:24.111627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.324 [2024-07-14 04:02:24.111829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.324 [2024-07-14 04:02:24.111853] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.324 qpair failed and we were unable to recover it. 
00:30:05.324 [2024-07-14 04:02:24.112054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.324 [2024-07-14 04:02:24.112200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.324 [2024-07-14 04:02:24.112224] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.324 qpair failed and we were unable to recover it. 00:30:05.324 [2024-07-14 04:02:24.112377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.324 [2024-07-14 04:02:24.112580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.324 [2024-07-14 04:02:24.112605] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.324 qpair failed and we were unable to recover it. 00:30:05.324 [2024-07-14 04:02:24.112784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.324 [2024-07-14 04:02:24.112934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.324 [2024-07-14 04:02:24.112960] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.324 qpair failed and we were unable to recover it. 00:30:05.324 [2024-07-14 04:02:24.113103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.324 [2024-07-14 04:02:24.113245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.324 [2024-07-14 04:02:24.113270] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.324 qpair failed and we were unable to recover it. 00:30:05.324 [2024-07-14 04:02:24.113413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.324 [2024-07-14 04:02:24.113602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.324 [2024-07-14 04:02:24.113627] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.324 qpair failed and we were unable to recover it. 00:30:05.324 [2024-07-14 04:02:24.113806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.324 [2024-07-14 04:02:24.113973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.324 [2024-07-14 04:02:24.113999] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.324 qpair failed and we were unable to recover it. 00:30:05.324 [2024-07-14 04:02:24.114150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.324 [2024-07-14 04:02:24.114298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.324 [2024-07-14 04:02:24.114323] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.324 qpair failed and we were unable to recover it. 
00:30:05.324 [2024-07-14 04:02:24.114469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.324 [2024-07-14 04:02:24.114627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.324 [2024-07-14 04:02:24.114651] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.324 qpair failed and we were unable to recover it. 00:30:05.324 [2024-07-14 04:02:24.114802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.324 [2024-07-14 04:02:24.114957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.324 [2024-07-14 04:02:24.114982] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.324 qpair failed and we were unable to recover it. 00:30:05.324 [2024-07-14 04:02:24.115128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.324 [2024-07-14 04:02:24.115278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.324 [2024-07-14 04:02:24.115305] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.324 qpair failed and we were unable to recover it. 00:30:05.324 [2024-07-14 04:02:24.115484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.324 [2024-07-14 04:02:24.115647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.324 [2024-07-14 04:02:24.115671] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.324 qpair failed and we were unable to recover it. 00:30:05.324 [2024-07-14 04:02:24.115852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.324 [2024-07-14 04:02:24.116004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.324 [2024-07-14 04:02:24.116028] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.324 qpair failed and we were unable to recover it. 00:30:05.324 [2024-07-14 04:02:24.116199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.324 [2024-07-14 04:02:24.116353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.324 [2024-07-14 04:02:24.116378] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.324 qpair failed and we were unable to recover it. 00:30:05.324 [2024-07-14 04:02:24.116528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.324 [2024-07-14 04:02:24.116688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.324 [2024-07-14 04:02:24.116711] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.324 qpair failed and we were unable to recover it. 
00:30:05.324 [2024-07-14 04:02:24.116871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.324 [2024-07-14 04:02:24.117018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.324 [2024-07-14 04:02:24.117043] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.324 qpair failed and we were unable to recover it. 00:30:05.324 [2024-07-14 04:02:24.117216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.324 [2024-07-14 04:02:24.117365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.324 [2024-07-14 04:02:24.117390] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.324 qpair failed and we were unable to recover it. 00:30:05.324 [2024-07-14 04:02:24.117564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.324 [2024-07-14 04:02:24.117764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.324 [2024-07-14 04:02:24.117789] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.324 qpair failed and we were unable to recover it. 00:30:05.324 [2024-07-14 04:02:24.117963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.324 [2024-07-14 04:02:24.118138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.324 [2024-07-14 04:02:24.118163] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.324 qpair failed and we were unable to recover it. 00:30:05.324 [2024-07-14 04:02:24.118318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.324 [2024-07-14 04:02:24.118467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.324 [2024-07-14 04:02:24.118492] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.324 qpair failed and we were unable to recover it. 00:30:05.324 [2024-07-14 04:02:24.118635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.324 [2024-07-14 04:02:24.118810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.324 [2024-07-14 04:02:24.118834] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.324 qpair failed and we were unable to recover it. 00:30:05.324 [2024-07-14 04:02:24.118997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.324 [2024-07-14 04:02:24.119146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.324 [2024-07-14 04:02:24.119170] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.324 qpair failed and we were unable to recover it. 
00:30:05.324 [2024-07-14 04:02:24.119316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.324 [2024-07-14 04:02:24.119470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.324 [2024-07-14 04:02:24.119494] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.324 qpair failed and we were unable to recover it. 00:30:05.324 [2024-07-14 04:02:24.119664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.324 [2024-07-14 04:02:24.119825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.324 [2024-07-14 04:02:24.119849] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.324 qpair failed and we were unable to recover it. 00:30:05.324 [2024-07-14 04:02:24.120012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.324 [2024-07-14 04:02:24.120183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.324 [2024-07-14 04:02:24.120208] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.324 qpair failed and we were unable to recover it. 00:30:05.324 [2024-07-14 04:02:24.120382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.324 [2024-07-14 04:02:24.120531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.324 [2024-07-14 04:02:24.120557] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.324 qpair failed and we were unable to recover it. 00:30:05.324 [2024-07-14 04:02:24.120751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.324 [2024-07-14 04:02:24.120896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.324 [2024-07-14 04:02:24.120922] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.324 qpair failed and we were unable to recover it. 00:30:05.324 [2024-07-14 04:02:24.121099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.324 [2024-07-14 04:02:24.121242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.324 [2024-07-14 04:02:24.121269] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.324 qpair failed and we were unable to recover it. 00:30:05.324 [2024-07-14 04:02:24.121443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.324 [2024-07-14 04:02:24.121623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.324 [2024-07-14 04:02:24.121647] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.324 qpair failed and we were unable to recover it. 
00:30:05.324 [2024-07-14 04:02:24.121825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.324 [2024-07-14 04:02:24.122005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.324 [2024-07-14 04:02:24.122031] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.324 qpair failed and we were unable to recover it. 00:30:05.324 [2024-07-14 04:02:24.122188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.324 [2024-07-14 04:02:24.122362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.324 [2024-07-14 04:02:24.122387] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.324 qpair failed and we were unable to recover it. 00:30:05.324 [2024-07-14 04:02:24.122538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.324 [2024-07-14 04:02:24.122683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.324 [2024-07-14 04:02:24.122706] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.324 qpair failed and we were unable to recover it. 00:30:05.324 [2024-07-14 04:02:24.122890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.324 [2024-07-14 04:02:24.123048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.324 [2024-07-14 04:02:24.123072] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.324 qpair failed and we were unable to recover it. 00:30:05.324 [2024-07-14 04:02:24.123218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.324 [2024-07-14 04:02:24.123372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.324 [2024-07-14 04:02:24.123396] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.324 qpair failed and we were unable to recover it. 00:30:05.324 [2024-07-14 04:02:24.123543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.324 [2024-07-14 04:02:24.123718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.324 [2024-07-14 04:02:24.123742] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.324 qpair failed and we were unable to recover it. 00:30:05.324 [2024-07-14 04:02:24.123896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.324 [2024-07-14 04:02:24.124038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.324 [2024-07-14 04:02:24.124062] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.324 qpair failed and we were unable to recover it. 
00:30:05.324 [2024-07-14 04:02:24.124236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.324 [2024-07-14 04:02:24.124386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.324 [2024-07-14 04:02:24.124411] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.324 qpair failed and we were unable to recover it. 00:30:05.324 [2024-07-14 04:02:24.124561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.324 [2024-07-14 04:02:24.124735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.324 [2024-07-14 04:02:24.124760] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.324 qpair failed and we were unable to recover it. 00:30:05.324 [2024-07-14 04:02:24.124964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.324 [2024-07-14 04:02:24.125106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.324 [2024-07-14 04:02:24.125131] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.324 qpair failed and we were unable to recover it. 00:30:05.324 [2024-07-14 04:02:24.125300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.324 [2024-07-14 04:02:24.125492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.324 [2024-07-14 04:02:24.125517] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.324 qpair failed and we were unable to recover it. 00:30:05.324 [2024-07-14 04:02:24.125674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.324 [2024-07-14 04:02:24.125851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.324 [2024-07-14 04:02:24.125883] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.324 qpair failed and we were unable to recover it. 00:30:05.324 [2024-07-14 04:02:24.126055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.324 [2024-07-14 04:02:24.126231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.324 [2024-07-14 04:02:24.126256] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.324 qpair failed and we were unable to recover it. 00:30:05.324 [2024-07-14 04:02:24.126436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.324 [2024-07-14 04:02:24.126584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.324 [2024-07-14 04:02:24.126607] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.324 qpair failed and we were unable to recover it. 
00:30:05.324 [2024-07-14 04:02:24.126769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.324 [2024-07-14 04:02:24.126924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.324 [2024-07-14 04:02:24.126949] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.324 qpair failed and we were unable to recover it. 00:30:05.324 [2024-07-14 04:02:24.127122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.324 [2024-07-14 04:02:24.127264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.324 [2024-07-14 04:02:24.127288] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.324 qpair failed and we were unable to recover it. 00:30:05.324 [2024-07-14 04:02:24.127465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.324 [2024-07-14 04:02:24.127611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.324 [2024-07-14 04:02:24.127635] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.324 qpair failed and we were unable to recover it. 00:30:05.324 [2024-07-14 04:02:24.127784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.324 [2024-07-14 04:02:24.127936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.324 [2024-07-14 04:02:24.127961] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.324 qpair failed and we were unable to recover it. 00:30:05.324 [2024-07-14 04:02:24.128114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.324 [2024-07-14 04:02:24.128291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.324 [2024-07-14 04:02:24.128315] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.325 qpair failed and we were unable to recover it. 00:30:05.325 [2024-07-14 04:02:24.128484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.325 [2024-07-14 04:02:24.128632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.325 [2024-07-14 04:02:24.128656] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.325 qpair failed and we were unable to recover it. 00:30:05.325 [2024-07-14 04:02:24.128862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.325 [2024-07-14 04:02:24.129015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.325 [2024-07-14 04:02:24.129039] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.325 qpair failed and we were unable to recover it. 
00:30:05.325 [2024-07-14 04:02:24.129214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.325 [2024-07-14 04:02:24.129363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.325 [2024-07-14 04:02:24.129387] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.325 qpair failed and we were unable to recover it. 00:30:05.325 [2024-07-14 04:02:24.129560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.325 [2024-07-14 04:02:24.129738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.325 [2024-07-14 04:02:24.129762] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.325 qpair failed and we were unable to recover it. 00:30:05.325 [2024-07-14 04:02:24.129915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.325 [2024-07-14 04:02:24.130093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.325 [2024-07-14 04:02:24.130117] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.325 qpair failed and we were unable to recover it. 00:30:05.325 [2024-07-14 04:02:24.130290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.325 [2024-07-14 04:02:24.130458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.325 [2024-07-14 04:02:24.130482] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.325 qpair failed and we were unable to recover it. 00:30:05.325 [2024-07-14 04:02:24.130652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.325 [2024-07-14 04:02:24.130795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.325 [2024-07-14 04:02:24.130819] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.325 qpair failed and we were unable to recover it. 00:30:05.325 [2024-07-14 04:02:24.130991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.325 [2024-07-14 04:02:24.131136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.325 [2024-07-14 04:02:24.131161] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.325 qpair failed and we were unable to recover it. 00:30:05.325 [2024-07-14 04:02:24.131332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.325 [2024-07-14 04:02:24.131473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.325 [2024-07-14 04:02:24.131498] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.325 qpair failed and we were unable to recover it. 
00:30:05.325 [2024-07-14 04:02:24.131640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.325 [2024-07-14 04:02:24.131813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.325 [2024-07-14 04:02:24.131836] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.325 qpair failed and we were unable to recover it. 00:30:05.325 [2024-07-14 04:02:24.131994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.325 [2024-07-14 04:02:24.132156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.325 [2024-07-14 04:02:24.132181] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.325 qpair failed and we were unable to recover it. 00:30:05.325 [2024-07-14 04:02:24.132360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.325 [2024-07-14 04:02:24.132520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.325 [2024-07-14 04:02:24.132544] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.325 qpair failed and we were unable to recover it. 00:30:05.325 [2024-07-14 04:02:24.132721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.325 [2024-07-14 04:02:24.132908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.325 [2024-07-14 04:02:24.132933] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.325 qpair failed and we were unable to recover it. 00:30:05.325 [2024-07-14 04:02:24.133098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.325 [2024-07-14 04:02:24.133242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.325 [2024-07-14 04:02:24.133267] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.325 qpair failed and we were unable to recover it. 00:30:05.325 [2024-07-14 04:02:24.133437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.325 [2024-07-14 04:02:24.133590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.325 [2024-07-14 04:02:24.133614] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.325 qpair failed and we were unable to recover it. 00:30:05.325 [2024-07-14 04:02:24.133765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.325 [2024-07-14 04:02:24.133940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.325 [2024-07-14 04:02:24.133965] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.325 qpair failed and we were unable to recover it. 
00:30:05.325 [2024-07-14 04:02:24.134121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.325 [2024-07-14 04:02:24.134311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.325 [2024-07-14 04:02:24.134336] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.325 qpair failed and we were unable to recover it. 00:30:05.325 [2024-07-14 04:02:24.134486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.325 [2024-07-14 04:02:24.134689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.325 [2024-07-14 04:02:24.134714] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.325 qpair failed and we were unable to recover it. 00:30:05.325 [2024-07-14 04:02:24.134859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.325 [2024-07-14 04:02:24.135043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.325 [2024-07-14 04:02:24.135068] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.325 qpair failed and we were unable to recover it. 00:30:05.325 [2024-07-14 04:02:24.135223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.325 [2024-07-14 04:02:24.135373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.325 [2024-07-14 04:02:24.135398] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.325 qpair failed and we were unable to recover it. 00:30:05.325 [2024-07-14 04:02:24.135562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.325 [2024-07-14 04:02:24.135705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.325 [2024-07-14 04:02:24.135731] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.325 qpair failed and we were unable to recover it. 00:30:05.325 [2024-07-14 04:02:24.135889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.325 [2024-07-14 04:02:24.136065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.325 [2024-07-14 04:02:24.136090] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.325 qpair failed and we were unable to recover it. 00:30:05.325 [2024-07-14 04:02:24.136241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.325 [2024-07-14 04:02:24.136417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.325 [2024-07-14 04:02:24.136442] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.325 qpair failed and we were unable to recover it. 
00:30:05.325 [2024-07-14 04:02:24.136589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.325 [2024-07-14 04:02:24.136738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.325 [2024-07-14 04:02:24.136762] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.325 qpair failed and we were unable to recover it. 00:30:05.325 [2024-07-14 04:02:24.136944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.325 [2024-07-14 04:02:24.137095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.325 [2024-07-14 04:02:24.137120] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.325 qpair failed and we were unable to recover it. 00:30:05.325 [2024-07-14 04:02:24.137290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.325 [2024-07-14 04:02:24.137456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.325 [2024-07-14 04:02:24.137482] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.325 qpair failed and we were unable to recover it. 00:30:05.325 [2024-07-14 04:02:24.137646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.325 [2024-07-14 04:02:24.137787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.325 [2024-07-14 04:02:24.137812] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.325 qpair failed and we were unable to recover it. 00:30:05.325 [2024-07-14 04:02:24.137971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.325 [2024-07-14 04:02:24.138147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.325 [2024-07-14 04:02:24.138172] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.325 qpair failed and we were unable to recover it. 00:30:05.325 [2024-07-14 04:02:24.138316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.325 [2024-07-14 04:02:24.138485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.325 [2024-07-14 04:02:24.138509] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.325 qpair failed and we were unable to recover it. 00:30:05.325 [2024-07-14 04:02:24.138656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.325 [2024-07-14 04:02:24.138802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.325 [2024-07-14 04:02:24.138827] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.325 qpair failed and we were unable to recover it. 
00:30:05.325 [2024-07-14 04:02:24.139007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.325 [2024-07-14 04:02:24.139204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.325 [2024-07-14 04:02:24.139228] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.325 qpair failed and we were unable to recover it. 00:30:05.325 [2024-07-14 04:02:24.139381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.325 [2024-07-14 04:02:24.139529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.325 [2024-07-14 04:02:24.139554] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.325 qpair failed and we were unable to recover it. 00:30:05.325 [2024-07-14 04:02:24.139697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.325 [2024-07-14 04:02:24.139860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.325 [2024-07-14 04:02:24.139891] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.325 qpair failed and we were unable to recover it. 00:30:05.325 [2024-07-14 04:02:24.140091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.325 [2024-07-14 04:02:24.140263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.325 [2024-07-14 04:02:24.140288] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.325 qpair failed and we were unable to recover it. 00:30:05.325 [2024-07-14 04:02:24.140463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.325 [2024-07-14 04:02:24.140618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.325 [2024-07-14 04:02:24.140643] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.325 qpair failed and we were unable to recover it. 00:30:05.325 [2024-07-14 04:02:24.140795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.325 [2024-07-14 04:02:24.140946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.325 [2024-07-14 04:02:24.140971] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.325 qpair failed and we were unable to recover it. 00:30:05.325 [2024-07-14 04:02:24.141114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.325 [2024-07-14 04:02:24.141273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.325 [2024-07-14 04:02:24.141298] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.325 qpair failed and we were unable to recover it. 
00:30:05.325 [2024-07-14 04:02:24.141453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.325 [2024-07-14 04:02:24.141643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.325 [2024-07-14 04:02:24.141667] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.325 qpair failed and we were unable to recover it. 00:30:05.325 [2024-07-14 04:02:24.141818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.325 [2024-07-14 04:02:24.141972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.325 [2024-07-14 04:02:24.141998] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.325 qpair failed and we were unable to recover it. 00:30:05.325 [2024-07-14 04:02:24.142157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.325 [2024-07-14 04:02:24.142330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.325 [2024-07-14 04:02:24.142355] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.325 qpair failed and we were unable to recover it. 00:30:05.325 [2024-07-14 04:02:24.142535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.325 [2024-07-14 04:02:24.142678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.325 [2024-07-14 04:02:24.142702] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.325 qpair failed and we were unable to recover it. 00:30:05.325 [2024-07-14 04:02:24.142876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.325 [2024-07-14 04:02:24.143042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.325 [2024-07-14 04:02:24.143067] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.325 qpair failed and we were unable to recover it. 00:30:05.325 [2024-07-14 04:02:24.143217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.325 [2024-07-14 04:02:24.143368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.325 [2024-07-14 04:02:24.143393] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.325 qpair failed and we were unable to recover it. 00:30:05.325 [2024-07-14 04:02:24.143556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.325 [2024-07-14 04:02:24.143702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.325 [2024-07-14 04:02:24.143726] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.325 qpair failed and we were unable to recover it. 
00:30:05.325 [2024-07-14 04:02:24.143876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.325 [2024-07-14 04:02:24.144054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.325 [2024-07-14 04:02:24.144078] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.325 qpair failed and we were unable to recover it. 00:30:05.325 [2024-07-14 04:02:24.144243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.325 [2024-07-14 04:02:24.144384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.325 [2024-07-14 04:02:24.144409] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.325 qpair failed and we were unable to recover it. 00:30:05.325 [2024-07-14 04:02:24.144574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.325 [2024-07-14 04:02:24.144718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.325 [2024-07-14 04:02:24.144744] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.325 qpair failed and we were unable to recover it. 00:30:05.325 [2024-07-14 04:02:24.144890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.325 [2024-07-14 04:02:24.145054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.325 [2024-07-14 04:02:24.145078] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.325 qpair failed and we were unable to recover it. 00:30:05.325 [2024-07-14 04:02:24.145232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.325 [2024-07-14 04:02:24.145376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.325 [2024-07-14 04:02:24.145401] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.325 qpair failed and we were unable to recover it. 00:30:05.325 [2024-07-14 04:02:24.145544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.325 [2024-07-14 04:02:24.145688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.325 [2024-07-14 04:02:24.145712] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.325 qpair failed and we were unable to recover it. 00:30:05.325 [2024-07-14 04:02:24.145853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.325 [2024-07-14 04:02:24.146042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.325 [2024-07-14 04:02:24.146067] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.325 qpair failed and we were unable to recover it. 
00:30:05.325 [2024-07-14 04:02:24.146270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.325 [2024-07-14 04:02:24.146420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.325 [2024-07-14 04:02:24.146445] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.326 qpair failed and we were unable to recover it. 00:30:05.326 [2024-07-14 04:02:24.146582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.326 [2024-07-14 04:02:24.146755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.326 [2024-07-14 04:02:24.146778] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.326 qpair failed and we were unable to recover it. 00:30:05.326 [2024-07-14 04:02:24.146929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.326 [2024-07-14 04:02:24.147085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.326 [2024-07-14 04:02:24.147109] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.326 qpair failed and we were unable to recover it. 00:30:05.326 [2024-07-14 04:02:24.147286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.326 [2024-07-14 04:02:24.147428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.326 [2024-07-14 04:02:24.147457] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.326 qpair failed and we were unable to recover it. 00:30:05.326 [2024-07-14 04:02:24.147613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.326 [2024-07-14 04:02:24.147785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.326 [2024-07-14 04:02:24.147809] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.326 qpair failed and we were unable to recover it. 00:30:05.326 [2024-07-14 04:02:24.147964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.326 [2024-07-14 04:02:24.148117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.326 [2024-07-14 04:02:24.148141] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.326 qpair failed and we were unable to recover it. 00:30:05.326 [2024-07-14 04:02:24.148285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.326 [2024-07-14 04:02:24.148436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.326 [2024-07-14 04:02:24.148463] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.326 qpair failed and we were unable to recover it. 
00:30:05.326 [2024-07-14 04:02:24.148613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.326 [2024-07-14 04:02:24.148784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.326 [2024-07-14 04:02:24.148809] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.326 qpair failed and we were unable to recover it. 00:30:05.326 [2024-07-14 04:02:24.148956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.326 [2024-07-14 04:02:24.149140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.326 [2024-07-14 04:02:24.149165] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.326 qpair failed and we were unable to recover it. 00:30:05.326 [2024-07-14 04:02:24.149371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.326 [2024-07-14 04:02:24.149579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.326 [2024-07-14 04:02:24.149603] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.326 qpair failed and we were unable to recover it. 00:30:05.326 [2024-07-14 04:02:24.149769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.326 [2024-07-14 04:02:24.149939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.326 [2024-07-14 04:02:24.149963] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.326 qpair failed and we were unable to recover it. 00:30:05.326 [2024-07-14 04:02:24.150110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.326 [2024-07-14 04:02:24.150259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.326 [2024-07-14 04:02:24.150283] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.326 qpair failed and we were unable to recover it. 00:30:05.326 [2024-07-14 04:02:24.150457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.326 [2024-07-14 04:02:24.150602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.326 [2024-07-14 04:02:24.150626] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.326 qpair failed and we were unable to recover it. 00:30:05.326 [2024-07-14 04:02:24.150776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.326 [2024-07-14 04:02:24.150929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.326 [2024-07-14 04:02:24.150959] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.326 qpair failed and we were unable to recover it. 
00:30:05.326 [2024-07-14 04:02:24.151119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.326 [2024-07-14 04:02:24.151293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.326 [2024-07-14 04:02:24.151318] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.326 qpair failed and we were unable to recover it. 00:30:05.326 [2024-07-14 04:02:24.151488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.326 [2024-07-14 04:02:24.151653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.326 [2024-07-14 04:02:24.151677] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.326 qpair failed and we were unable to recover it. 00:30:05.326 [2024-07-14 04:02:24.151833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.326 [2024-07-14 04:02:24.151991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.326 [2024-07-14 04:02:24.152015] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.326 qpair failed and we were unable to recover it. 00:30:05.326 [2024-07-14 04:02:24.152209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.326 [2024-07-14 04:02:24.152364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.326 [2024-07-14 04:02:24.152388] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.326 qpair failed and we were unable to recover it. 00:30:05.326 [2024-07-14 04:02:24.152543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.326 [2024-07-14 04:02:24.152696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.326 [2024-07-14 04:02:24.152720] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.326 qpair failed and we were unable to recover it. 00:30:05.326 [2024-07-14 04:02:24.152873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.326 [2024-07-14 04:02:24.153027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.326 [2024-07-14 04:02:24.153051] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.326 qpair failed and we were unable to recover it. 00:30:05.326 [2024-07-14 04:02:24.153228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.326 [2024-07-14 04:02:24.153378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.326 [2024-07-14 04:02:24.153402] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.326 qpair failed and we were unable to recover it. 
00:30:05.326 [2024-07-14 04:02:24.153545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.326 [2024-07-14 04:02:24.153691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.326 [2024-07-14 04:02:24.153716] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.326 qpair failed and we were unable to recover it. 00:30:05.326 [2024-07-14 04:02:24.153871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.326 [2024-07-14 04:02:24.154029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.326 [2024-07-14 04:02:24.154053] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.326 qpair failed and we were unable to recover it. 00:30:05.326 [2024-07-14 04:02:24.154238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.326 [2024-07-14 04:02:24.154384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.326 [2024-07-14 04:02:24.154414] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.326 qpair failed and we were unable to recover it. 00:30:05.326 [2024-07-14 04:02:24.154553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.326 [2024-07-14 04:02:24.154699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.326 [2024-07-14 04:02:24.154725] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.326 qpair failed and we were unable to recover it. 00:30:05.326 [2024-07-14 04:02:24.154880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.326 [2024-07-14 04:02:24.155026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.326 [2024-07-14 04:02:24.155049] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.326 qpair failed and we were unable to recover it. 00:30:05.326 [2024-07-14 04:02:24.155224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.326 [2024-07-14 04:02:24.155368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.326 [2024-07-14 04:02:24.155392] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.326 qpair failed and we were unable to recover it. 00:30:05.326 [2024-07-14 04:02:24.155571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.326 [2024-07-14 04:02:24.155727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.326 [2024-07-14 04:02:24.155752] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.326 qpair failed and we were unable to recover it. 
00:30:05.326 [2024-07-14 04:02:24.155901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.326 [2024-07-14 04:02:24.156049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.326 [2024-07-14 04:02:24.156073] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.326 qpair failed and we were unable to recover it. 00:30:05.326 [2024-07-14 04:02:24.156244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.326 [2024-07-14 04:02:24.156399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.326 [2024-07-14 04:02:24.156425] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.326 qpair failed and we were unable to recover it. 00:30:05.326 [2024-07-14 04:02:24.156605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.326 [2024-07-14 04:02:24.156798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.326 [2024-07-14 04:02:24.156823] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.326 qpair failed and we were unable to recover it. 00:30:05.326 [2024-07-14 04:02:24.156977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.326 [2024-07-14 04:02:24.157157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.326 [2024-07-14 04:02:24.157181] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.326 qpair failed and we were unable to recover it. 00:30:05.326 [2024-07-14 04:02:24.157353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.326 [2024-07-14 04:02:24.157494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.326 [2024-07-14 04:02:24.157518] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.326 qpair failed and we were unable to recover it. 00:30:05.326 [2024-07-14 04:02:24.157721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.326 [2024-07-14 04:02:24.157895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.326 [2024-07-14 04:02:24.157924] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.326 qpair failed and we were unable to recover it. 00:30:05.326 [2024-07-14 04:02:24.158067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.326 [2024-07-14 04:02:24.158249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.326 [2024-07-14 04:02:24.158273] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.326 qpair failed and we were unable to recover it. 
00:30:05.326 [2024-07-14 04:02:24.158436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.326 [2024-07-14 04:02:24.158577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.326 [2024-07-14 04:02:24.158602] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.326 qpair failed and we were unable to recover it. 00:30:05.326 [2024-07-14 04:02:24.158749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.326 [2024-07-14 04:02:24.158921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.326 [2024-07-14 04:02:24.158945] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.326 qpair failed and we were unable to recover it. 00:30:05.326 [2024-07-14 04:02:24.159096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.326 [2024-07-14 04:02:24.159267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.326 [2024-07-14 04:02:24.159291] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.326 qpair failed and we were unable to recover it. 00:30:05.326 [2024-07-14 04:02:24.159444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.326 [2024-07-14 04:02:24.159596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.326 [2024-07-14 04:02:24.159620] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.326 qpair failed and we were unable to recover it. 00:30:05.326 [2024-07-14 04:02:24.159767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.326 [2024-07-14 04:02:24.159959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.326 [2024-07-14 04:02:24.159983] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.326 qpair failed and we were unable to recover it. 00:30:05.326 [2024-07-14 04:02:24.160146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.326 [2024-07-14 04:02:24.160302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.326 [2024-07-14 04:02:24.160326] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.326 qpair failed and we were unable to recover it. 00:30:05.326 [2024-07-14 04:02:24.160495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.326 [2024-07-14 04:02:24.160674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.326 [2024-07-14 04:02:24.160698] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.326 qpair failed and we were unable to recover it. 
00:30:05.326 [2024-07-14 04:02:24.160860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.326 [2024-07-14 04:02:24.161011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.326 [2024-07-14 04:02:24.161035] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.326 qpair failed and we were unable to recover it. 00:30:05.326 [2024-07-14 04:02:24.161242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.326 [2024-07-14 04:02:24.161384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.326 [2024-07-14 04:02:24.161409] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.326 qpair failed and we were unable to recover it. 00:30:05.326 [2024-07-14 04:02:24.161572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.326 [2024-07-14 04:02:24.161724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.326 [2024-07-14 04:02:24.161748] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.326 qpair failed and we were unable to recover it. 00:30:05.326 [2024-07-14 04:02:24.161943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.326 [2024-07-14 04:02:24.162089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.326 [2024-07-14 04:02:24.162114] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.326 qpair failed and we were unable to recover it. 00:30:05.326 [2024-07-14 04:02:24.162270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.326 [2024-07-14 04:02:24.162417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.326 [2024-07-14 04:02:24.162441] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.326 qpair failed and we were unable to recover it. 00:30:05.326 [2024-07-14 04:02:24.162614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.326 [2024-07-14 04:02:24.162775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.326 [2024-07-14 04:02:24.162799] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.326 qpair failed and we were unable to recover it. 00:30:05.326 [2024-07-14 04:02:24.162968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.326 [2024-07-14 04:02:24.163111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.326 [2024-07-14 04:02:24.163135] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.326 qpair failed and we were unable to recover it. 
00:30:05.326 [2024-07-14 04:02:24.163303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.326 [2024-07-14 04:02:24.163454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.326 [2024-07-14 04:02:24.163480] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.326 qpair failed and we were unable to recover it. 00:30:05.326 [2024-07-14 04:02:24.163645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.326 [2024-07-14 04:02:24.163821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.326 [2024-07-14 04:02:24.163845] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.326 qpair failed and we were unable to recover it. 00:30:05.326 [2024-07-14 04:02:24.164030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.326 [2024-07-14 04:02:24.164175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.326 [2024-07-14 04:02:24.164200] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.326 qpair failed and we were unable to recover it. 00:30:05.326 [2024-07-14 04:02:24.164348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.326 [2024-07-14 04:02:24.164531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.326 [2024-07-14 04:02:24.164557] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.326 qpair failed and we were unable to recover it. 00:30:05.326 [2024-07-14 04:02:24.164700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.326 [2024-07-14 04:02:24.164845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.326 [2024-07-14 04:02:24.164876] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.326 qpair failed and we were unable to recover it. 00:30:05.326 [2024-07-14 04:02:24.165080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.326 [2024-07-14 04:02:24.165251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.326 [2024-07-14 04:02:24.165275] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.326 qpair failed and we were unable to recover it. 00:30:05.326 [2024-07-14 04:02:24.165418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.326 [2024-07-14 04:02:24.165598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.326 [2024-07-14 04:02:24.165623] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.326 qpair failed and we were unable to recover it. 
00:30:05.326 [2024-07-14 04:02:24.165774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.326 [2024-07-14 04:02:24.165944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.326 [2024-07-14 04:02:24.165969] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.326 qpair failed and we were unable to recover it. 00:30:05.326 [2024-07-14 04:02:24.166122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.326 [2024-07-14 04:02:24.166264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.326 [2024-07-14 04:02:24.166289] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.326 qpair failed and we were unable to recover it. 00:30:05.326 [2024-07-14 04:02:24.166468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.326 [2024-07-14 04:02:24.166631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.326 [2024-07-14 04:02:24.166656] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.326 qpair failed and we were unable to recover it. 00:30:05.326 [2024-07-14 04:02:24.166813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.326 [2024-07-14 04:02:24.166960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.326 [2024-07-14 04:02:24.166985] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.326 qpair failed and we were unable to recover it. 00:30:05.326 [2024-07-14 04:02:24.167136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.326 [2024-07-14 04:02:24.167284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.326 [2024-07-14 04:02:24.167308] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.326 qpair failed and we were unable to recover it. 00:30:05.326 [2024-07-14 04:02:24.167456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.326 [2024-07-14 04:02:24.167628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.326 [2024-07-14 04:02:24.167652] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.326 qpair failed and we were unable to recover it. 00:30:05.326 [2024-07-14 04:02:24.167824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.326 [2024-07-14 04:02:24.168001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.327 [2024-07-14 04:02:24.168026] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.327 qpair failed and we were unable to recover it. 
00:30:05.327 [2024-07-14 04:02:24.168214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.327 [2024-07-14 04:02:24.168384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.327 [2024-07-14 04:02:24.168409] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.327 qpair failed and we were unable to recover it. 00:30:05.327 [2024-07-14 04:02:24.168556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.327 [2024-07-14 04:02:24.168714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.327 [2024-07-14 04:02:24.168739] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.327 qpair failed and we were unable to recover it. 00:30:05.327 [2024-07-14 04:02:24.168903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.327 [2024-07-14 04:02:24.169073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.327 [2024-07-14 04:02:24.169097] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.327 qpair failed and we were unable to recover it. 00:30:05.327 [2024-07-14 04:02:24.169240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.327 [2024-07-14 04:02:24.169386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.327 [2024-07-14 04:02:24.169410] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.327 qpair failed and we were unable to recover it. 00:30:05.327 [2024-07-14 04:02:24.169551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.327 [2024-07-14 04:02:24.169719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.327 [2024-07-14 04:02:24.169743] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.327 qpair failed and we were unable to recover it. 00:30:05.327 [2024-07-14 04:02:24.169908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.327 [2024-07-14 04:02:24.170065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.327 [2024-07-14 04:02:24.170090] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.327 qpair failed and we were unable to recover it. 00:30:05.327 [2024-07-14 04:02:24.170238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.327 [2024-07-14 04:02:24.170410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.327 [2024-07-14 04:02:24.170435] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.327 qpair failed and we were unable to recover it. 
00:30:05.327 [2024-07-14 04:02:24.170579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.327 [2024-07-14 04:02:24.170772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.327 [2024-07-14 04:02:24.170797] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.327 qpair failed and we were unable to recover it. 00:30:05.327 [2024-07-14 04:02:24.170950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.327 [2024-07-14 04:02:24.171117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.327 [2024-07-14 04:02:24.171142] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.327 qpair failed and we were unable to recover it. 00:30:05.327 [2024-07-14 04:02:24.171299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.327 [2024-07-14 04:02:24.171470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.327 [2024-07-14 04:02:24.171495] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.327 qpair failed and we were unable to recover it. 00:30:05.327 [2024-07-14 04:02:24.171646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.327 [2024-07-14 04:02:24.171798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.327 [2024-07-14 04:02:24.171822] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.327 qpair failed and we were unable to recover it. 00:30:05.327 [2024-07-14 04:02:24.172018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.327 [2024-07-14 04:02:24.172171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.327 [2024-07-14 04:02:24.172195] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.327 qpair failed and we were unable to recover it. 00:30:05.327 [2024-07-14 04:02:24.172341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.327 [2024-07-14 04:02:24.172485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.327 [2024-07-14 04:02:24.172509] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.327 qpair failed and we were unable to recover it. 00:30:05.327 [2024-07-14 04:02:24.172683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.327 [2024-07-14 04:02:24.172837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.327 [2024-07-14 04:02:24.172860] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.327 qpair failed and we were unable to recover it. 
00:30:05.327 [2024-07-14 04:02:24.173016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.327 [2024-07-14 04:02:24.173195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.327 [2024-07-14 04:02:24.173219] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.327 qpair failed and we were unable to recover it. 00:30:05.327 [2024-07-14 04:02:24.173390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.327 [2024-07-14 04:02:24.173538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.327 [2024-07-14 04:02:24.173562] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.327 qpair failed and we were unable to recover it. 00:30:05.327 [2024-07-14 04:02:24.173723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.327 [2024-07-14 04:02:24.173901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.327 [2024-07-14 04:02:24.173927] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.327 qpair failed and we were unable to recover it. 00:30:05.327 [2024-07-14 04:02:24.174073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.327 [2024-07-14 04:02:24.174227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.327 [2024-07-14 04:02:24.174251] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.327 qpair failed and we were unable to recover it. 00:30:05.327 [2024-07-14 04:02:24.174395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.327 [2024-07-14 04:02:24.174541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.327 [2024-07-14 04:02:24.174566] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.327 qpair failed and we were unable to recover it. 00:30:05.327 [2024-07-14 04:02:24.174747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.327 [2024-07-14 04:02:24.174895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.327 [2024-07-14 04:02:24.174926] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.327 qpair failed and we were unable to recover it. 00:30:05.327 [2024-07-14 04:02:24.175102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.327 [2024-07-14 04:02:24.175249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.327 [2024-07-14 04:02:24.175274] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.327 qpair failed and we were unable to recover it. 
00:30:05.327 [2024-07-14 04:02:24.175469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.327 [2024-07-14 04:02:24.175625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.327 [2024-07-14 04:02:24.175650] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.327 qpair failed and we were unable to recover it. 00:30:05.327 [2024-07-14 04:02:24.175854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.327 [2024-07-14 04:02:24.176032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.327 [2024-07-14 04:02:24.176055] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.327 qpair failed and we were unable to recover it. 00:30:05.327 [2024-07-14 04:02:24.176196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.327 [2024-07-14 04:02:24.176342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.327 [2024-07-14 04:02:24.176366] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.327 qpair failed and we were unable to recover it. 00:30:05.327 [2024-07-14 04:02:24.176533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.327 [2024-07-14 04:02:24.176725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.327 [2024-07-14 04:02:24.176750] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.327 qpair failed and we were unable to recover it. 00:30:05.327 [2024-07-14 04:02:24.176921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.327 [2024-07-14 04:02:24.177097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.327 [2024-07-14 04:02:24.177122] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.327 qpair failed and we were unable to recover it. 00:30:05.327 [2024-07-14 04:02:24.177304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.327 [2024-07-14 04:02:24.177471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.327 [2024-07-14 04:02:24.177495] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.327 qpair failed and we were unable to recover it. 00:30:05.327 [2024-07-14 04:02:24.177656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.327 [2024-07-14 04:02:24.177854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.327 [2024-07-14 04:02:24.177884] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.327 qpair failed and we were unable to recover it. 
00:30:05.327 [2024-07-14 04:02:24.178051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.327 [2024-07-14 04:02:24.178229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.327 [2024-07-14 04:02:24.178253] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.327 qpair failed and we were unable to recover it. 00:30:05.327 [2024-07-14 04:02:24.178416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.327 [2024-07-14 04:02:24.178567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.327 [2024-07-14 04:02:24.178592] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.327 qpair failed and we were unable to recover it. 00:30:05.327 [2024-07-14 04:02:24.178733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.327 [2024-07-14 04:02:24.178888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.327 [2024-07-14 04:02:24.178914] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.327 qpair failed and we were unable to recover it. 00:30:05.327 [2024-07-14 04:02:24.179099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.327 [2024-07-14 04:02:24.179275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.327 [2024-07-14 04:02:24.179301] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.327 qpair failed and we were unable to recover it. 00:30:05.327 [2024-07-14 04:02:24.179470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.327 [2024-07-14 04:02:24.179675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.327 [2024-07-14 04:02:24.179699] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.327 qpair failed and we were unable to recover it. 00:30:05.327 [2024-07-14 04:02:24.179852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.327 [2024-07-14 04:02:24.180032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.327 [2024-07-14 04:02:24.180057] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.327 qpair failed and we were unable to recover it. 00:30:05.327 [2024-07-14 04:02:24.180223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.327 [2024-07-14 04:02:24.180404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.327 [2024-07-14 04:02:24.180429] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.327 qpair failed and we were unable to recover it. 
00:30:05.327 [2024-07-14 04:02:24.180591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.327 [2024-07-14 04:02:24.180764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.327 [2024-07-14 04:02:24.180789] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.327 qpair failed and we were unable to recover it. 00:30:05.327 [2024-07-14 04:02:24.180942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.327 [2024-07-14 04:02:24.181104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.327 [2024-07-14 04:02:24.181129] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.327 qpair failed and we were unable to recover it. 00:30:05.327 [2024-07-14 04:02:24.181320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.327 [2024-07-14 04:02:24.181486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.327 [2024-07-14 04:02:24.181510] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.327 qpair failed and we were unable to recover it. 00:30:05.327 [2024-07-14 04:02:24.181663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.327 [2024-07-14 04:02:24.181805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.327 [2024-07-14 04:02:24.181829] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.327 qpair failed and we were unable to recover it. 00:30:05.327 [2024-07-14 04:02:24.182002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.327 [2024-07-14 04:02:24.182148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.327 [2024-07-14 04:02:24.182172] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.327 qpair failed and we were unable to recover it. 00:30:05.327 [2024-07-14 04:02:24.182342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.327 [2024-07-14 04:02:24.182486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.327 [2024-07-14 04:02:24.182510] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.327 qpair failed and we were unable to recover it. 00:30:05.327 [2024-07-14 04:02:24.182667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.327 [2024-07-14 04:02:24.182823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.327 [2024-07-14 04:02:24.182848] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.327 qpair failed and we were unable to recover it. 
00:30:05.327 [2024-07-14 04:02:24.183043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.327 [2024-07-14 04:02:24.183244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.327 [2024-07-14 04:02:24.183268] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.327 qpair failed and we were unable to recover it. 00:30:05.327 [2024-07-14 04:02:24.183418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.327 [2024-07-14 04:02:24.183590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.327 [2024-07-14 04:02:24.183615] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.327 qpair failed and we were unable to recover it. 00:30:05.327 [2024-07-14 04:02:24.183765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.327 [2024-07-14 04:02:24.183953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.327 [2024-07-14 04:02:24.183978] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.327 qpair failed and we were unable to recover it. 00:30:05.327 [2024-07-14 04:02:24.184127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.327 [2024-07-14 04:02:24.184280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.327 [2024-07-14 04:02:24.184304] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.327 qpair failed and we were unable to recover it. 00:30:05.327 [2024-07-14 04:02:24.184485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.327 [2024-07-14 04:02:24.184661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.327 [2024-07-14 04:02:24.184685] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.327 qpair failed and we were unable to recover it. 00:30:05.327 [2024-07-14 04:02:24.184841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.327 [2024-07-14 04:02:24.185028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.327 [2024-07-14 04:02:24.185053] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.327 qpair failed and we were unable to recover it. 00:30:05.327 [2024-07-14 04:02:24.185193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.327 [2024-07-14 04:02:24.185333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.327 [2024-07-14 04:02:24.185357] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.327 qpair failed and we were unable to recover it. 
00:30:05.327 [2024-07-14 04:02:24.185530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.327 [2024-07-14 04:02:24.185699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.327 [2024-07-14 04:02:24.185723] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.327 qpair failed and we were unable to recover it. 00:30:05.327 [2024-07-14 04:02:24.185872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.327 [2024-07-14 04:02:24.186047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.327 [2024-07-14 04:02:24.186072] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.327 qpair failed and we were unable to recover it. 00:30:05.327 [2024-07-14 04:02:24.186262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.327 [2024-07-14 04:02:24.186424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.327 [2024-07-14 04:02:24.186448] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.327 qpair failed and we were unable to recover it. 00:30:05.327 [2024-07-14 04:02:24.186598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.327 [2024-07-14 04:02:24.186780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.327 [2024-07-14 04:02:24.186805] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.327 qpair failed and we were unable to recover it. 00:30:05.327 [2024-07-14 04:02:24.186966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.327 [2024-07-14 04:02:24.187129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.327 [2024-07-14 04:02:24.187154] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.328 qpair failed and we were unable to recover it. 00:30:05.328 [2024-07-14 04:02:24.187328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.328 [2024-07-14 04:02:24.187469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.328 [2024-07-14 04:02:24.187493] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.328 qpair failed and we were unable to recover it. 00:30:05.328 [2024-07-14 04:02:24.187659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.328 [2024-07-14 04:02:24.187820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.328 [2024-07-14 04:02:24.187845] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.328 qpair failed and we were unable to recover it. 
00:30:05.328 [... the same three-line failure sequence (posix_sock_create: connect() failed, errno = 111 twice, then nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420) followed by "qpair failed and we were unable to recover it." repeats for every subsequent connection attempt, with timestamps advancing from 04:02:24.188 through 04:02:24.239 ...]
00:30:05.331 [2024-07-14 04:02:24.239625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.331 [2024-07-14 04:02:24.239772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.331 [2024-07-14 04:02:24.239795] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.331 qpair failed and we were unable to recover it. 00:30:05.331 [2024-07-14 04:02:24.239969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.331 [2024-07-14 04:02:24.240145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.331 [2024-07-14 04:02:24.240171] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.331 qpair failed and we were unable to recover it. 00:30:05.331 [2024-07-14 04:02:24.240329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.331 [2024-07-14 04:02:24.240519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.331 [2024-07-14 04:02:24.240543] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.331 qpair failed and we were unable to recover it. 00:30:05.331 [2024-07-14 04:02:24.240692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.331 [2024-07-14 04:02:24.240884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.331 [2024-07-14 04:02:24.240909] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.331 qpair failed and we were unable to recover it. 00:30:05.331 [2024-07-14 04:02:24.241088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.331 [2024-07-14 04:02:24.241233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.331 [2024-07-14 04:02:24.241258] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.331 qpair failed and we were unable to recover it. 00:30:05.618 [2024-07-14 04:02:24.241419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.618 [2024-07-14 04:02:24.241563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.618 [2024-07-14 04:02:24.241587] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.618 qpair failed and we were unable to recover it. 00:30:05.618 [2024-07-14 04:02:24.241743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.618 [2024-07-14 04:02:24.241901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.618 [2024-07-14 04:02:24.241926] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.618 qpair failed and we were unable to recover it. 
00:30:05.618 [2024-07-14 04:02:24.242070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.618 [2024-07-14 04:02:24.242217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.618 [2024-07-14 04:02:24.242242] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.618 qpair failed and we were unable to recover it. 00:30:05.618 [2024-07-14 04:02:24.242424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.618 [2024-07-14 04:02:24.242577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.618 [2024-07-14 04:02:24.242603] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.618 qpair failed and we were unable to recover it. 00:30:05.618 [2024-07-14 04:02:24.242765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.618 [2024-07-14 04:02:24.242919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.618 [2024-07-14 04:02:24.242945] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.618 qpair failed and we were unable to recover it. 00:30:05.618 [2024-07-14 04:02:24.243095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.618 [2024-07-14 04:02:24.243269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.618 [2024-07-14 04:02:24.243296] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.618 qpair failed and we were unable to recover it. 00:30:05.618 [2024-07-14 04:02:24.243475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.618 [2024-07-14 04:02:24.243616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.618 [2024-07-14 04:02:24.243640] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.618 qpair failed and we were unable to recover it. 00:30:05.618 [2024-07-14 04:02:24.243792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.618 [2024-07-14 04:02:24.243938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.618 [2024-07-14 04:02:24.243970] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.618 qpair failed and we were unable to recover it. 00:30:05.618 [2024-07-14 04:02:24.244163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.618 [2024-07-14 04:02:24.244306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.618 [2024-07-14 04:02:24.244330] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.618 qpair failed and we were unable to recover it. 
00:30:05.618 [2024-07-14 04:02:24.244496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.618 [2024-07-14 04:02:24.244704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.618 [2024-07-14 04:02:24.244728] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.618 qpair failed and we were unable to recover it. 00:30:05.618 [2024-07-14 04:02:24.244880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.618 [2024-07-14 04:02:24.245023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.618 [2024-07-14 04:02:24.245047] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.618 qpair failed and we were unable to recover it. 00:30:05.618 [2024-07-14 04:02:24.245245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.618 [2024-07-14 04:02:24.245389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.618 [2024-07-14 04:02:24.245414] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.618 qpair failed and we were unable to recover it. 00:30:05.618 [2024-07-14 04:02:24.245609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.618 [2024-07-14 04:02:24.245748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.618 [2024-07-14 04:02:24.245772] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.618 qpair failed and we were unable to recover it. 00:30:05.618 [2024-07-14 04:02:24.245956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.618 [2024-07-14 04:02:24.246135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.618 [2024-07-14 04:02:24.246159] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.618 qpair failed and we were unable to recover it. 00:30:05.618 [2024-07-14 04:02:24.246330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.618 [2024-07-14 04:02:24.246487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.618 [2024-07-14 04:02:24.246511] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.618 qpair failed and we were unable to recover it. 00:30:05.618 [2024-07-14 04:02:24.246658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.618 [2024-07-14 04:02:24.246847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.618 [2024-07-14 04:02:24.246888] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.618 qpair failed and we were unable to recover it. 
00:30:05.618 [2024-07-14 04:02:24.247073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.618 [2024-07-14 04:02:24.247232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.618 [2024-07-14 04:02:24.247255] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.618 qpair failed and we were unable to recover it. 00:30:05.618 [2024-07-14 04:02:24.247407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.618 [2024-07-14 04:02:24.247553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.618 [2024-07-14 04:02:24.247577] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.618 qpair failed and we were unable to recover it. 00:30:05.618 [2024-07-14 04:02:24.247726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.618 [2024-07-14 04:02:24.247907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.618 [2024-07-14 04:02:24.247932] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.618 qpair failed and we were unable to recover it. 00:30:05.618 [2024-07-14 04:02:24.248103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.618 [2024-07-14 04:02:24.248267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.618 [2024-07-14 04:02:24.248291] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.618 qpair failed and we were unable to recover it. 00:30:05.618 [2024-07-14 04:02:24.248440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.618 [2024-07-14 04:02:24.248602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.618 [2024-07-14 04:02:24.248625] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.618 qpair failed and we were unable to recover it. 00:30:05.618 [2024-07-14 04:02:24.248800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.618 [2024-07-14 04:02:24.248939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.618 [2024-07-14 04:02:24.248965] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.618 qpair failed and we were unable to recover it. 00:30:05.618 [2024-07-14 04:02:24.249111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.618 [2024-07-14 04:02:24.249309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.618 [2024-07-14 04:02:24.249334] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.618 qpair failed and we were unable to recover it. 
00:30:05.618 [2024-07-14 04:02:24.249511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.618 [2024-07-14 04:02:24.249681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.618 [2024-07-14 04:02:24.249705] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.618 qpair failed and we were unable to recover it. 00:30:05.618 [2024-07-14 04:02:24.249853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.618 [2024-07-14 04:02:24.250016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.618 [2024-07-14 04:02:24.250041] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.618 qpair failed and we were unable to recover it. 00:30:05.618 [2024-07-14 04:02:24.250184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.618 [2024-07-14 04:02:24.250360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.618 [2024-07-14 04:02:24.250385] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.618 qpair failed and we were unable to recover it. 00:30:05.618 [2024-07-14 04:02:24.250560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.618 [2024-07-14 04:02:24.250700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.618 [2024-07-14 04:02:24.250724] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.618 qpair failed and we were unable to recover it. 00:30:05.618 [2024-07-14 04:02:24.250900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.618 [2024-07-14 04:02:24.251041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.618 [2024-07-14 04:02:24.251066] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.618 qpair failed and we were unable to recover it. 00:30:05.618 [2024-07-14 04:02:24.251213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.618 [2024-07-14 04:02:24.251377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.618 [2024-07-14 04:02:24.251401] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.618 qpair failed and we were unable to recover it. 00:30:05.618 [2024-07-14 04:02:24.251594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.618 [2024-07-14 04:02:24.251733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.618 [2024-07-14 04:02:24.251757] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.618 qpair failed and we were unable to recover it. 
00:30:05.618 [2024-07-14 04:02:24.251926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.618 [2024-07-14 04:02:24.252072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.618 [2024-07-14 04:02:24.252096] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.618 qpair failed and we were unable to recover it. 00:30:05.618 [2024-07-14 04:02:24.252249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.618 [2024-07-14 04:02:24.252423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.618 [2024-07-14 04:02:24.252447] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.618 qpair failed and we were unable to recover it. 00:30:05.618 [2024-07-14 04:02:24.252621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.618 [2024-07-14 04:02:24.252761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.618 [2024-07-14 04:02:24.252784] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.618 qpair failed and we were unable to recover it. 00:30:05.618 [2024-07-14 04:02:24.252948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.618 [2024-07-14 04:02:24.253094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.618 [2024-07-14 04:02:24.253119] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.618 qpair failed and we were unable to recover it. 00:30:05.618 [2024-07-14 04:02:24.253265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.618 [2024-07-14 04:02:24.253413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.618 [2024-07-14 04:02:24.253436] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.618 qpair failed and we were unable to recover it. 00:30:05.618 [2024-07-14 04:02:24.253608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.618 [2024-07-14 04:02:24.253753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.618 [2024-07-14 04:02:24.253776] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.618 qpair failed and we were unable to recover it. 00:30:05.618 [2024-07-14 04:02:24.253971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.618 [2024-07-14 04:02:24.254114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.618 [2024-07-14 04:02:24.254139] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.618 qpair failed and we were unable to recover it. 
00:30:05.618 [2024-07-14 04:02:24.254312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.618 [2024-07-14 04:02:24.254462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.618 [2024-07-14 04:02:24.254486] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.618 qpair failed and we were unable to recover it. 00:30:05.618 [2024-07-14 04:02:24.254628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.618 [2024-07-14 04:02:24.254829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.618 [2024-07-14 04:02:24.254853] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.618 qpair failed and we were unable to recover it. 00:30:05.618 [2024-07-14 04:02:24.255029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.618 [2024-07-14 04:02:24.255183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.618 [2024-07-14 04:02:24.255210] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.618 qpair failed and we were unable to recover it. 00:30:05.618 [2024-07-14 04:02:24.255357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.618 [2024-07-14 04:02:24.255504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.618 [2024-07-14 04:02:24.255528] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.618 qpair failed and we were unable to recover it. 00:30:05.618 [2024-07-14 04:02:24.255671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.618 [2024-07-14 04:02:24.255881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.618 [2024-07-14 04:02:24.255907] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.618 qpair failed and we were unable to recover it. 00:30:05.618 [2024-07-14 04:02:24.256078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.618 [2024-07-14 04:02:24.256237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.618 [2024-07-14 04:02:24.256262] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.618 qpair failed and we were unable to recover it. 00:30:05.618 [2024-07-14 04:02:24.256403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.618 [2024-07-14 04:02:24.256566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.618 [2024-07-14 04:02:24.256591] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.618 qpair failed and we were unable to recover it. 
00:30:05.618 [2024-07-14 04:02:24.256773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.618 [2024-07-14 04:02:24.256943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.618 [2024-07-14 04:02:24.256972] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.618 qpair failed and we were unable to recover it. 00:30:05.618 [2024-07-14 04:02:24.257138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.618 [2024-07-14 04:02:24.257313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.618 [2024-07-14 04:02:24.257338] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.618 qpair failed and we were unable to recover it. 00:30:05.618 [2024-07-14 04:02:24.257476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.618 [2024-07-14 04:02:24.257647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.618 [2024-07-14 04:02:24.257671] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.618 qpair failed and we were unable to recover it. 00:30:05.618 [2024-07-14 04:02:24.257849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.618 [2024-07-14 04:02:24.258026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.618 [2024-07-14 04:02:24.258051] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.618 qpair failed and we were unable to recover it. 00:30:05.618 [2024-07-14 04:02:24.258204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.618 [2024-07-14 04:02:24.258350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.618 [2024-07-14 04:02:24.258374] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.618 qpair failed and we were unable to recover it. 00:30:05.618 [2024-07-14 04:02:24.258517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.618 [2024-07-14 04:02:24.258689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.618 [2024-07-14 04:02:24.258713] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.618 qpair failed and we were unable to recover it. 00:30:05.619 [2024-07-14 04:02:24.258856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.619 [2024-07-14 04:02:24.259022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.619 [2024-07-14 04:02:24.259047] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.619 qpair failed and we were unable to recover it. 
00:30:05.619 [2024-07-14 04:02:24.259216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.619 [2024-07-14 04:02:24.259363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.619 [2024-07-14 04:02:24.259387] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.619 qpair failed and we were unable to recover it. 00:30:05.619 [2024-07-14 04:02:24.259560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.619 [2024-07-14 04:02:24.259714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.619 [2024-07-14 04:02:24.259738] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.619 qpair failed and we were unable to recover it. 00:30:05.619 [2024-07-14 04:02:24.259884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.619 [2024-07-14 04:02:24.260039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.619 [2024-07-14 04:02:24.260063] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.619 qpair failed and we were unable to recover it. 00:30:05.619 [2024-07-14 04:02:24.260223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.619 [2024-07-14 04:02:24.260395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.619 [2024-07-14 04:02:24.260422] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.619 qpair failed and we were unable to recover it. 00:30:05.619 [2024-07-14 04:02:24.260568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.619 [2024-07-14 04:02:24.260714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.619 [2024-07-14 04:02:24.260739] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.619 qpair failed and we were unable to recover it. 00:30:05.619 [2024-07-14 04:02:24.260881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.619 [2024-07-14 04:02:24.261030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.619 [2024-07-14 04:02:24.261053] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.619 qpair failed and we were unable to recover it. 00:30:05.619 [2024-07-14 04:02:24.261205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.619 [2024-07-14 04:02:24.261372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.619 [2024-07-14 04:02:24.261397] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.619 qpair failed and we were unable to recover it. 
00:30:05.619 [2024-07-14 04:02:24.261591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.619 [2024-07-14 04:02:24.261743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.619 [2024-07-14 04:02:24.261766] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.619 qpair failed and we were unable to recover it. 00:30:05.619 [2024-07-14 04:02:24.261961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.619 [2024-07-14 04:02:24.262117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.619 [2024-07-14 04:02:24.262141] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.619 qpair failed and we were unable to recover it. 00:30:05.619 [2024-07-14 04:02:24.262340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.619 [2024-07-14 04:02:24.262545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.619 [2024-07-14 04:02:24.262569] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.619 qpair failed and we were unable to recover it. 00:30:05.619 [2024-07-14 04:02:24.262716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.619 [2024-07-14 04:02:24.262863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.619 [2024-07-14 04:02:24.262894] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.619 qpair failed and we were unable to recover it. 00:30:05.619 [2024-07-14 04:02:24.263072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.619 [2024-07-14 04:02:24.263217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.619 [2024-07-14 04:02:24.263242] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.619 qpair failed and we were unable to recover it. 00:30:05.619 [2024-07-14 04:02:24.263383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.619 [2024-07-14 04:02:24.263582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.619 [2024-07-14 04:02:24.263606] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.619 qpair failed and we were unable to recover it. 00:30:05.619 [2024-07-14 04:02:24.263753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.619 [2024-07-14 04:02:24.263900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.619 [2024-07-14 04:02:24.263929] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.619 qpair failed and we were unable to recover it. 
00:30:05.619 [2024-07-14 04:02:24.264107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.619 [2024-07-14 04:02:24.264267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.619 [2024-07-14 04:02:24.264293] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.619 qpair failed and we were unable to recover it. 00:30:05.619 [2024-07-14 04:02:24.264458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.619 [2024-07-14 04:02:24.264644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.619 [2024-07-14 04:02:24.264669] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.619 qpair failed and we were unable to recover it. 00:30:05.619 [2024-07-14 04:02:24.264846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.619 [2024-07-14 04:02:24.265019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.619 [2024-07-14 04:02:24.265045] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.619 qpair failed and we were unable to recover it. 00:30:05.619 [2024-07-14 04:02:24.265195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.619 [2024-07-14 04:02:24.265358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.619 [2024-07-14 04:02:24.265384] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.619 qpair failed and we were unable to recover it. 00:30:05.619 [2024-07-14 04:02:24.265551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.619 [2024-07-14 04:02:24.265727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.619 [2024-07-14 04:02:24.265751] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.619 qpair failed and we were unable to recover it. 00:30:05.619 [2024-07-14 04:02:24.265928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.619 [2024-07-14 04:02:24.266102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.619 [2024-07-14 04:02:24.266126] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.619 qpair failed and we were unable to recover it. 00:30:05.619 [2024-07-14 04:02:24.266280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.619 [2024-07-14 04:02:24.266422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.619 [2024-07-14 04:02:24.266447] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.619 qpair failed and we were unable to recover it. 
00:30:05.619 [2024-07-14 04:02:24.266591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.619 [2024-07-14 04:02:24.266764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.619 [2024-07-14 04:02:24.266789] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.619 qpair failed and we were unable to recover it. 00:30:05.619 [2024-07-14 04:02:24.266936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.619 [2024-07-14 04:02:24.267089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.619 [2024-07-14 04:02:24.267113] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.619 qpair failed and we were unable to recover it. 00:30:05.619 [2024-07-14 04:02:24.267288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.619 [2024-07-14 04:02:24.267478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.619 [2024-07-14 04:02:24.267507] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.619 qpair failed and we were unable to recover it. 00:30:05.619 [2024-07-14 04:02:24.267680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.619 [2024-07-14 04:02:24.267832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.619 [2024-07-14 04:02:24.267856] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.619 qpair failed and we were unable to recover it. 00:30:05.619 [2024-07-14 04:02:24.268023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.619 [2024-07-14 04:02:24.268180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.619 [2024-07-14 04:02:24.268206] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.619 qpair failed and we were unable to recover it. 00:30:05.619 [2024-07-14 04:02:24.268350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.619 [2024-07-14 04:02:24.268486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.619 [2024-07-14 04:02:24.268511] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.619 qpair failed and we were unable to recover it. 00:30:05.619 [2024-07-14 04:02:24.268686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.619 [2024-07-14 04:02:24.268827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.619 [2024-07-14 04:02:24.268852] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.619 qpair failed and we were unable to recover it. 
00:30:05.619 [2024-07-14 04:02:24.269009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.619 [2024-07-14 04:02:24.269177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.619 [2024-07-14 04:02:24.269201] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.619 qpair failed and we were unable to recover it. 00:30:05.619 [2024-07-14 04:02:24.269353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.619 [2024-07-14 04:02:24.269517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.619 [2024-07-14 04:02:24.269542] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.619 qpair failed and we were unable to recover it. 00:30:05.619 [2024-07-14 04:02:24.269682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.619 [2024-07-14 04:02:24.269854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.619 [2024-07-14 04:02:24.269887] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.619 qpair failed and we were unable to recover it. 00:30:05.619 [2024-07-14 04:02:24.270036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.619 [2024-07-14 04:02:24.270184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.619 [2024-07-14 04:02:24.270209] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.619 qpair failed and we were unable to recover it. 00:30:05.619 [2024-07-14 04:02:24.270348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.619 [2024-07-14 04:02:24.270523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.619 [2024-07-14 04:02:24.270550] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.619 qpair failed and we were unable to recover it. 00:30:05.619 [2024-07-14 04:02:24.270739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.619 [2024-07-14 04:02:24.270888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.619 [2024-07-14 04:02:24.270923] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.619 qpair failed and we were unable to recover it. 00:30:05.619 [2024-07-14 04:02:24.271101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.619 [2024-07-14 04:02:24.271261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.619 [2024-07-14 04:02:24.271285] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.619 qpair failed and we were unable to recover it. 
00:30:05.619 [2024-07-14 04:02:24.271471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.619 [2024-07-14 04:02:24.271622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.619 [2024-07-14 04:02:24.271648] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.619 qpair failed and we were unable to recover it. 00:30:05.619 [2024-07-14 04:02:24.271792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.619 [2024-07-14 04:02:24.271936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.619 [2024-07-14 04:02:24.271961] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.619 qpair failed and we were unable to recover it. 00:30:05.619 [2024-07-14 04:02:24.272134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.619 [2024-07-14 04:02:24.272313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.619 [2024-07-14 04:02:24.272338] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.619 qpair failed and we were unable to recover it. 00:30:05.619 [2024-07-14 04:02:24.272531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.619 [2024-07-14 04:02:24.272669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.619 [2024-07-14 04:02:24.272693] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.619 qpair failed and we were unable to recover it. 00:30:05.619 [2024-07-14 04:02:24.272872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.619 [2024-07-14 04:02:24.273042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.619 [2024-07-14 04:02:24.273066] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.619 qpair failed and we were unable to recover it. 00:30:05.619 [2024-07-14 04:02:24.273228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.619 [2024-07-14 04:02:24.273376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.619 [2024-07-14 04:02:24.273400] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.619 qpair failed and we were unable to recover it. 00:30:05.619 [2024-07-14 04:02:24.273562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.619 [2024-07-14 04:02:24.273712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.619 [2024-07-14 04:02:24.273738] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.619 qpair failed and we were unable to recover it. 
00:30:05.619 [2024-07-14 04:02:24.273913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.619 [2024-07-14 04:02:24.274067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.619 [2024-07-14 04:02:24.274092] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.619 qpair failed and we were unable to recover it. 00:30:05.619 [2024-07-14 04:02:24.274286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.619 [2024-07-14 04:02:24.274489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.619 [2024-07-14 04:02:24.274513] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.619 qpair failed and we were unable to recover it. 00:30:05.619 [2024-07-14 04:02:24.274694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.619 [2024-07-14 04:02:24.274836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.619 [2024-07-14 04:02:24.274860] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.619 qpair failed and we were unable to recover it. 00:30:05.619 [2024-07-14 04:02:24.275043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.619 [2024-07-14 04:02:24.275193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.619 [2024-07-14 04:02:24.275217] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.619 qpair failed and we were unable to recover it. 00:30:05.619 [2024-07-14 04:02:24.275376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.619 [2024-07-14 04:02:24.275518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.619 [2024-07-14 04:02:24.275543] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.619 qpair failed and we were unable to recover it. 00:30:05.619 [2024-07-14 04:02:24.275722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.619 [2024-07-14 04:02:24.275899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.619 [2024-07-14 04:02:24.275924] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.619 qpair failed and we were unable to recover it. 00:30:05.619 [2024-07-14 04:02:24.276071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.619 [2024-07-14 04:02:24.276216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.619 [2024-07-14 04:02:24.276240] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.619 qpair failed and we were unable to recover it. 
00:30:05.619 [2024-07-14 04:02:24.276386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.619 [2024-07-14 04:02:24.276565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.619 [2024-07-14 04:02:24.276590] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.619 qpair failed and we were unable to recover it. 00:30:05.619 [2024-07-14 04:02:24.276746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.619 [2024-07-14 04:02:24.276897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.619 [2024-07-14 04:02:24.276926] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.619 qpair failed and we were unable to recover it. 00:30:05.619 [2024-07-14 04:02:24.277107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.619 [2024-07-14 04:02:24.277255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.619 [2024-07-14 04:02:24.277278] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.619 qpair failed and we were unable to recover it. 00:30:05.619 [2024-07-14 04:02:24.277431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.619 [2024-07-14 04:02:24.277601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.619 [2024-07-14 04:02:24.277625] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.619 qpair failed and we were unable to recover it. 00:30:05.619 [2024-07-14 04:02:24.277776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.619 [2024-07-14 04:02:24.277969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.619 [2024-07-14 04:02:24.277999] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.619 qpair failed and we were unable to recover it. 00:30:05.619 [2024-07-14 04:02:24.278156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.619 [2024-07-14 04:02:24.278329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.619 [2024-07-14 04:02:24.278352] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.619 qpair failed and we were unable to recover it. 00:30:05.619 [2024-07-14 04:02:24.278496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.619 [2024-07-14 04:02:24.278678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.619 [2024-07-14 04:02:24.278703] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.619 qpair failed and we were unable to recover it. 
00:30:05.620 [2024-07-14 04:02:24.278846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.620 [2024-07-14 04:02:24.279002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.620 [2024-07-14 04:02:24.279030] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.620 qpair failed and we were unable to recover it. 00:30:05.620 [2024-07-14 04:02:24.279179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.620 [2024-07-14 04:02:24.279336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.620 [2024-07-14 04:02:24.279360] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.620 qpair failed and we were unable to recover it. 00:30:05.620 [2024-07-14 04:02:24.279515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.620 [2024-07-14 04:02:24.279656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.620 [2024-07-14 04:02:24.279679] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.620 qpair failed and we were unable to recover it. 00:30:05.620 [2024-07-14 04:02:24.279832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.620 [2024-07-14 04:02:24.279997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.620 [2024-07-14 04:02:24.280024] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.620 qpair failed and we were unable to recover it. 00:30:05.620 [2024-07-14 04:02:24.280198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.620 [2024-07-14 04:02:24.280338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.620 [2024-07-14 04:02:24.280362] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.620 qpair failed and we were unable to recover it. 00:30:05.620 [2024-07-14 04:02:24.280505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.620 [2024-07-14 04:02:24.280678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.620 [2024-07-14 04:02:24.280701] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.620 qpair failed and we were unable to recover it. 00:30:05.620 [2024-07-14 04:02:24.280856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.620 [2024-07-14 04:02:24.281050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.620 [2024-07-14 04:02:24.281074] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.620 qpair failed and we were unable to recover it. 
00:30:05.620 [2024-07-14 04:02:24.281225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.620 [2024-07-14 04:02:24.281412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.620 [2024-07-14 04:02:24.281437] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.620 qpair failed and we were unable to recover it. 00:30:05.620 [2024-07-14 04:02:24.281614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.620 [2024-07-14 04:02:24.281788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.620 [2024-07-14 04:02:24.281812] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.620 qpair failed and we were unable to recover it. 00:30:05.620 [2024-07-14 04:02:24.281979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.620 [2024-07-14 04:02:24.282120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.620 [2024-07-14 04:02:24.282145] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.620 qpair failed and we were unable to recover it. 00:30:05.620 [2024-07-14 04:02:24.282335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.620 [2024-07-14 04:02:24.282510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.620 [2024-07-14 04:02:24.282533] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.620 qpair failed and we were unable to recover it. 00:30:05.620 [2024-07-14 04:02:24.282724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.620 [2024-07-14 04:02:24.282878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.620 [2024-07-14 04:02:24.282904] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.620 qpair failed and we were unable to recover it. 00:30:05.620 [2024-07-14 04:02:24.283085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.620 [2024-07-14 04:02:24.283232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.620 [2024-07-14 04:02:24.283255] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.620 qpair failed and we were unable to recover it. 00:30:05.620 [2024-07-14 04:02:24.283434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.620 [2024-07-14 04:02:24.283585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.620 [2024-07-14 04:02:24.283610] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.620 qpair failed and we were unable to recover it. 
00:30:05.620 [2024-07-14 04:02:24.283757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.620 [2024-07-14 04:02:24.283943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.620 [2024-07-14 04:02:24.283969] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.620 qpair failed and we were unable to recover it. 00:30:05.620 [2024-07-14 04:02:24.284140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.620 [2024-07-14 04:02:24.284336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.620 [2024-07-14 04:02:24.284360] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.620 qpair failed and we were unable to recover it. 00:30:05.620 [2024-07-14 04:02:24.284530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.620 [2024-07-14 04:02:24.284687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.620 [2024-07-14 04:02:24.284711] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.620 qpair failed and we were unable to recover it. 00:30:05.620 [2024-07-14 04:02:24.284888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.620 [2024-07-14 04:02:24.285064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.620 [2024-07-14 04:02:24.285087] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.620 qpair failed and we were unable to recover it. 00:30:05.620 [2024-07-14 04:02:24.285249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.620 [2024-07-14 04:02:24.285420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.620 [2024-07-14 04:02:24.285444] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.620 qpair failed and we were unable to recover it. 00:30:05.620 [2024-07-14 04:02:24.285587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.620 [2024-07-14 04:02:24.285734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.620 [2024-07-14 04:02:24.285758] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.620 qpair failed and we were unable to recover it. 00:30:05.620 [2024-07-14 04:02:24.285910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.620 [2024-07-14 04:02:24.286087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.620 [2024-07-14 04:02:24.286111] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.620 qpair failed and we were unable to recover it. 
00:30:05.620 [2024-07-14 04:02:24.286264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.620 [2024-07-14 04:02:24.286418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.620 [2024-07-14 04:02:24.286442] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.620 qpair failed and we were unable to recover it. 00:30:05.620 [2024-07-14 04:02:24.286636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.620 [2024-07-14 04:02:24.286804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.620 [2024-07-14 04:02:24.286828] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.620 qpair failed and we were unable to recover it. 00:30:05.620 [2024-07-14 04:02:24.286989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.620 [2024-07-14 04:02:24.287131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.620 [2024-07-14 04:02:24.287154] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.620 qpair failed and we were unable to recover it. 00:30:05.620 [2024-07-14 04:02:24.287317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.620 [2024-07-14 04:02:24.287494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.620 [2024-07-14 04:02:24.287518] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.620 qpair failed and we were unable to recover it. 00:30:05.620 [2024-07-14 04:02:24.287669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.620 [2024-07-14 04:02:24.287822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.620 [2024-07-14 04:02:24.287847] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.620 qpair failed and we were unable to recover it. 00:30:05.620 [2024-07-14 04:02:24.288023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.620 [2024-07-14 04:02:24.288198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.620 [2024-07-14 04:02:24.288221] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.620 qpair failed and we were unable to recover it. 00:30:05.620 [2024-07-14 04:02:24.288402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.620 [2024-07-14 04:02:24.288544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.620 [2024-07-14 04:02:24.288568] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.620 qpair failed and we were unable to recover it. 
00:30:05.620 [2024-07-14 04:02:24.288764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.620 [2024-07-14 04:02:24.288942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.620 [2024-07-14 04:02:24.288967] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.620 qpair failed and we were unable to recover it. 00:30:05.620 [2024-07-14 04:02:24.289111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.620 [2024-07-14 04:02:24.289280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.620 [2024-07-14 04:02:24.289304] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.620 qpair failed and we were unable to recover it. 00:30:05.620 [2024-07-14 04:02:24.289473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.620 [2024-07-14 04:02:24.289647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.620 [2024-07-14 04:02:24.289671] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.620 qpair failed and we were unable to recover it. 00:30:05.620 [2024-07-14 04:02:24.289844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.620 [2024-07-14 04:02:24.290025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.620 [2024-07-14 04:02:24.290049] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.620 qpair failed and we were unable to recover it. 00:30:05.620 [2024-07-14 04:02:24.290220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.620 [2024-07-14 04:02:24.290423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.620 [2024-07-14 04:02:24.290448] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.620 qpair failed and we were unable to recover it. 00:30:05.620 [2024-07-14 04:02:24.290594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.620 [2024-07-14 04:02:24.290766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.620 [2024-07-14 04:02:24.290792] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.620 qpair failed and we were unable to recover it. 00:30:05.620 [2024-07-14 04:02:24.290939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.620 [2024-07-14 04:02:24.291091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.620 [2024-07-14 04:02:24.291116] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.620 qpair failed and we were unable to recover it. 
00:30:05.620 [2024-07-14 04:02:24.291266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.620 [2024-07-14 04:02:24.291469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.620 [2024-07-14 04:02:24.291495] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.620 qpair failed and we were unable to recover it. 00:30:05.620 [2024-07-14 04:02:24.291672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.620 [2024-07-14 04:02:24.291812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.620 [2024-07-14 04:02:24.291836] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.620 qpair failed and we were unable to recover it. 00:30:05.620 [2024-07-14 04:02:24.291990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.620 [2024-07-14 04:02:24.292149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.620 [2024-07-14 04:02:24.292174] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.620 qpair failed and we were unable to recover it. 00:30:05.620 [2024-07-14 04:02:24.292334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.620 [2024-07-14 04:02:24.292487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.620 [2024-07-14 04:02:24.292510] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.620 qpair failed and we were unable to recover it. 00:30:05.620 [2024-07-14 04:02:24.292655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.620 [2024-07-14 04:02:24.292850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.620 [2024-07-14 04:02:24.292880] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.620 qpair failed and we were unable to recover it. 00:30:05.620 [2024-07-14 04:02:24.293058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.620 [2024-07-14 04:02:24.293200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.620 [2024-07-14 04:02:24.293225] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.620 qpair failed and we were unable to recover it. 00:30:05.620 [2024-07-14 04:02:24.293421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.620 [2024-07-14 04:02:24.293572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.620 [2024-07-14 04:02:24.293598] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.620 qpair failed and we were unable to recover it. 
00:30:05.620 [2024-07-14 04:02:24.293769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.620 [2024-07-14 04:02:24.293918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.620 [2024-07-14 04:02:24.293942] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.620 qpair failed and we were unable to recover it. 00:30:05.620 [2024-07-14 04:02:24.294114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.620 [2024-07-14 04:02:24.294287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.620 [2024-07-14 04:02:24.294311] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.620 qpair failed and we were unable to recover it. 00:30:05.620 [2024-07-14 04:02:24.294452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.620 [2024-07-14 04:02:24.294626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.620 [2024-07-14 04:02:24.294650] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.620 qpair failed and we were unable to recover it. 00:30:05.620 [2024-07-14 04:02:24.294791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.620 [2024-07-14 04:02:24.294938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.620 [2024-07-14 04:02:24.294963] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.620 qpair failed and we were unable to recover it. 00:30:05.620 [2024-07-14 04:02:24.295142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.620 [2024-07-14 04:02:24.295292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.620 [2024-07-14 04:02:24.295316] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.620 qpair failed and we were unable to recover it. 00:30:05.620 [2024-07-14 04:02:24.295490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.620 [2024-07-14 04:02:24.295624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.620 [2024-07-14 04:02:24.295648] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.620 qpair failed and we were unable to recover it. 00:30:05.620 [2024-07-14 04:02:24.295803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.620 [2024-07-14 04:02:24.295961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.620 [2024-07-14 04:02:24.295986] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.620 qpair failed and we were unable to recover it. 
00:30:05.620 [2024-07-14 04:02:24.296136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.620 [2024-07-14 04:02:24.296283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.620 [2024-07-14 04:02:24.296308] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.620 qpair failed and we were unable to recover it. 00:30:05.620 [2024-07-14 04:02:24.296469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.620 [2024-07-14 04:02:24.296619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.620 [2024-07-14 04:02:24.296642] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.620 qpair failed and we were unable to recover it. 00:30:05.620 [2024-07-14 04:02:24.296786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.620 [2024-07-14 04:02:24.296930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.621 [2024-07-14 04:02:24.296956] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.621 qpair failed and we were unable to recover it. 00:30:05.621 [2024-07-14 04:02:24.297150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.621 [2024-07-14 04:02:24.297297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.621 [2024-07-14 04:02:24.297320] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.621 qpair failed and we were unable to recover it. 00:30:05.621 [2024-07-14 04:02:24.297470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.621 [2024-07-14 04:02:24.297616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.621 [2024-07-14 04:02:24.297641] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.621 qpair failed and we were unable to recover it. 00:30:05.621 [2024-07-14 04:02:24.297843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.621 [2024-07-14 04:02:24.298003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.621 [2024-07-14 04:02:24.298028] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.621 qpair failed and we were unable to recover it. 00:30:05.621 [2024-07-14 04:02:24.298206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.621 [2024-07-14 04:02:24.298358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.621 [2024-07-14 04:02:24.298381] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.621 qpair failed and we were unable to recover it. 
00:30:05.621 [2024-07-14 04:02:24.298543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.621 [2024-07-14 04:02:24.298723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.621 [2024-07-14 04:02:24.298747] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.621 qpair failed and we were unable to recover it. 00:30:05.621 [2024-07-14 04:02:24.298927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.621 [2024-07-14 04:02:24.299075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.621 [2024-07-14 04:02:24.299099] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.621 qpair failed and we were unable to recover it. 00:30:05.621 [2024-07-14 04:02:24.299301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.621 [2024-07-14 04:02:24.299498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.621 [2024-07-14 04:02:24.299522] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.621 qpair failed and we were unable to recover it. 00:30:05.621 [2024-07-14 04:02:24.299677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.621 [2024-07-14 04:02:24.299882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.621 [2024-07-14 04:02:24.299908] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.621 qpair failed and we were unable to recover it. 00:30:05.621 [2024-07-14 04:02:24.300071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.621 [2024-07-14 04:02:24.300213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.621 [2024-07-14 04:02:24.300237] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.621 qpair failed and we were unable to recover it. 00:30:05.621 [2024-07-14 04:02:24.300418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.621 [2024-07-14 04:02:24.300567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.621 [2024-07-14 04:02:24.300590] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.621 qpair failed and we were unable to recover it. 00:30:05.621 [2024-07-14 04:02:24.300736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.621 [2024-07-14 04:02:24.300883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.621 [2024-07-14 04:02:24.300908] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.621 qpair failed and we were unable to recover it. 
00:30:05.621 [2024-07-14 04:02:24.301075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.621 [2024-07-14 04:02:24.301254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.621 [2024-07-14 04:02:24.301278] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.621 qpair failed and we were unable to recover it. 00:30:05.621 [2024-07-14 04:02:24.301416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.621 [2024-07-14 04:02:24.301585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.621 [2024-07-14 04:02:24.301608] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.621 qpair failed and we were unable to recover it. 00:30:05.621 [2024-07-14 04:02:24.301768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.621 [2024-07-14 04:02:24.301914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.621 [2024-07-14 04:02:24.301939] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.621 qpair failed and we were unable to recover it. 00:30:05.621 [2024-07-14 04:02:24.302131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.621 [2024-07-14 04:02:24.302272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.621 [2024-07-14 04:02:24.302295] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.621 qpair failed and we were unable to recover it. 00:30:05.621 [2024-07-14 04:02:24.302446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.621 [2024-07-14 04:02:24.302633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.621 [2024-07-14 04:02:24.302657] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.621 qpair failed and we were unable to recover it. 00:30:05.621 [2024-07-14 04:02:24.302808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.621 [2024-07-14 04:02:24.302968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.621 [2024-07-14 04:02:24.302995] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.621 qpair failed and we were unable to recover it. 00:30:05.621 [2024-07-14 04:02:24.303180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.621 [2024-07-14 04:02:24.303339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.621 [2024-07-14 04:02:24.303362] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.621 qpair failed and we were unable to recover it. 
00:30:05.621 [2024-07-14 04:02:24.303536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.621 [2024-07-14 04:02:24.303689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.621 [2024-07-14 04:02:24.303714] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.621 qpair failed and we were unable to recover it. 00:30:05.621 [2024-07-14 04:02:24.303858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.621 [2024-07-14 04:02:24.304014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.621 [2024-07-14 04:02:24.304040] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.621 qpair failed and we were unable to recover it. 00:30:05.621 [2024-07-14 04:02:24.304184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.621 [2024-07-14 04:02:24.304357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.621 [2024-07-14 04:02:24.304381] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.621 qpair failed and we were unable to recover it. 00:30:05.621 [2024-07-14 04:02:24.304544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.621 [2024-07-14 04:02:24.304686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.621 [2024-07-14 04:02:24.304710] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.621 qpair failed and we were unable to recover it. 00:30:05.621 [2024-07-14 04:02:24.304855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.621 [2024-07-14 04:02:24.305054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.621 [2024-07-14 04:02:24.305078] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.621 qpair failed and we were unable to recover it. 00:30:05.621 [2024-07-14 04:02:24.305225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.621 [2024-07-14 04:02:24.305399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.621 [2024-07-14 04:02:24.305424] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.621 qpair failed and we were unable to recover it. 00:30:05.621 [2024-07-14 04:02:24.305598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.621 [2024-07-14 04:02:24.305750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.621 [2024-07-14 04:02:24.305775] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.621 qpair failed and we were unable to recover it. 
00:30:05.621 [2024-07-14 04:02:24.305956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.621 [2024-07-14 04:02:24.306140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.621 [2024-07-14 04:02:24.306165] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.621 qpair failed and we were unable to recover it. 00:30:05.621 [2024-07-14 04:02:24.306336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.621 [2024-07-14 04:02:24.306516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.621 [2024-07-14 04:02:24.306540] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.621 qpair failed and we were unable to recover it. 00:30:05.621 [2024-07-14 04:02:24.306704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.621 [2024-07-14 04:02:24.306894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.621 [2024-07-14 04:02:24.306920] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.621 qpair failed and we were unable to recover it. 00:30:05.621 [2024-07-14 04:02:24.307074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.621 [2024-07-14 04:02:24.307221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.621 [2024-07-14 04:02:24.307246] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.621 qpair failed and we were unable to recover it. 00:30:05.621 [2024-07-14 04:02:24.307421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.621 [2024-07-14 04:02:24.307586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.621 [2024-07-14 04:02:24.307610] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.621 qpair failed and we were unable to recover it. 00:30:05.621 [2024-07-14 04:02:24.307783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.621 [2024-07-14 04:02:24.307933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.621 [2024-07-14 04:02:24.307958] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.621 qpair failed and we were unable to recover it. 00:30:05.621 [2024-07-14 04:02:24.308103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.621 [2024-07-14 04:02:24.308304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.621 [2024-07-14 04:02:24.308328] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.621 qpair failed and we were unable to recover it. 
00:30:05.621 [2024-07-14 04:02:24.308475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.621 [2024-07-14 04:02:24.308655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.621 [2024-07-14 04:02:24.308679] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.621 qpair failed and we were unable to recover it. 00:30:05.621 [2024-07-14 04:02:24.308824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.621 [2024-07-14 04:02:24.309008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.621 [2024-07-14 04:02:24.309034] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.621 qpair failed and we were unable to recover it. 00:30:05.621 [2024-07-14 04:02:24.309182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.621 [2024-07-14 04:02:24.309350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.621 [2024-07-14 04:02:24.309373] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.621 qpair failed and we were unable to recover it. 00:30:05.621 [2024-07-14 04:02:24.309525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.621 [2024-07-14 04:02:24.309671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.621 [2024-07-14 04:02:24.309694] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.621 qpair failed and we were unable to recover it. 00:30:05.621 [2024-07-14 04:02:24.309854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.621 [2024-07-14 04:02:24.310021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.621 [2024-07-14 04:02:24.310046] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.621 qpair failed and we were unable to recover it. 00:30:05.621 [2024-07-14 04:02:24.310222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.621 [2024-07-14 04:02:24.310420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.621 [2024-07-14 04:02:24.310444] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.621 qpair failed and we were unable to recover it. 00:30:05.621 [2024-07-14 04:02:24.310585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.621 [2024-07-14 04:02:24.310744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.621 [2024-07-14 04:02:24.310769] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.621 qpair failed and we were unable to recover it. 
00:30:05.621 [2024-07-14 04:02:24.310947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.621 [2024-07-14 04:02:24.311093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.621 [2024-07-14 04:02:24.311118] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.621 qpair failed and we were unable to recover it. 00:30:05.621 [2024-07-14 04:02:24.311268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.621 [2024-07-14 04:02:24.311413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.621 [2024-07-14 04:02:24.311436] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.621 qpair failed and we were unable to recover it. 00:30:05.621 [2024-07-14 04:02:24.311608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.621 [2024-07-14 04:02:24.311778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.621 [2024-07-14 04:02:24.311802] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.621 qpair failed and we were unable to recover it. 00:30:05.621 [2024-07-14 04:02:24.311974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.621 [2024-07-14 04:02:24.312155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.621 [2024-07-14 04:02:24.312179] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.621 qpair failed and we were unable to recover it. 00:30:05.621 [2024-07-14 04:02:24.312349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.621 [2024-07-14 04:02:24.312498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.621 [2024-07-14 04:02:24.312523] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.621 qpair failed and we were unable to recover it. 00:30:05.621 [2024-07-14 04:02:24.312692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.621 [2024-07-14 04:02:24.312874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.621 [2024-07-14 04:02:24.312899] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.621 qpair failed and we were unable to recover it. 00:30:05.621 [2024-07-14 04:02:24.313058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.621 [2024-07-14 04:02:24.313196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.621 [2024-07-14 04:02:24.313220] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.621 qpair failed and we were unable to recover it. 
00:30:05.621 [2024-07-14 04:02:24.313372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.621 [2024-07-14 04:02:24.313550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.621 [2024-07-14 04:02:24.313579] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.621 qpair failed and we were unable to recover it. 00:30:05.621 [2024-07-14 04:02:24.313721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.621 [2024-07-14 04:02:24.313916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.621 [2024-07-14 04:02:24.313941] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.621 qpair failed and we were unable to recover it. 00:30:05.621 [2024-07-14 04:02:24.314121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.621 [2024-07-14 04:02:24.314281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.621 [2024-07-14 04:02:24.314304] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.621 qpair failed and we were unable to recover it. 00:30:05.621 [2024-07-14 04:02:24.314484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.621 [2024-07-14 04:02:24.314623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.621 [2024-07-14 04:02:24.314648] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.621 qpair failed and we were unable to recover it. 00:30:05.621 [2024-07-14 04:02:24.314806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.621 [2024-07-14 04:02:24.314958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.621 [2024-07-14 04:02:24.314984] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.621 qpair failed and we were unable to recover it. 00:30:05.621 [2024-07-14 04:02:24.315135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.621 [2024-07-14 04:02:24.315313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.621 [2024-07-14 04:02:24.315336] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.621 qpair failed and we were unable to recover it. 00:30:05.621 [2024-07-14 04:02:24.315537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.621 [2024-07-14 04:02:24.315712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.621 [2024-07-14 04:02:24.315736] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.621 qpair failed and we were unable to recover it. 
00:30:05.621 [2024-07-14 04:02:24.315882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.621 [2024-07-14 04:02:24.316032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.621 [2024-07-14 04:02:24.316058] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.621 qpair failed and we were unable to recover it. 00:30:05.621 [2024-07-14 04:02:24.316221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.621 [2024-07-14 04:02:24.316368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.621 [2024-07-14 04:02:24.316394] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.621 qpair failed and we were unable to recover it. 00:30:05.621 [2024-07-14 04:02:24.316567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.621 [2024-07-14 04:02:24.316713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.621 [2024-07-14 04:02:24.316737] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.621 qpair failed and we were unable to recover it. 00:30:05.621 [2024-07-14 04:02:24.316888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.621 [2024-07-14 04:02:24.317070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.621 [2024-07-14 04:02:24.317098] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.621 qpair failed and we were unable to recover it. 00:30:05.621 [2024-07-14 04:02:24.317278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.621 [2024-07-14 04:02:24.317479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.622 [2024-07-14 04:02:24.317504] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.622 qpair failed and we were unable to recover it. 00:30:05.622 [2024-07-14 04:02:24.317647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.622 [2024-07-14 04:02:24.317818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.622 [2024-07-14 04:02:24.317842] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.622 qpair failed and we were unable to recover it. 00:30:05.622 [2024-07-14 04:02:24.318013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.622 [2024-07-14 04:02:24.318172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.622 [2024-07-14 04:02:24.318195] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.622 qpair failed and we were unable to recover it. 
00:30:05.622 [2024-07-14 04:02:24.318368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.622 [2024-07-14 04:02:24.318522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.622 [2024-07-14 04:02:24.318547] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.622 qpair failed and we were unable to recover it. 00:30:05.622 [2024-07-14 04:02:24.318697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.622 [2024-07-14 04:02:24.318839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.622 [2024-07-14 04:02:24.318881] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.622 qpair failed and we were unable to recover it. 00:30:05.622 [2024-07-14 04:02:24.319044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.622 [2024-07-14 04:02:24.319213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.622 [2024-07-14 04:02:24.319237] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.622 qpair failed and we were unable to recover it. 00:30:05.622 [2024-07-14 04:02:24.319384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.622 [2024-07-14 04:02:24.319573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.622 [2024-07-14 04:02:24.319597] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.622 qpair failed and we were unable to recover it. 00:30:05.622 [2024-07-14 04:02:24.319777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.622 [2024-07-14 04:02:24.319964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.622 [2024-07-14 04:02:24.319989] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.622 qpair failed and we were unable to recover it. 00:30:05.622 [2024-07-14 04:02:24.320146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.622 [2024-07-14 04:02:24.320305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.622 [2024-07-14 04:02:24.320329] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.622 qpair failed and we were unable to recover it. 00:30:05.622 [2024-07-14 04:02:24.320490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.622 [2024-07-14 04:02:24.320662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.622 [2024-07-14 04:02:24.320690] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.622 qpair failed and we were unable to recover it. 
00:30:05.624 [2024-07-14 04:02:24.370292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.624 [2024-07-14 04:02:24.370434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.624 [2024-07-14 04:02:24.370463] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.624 qpair failed and we were unable to recover it. 00:30:05.624 [2024-07-14 04:02:24.370609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.624 [2024-07-14 04:02:24.370750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.624 [2024-07-14 04:02:24.370774] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.624 qpair failed and we were unable to recover it. 00:30:05.624 [2024-07-14 04:02:24.370946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.624 [2024-07-14 04:02:24.371117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.624 [2024-07-14 04:02:24.371141] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.624 qpair failed and we were unable to recover it. 00:30:05.624 [2024-07-14 04:02:24.371294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.624 [2024-07-14 04:02:24.371479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.624 [2024-07-14 04:02:24.371504] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.624 qpair failed and we were unable to recover it. 00:30:05.624 [2024-07-14 04:02:24.371650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.624 [2024-07-14 04:02:24.371840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.624 [2024-07-14 04:02:24.371871] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.624 qpair failed and we were unable to recover it. 00:30:05.624 [2024-07-14 04:02:24.372024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.624 [2024-07-14 04:02:24.372169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.624 [2024-07-14 04:02:24.372194] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.624 qpair failed and we were unable to recover it. 00:30:05.624 [2024-07-14 04:02:24.372375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.624 [2024-07-14 04:02:24.372556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.624 [2024-07-14 04:02:24.372580] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.624 qpair failed and we were unable to recover it. 
00:30:05.624 [2024-07-14 04:02:24.372727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.624 [2024-07-14 04:02:24.372877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.624 [2024-07-14 04:02:24.372903] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.624 qpair failed and we were unable to recover it. 00:30:05.624 [2024-07-14 04:02:24.373077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.624 [2024-07-14 04:02:24.373228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.624 [2024-07-14 04:02:24.373254] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.624 qpair failed and we were unable to recover it. 00:30:05.624 [2024-07-14 04:02:24.373446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.625 [2024-07-14 04:02:24.373593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.625 [2024-07-14 04:02:24.373617] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.625 qpair failed and we were unable to recover it. 00:30:05.625 [2024-07-14 04:02:24.373769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.625 [2024-07-14 04:02:24.373935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.625 [2024-07-14 04:02:24.373965] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.625 qpair failed and we were unable to recover it. 00:30:05.625 [2024-07-14 04:02:24.374121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.625 [2024-07-14 04:02:24.374293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.625 [2024-07-14 04:02:24.374318] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.625 qpair failed and we were unable to recover it. 00:30:05.625 [2024-07-14 04:02:24.374476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.625 [2024-07-14 04:02:24.374635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.625 [2024-07-14 04:02:24.374659] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.625 qpair failed and we were unable to recover it. 00:30:05.625 [2024-07-14 04:02:24.374813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.625 [2024-07-14 04:02:24.374963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.625 [2024-07-14 04:02:24.374988] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.625 qpair failed and we were unable to recover it. 
00:30:05.625 [2024-07-14 04:02:24.375137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.625 [2024-07-14 04:02:24.375306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.625 [2024-07-14 04:02:24.375331] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.625 qpair failed and we were unable to recover it. 00:30:05.625 [2024-07-14 04:02:24.375471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.625 [2024-07-14 04:02:24.375650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.625 [2024-07-14 04:02:24.375675] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.625 qpair failed and we were unable to recover it. 00:30:05.625 [2024-07-14 04:02:24.375819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.625 [2024-07-14 04:02:24.375992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.625 [2024-07-14 04:02:24.376018] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.625 qpair failed and we were unable to recover it. 00:30:05.625 [2024-07-14 04:02:24.376158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.625 [2024-07-14 04:02:24.376332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.625 [2024-07-14 04:02:24.376356] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.625 qpair failed and we were unable to recover it. 00:30:05.625 [2024-07-14 04:02:24.376528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.625 [2024-07-14 04:02:24.376675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.625 [2024-07-14 04:02:24.376700] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.625 qpair failed and we were unable to recover it. 00:30:05.625 [2024-07-14 04:02:24.376874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.625 [2024-07-14 04:02:24.377027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.625 [2024-07-14 04:02:24.377053] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.625 qpair failed and we were unable to recover it. 00:30:05.625 [2024-07-14 04:02:24.377239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.625 [2024-07-14 04:02:24.377396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.625 [2024-07-14 04:02:24.377427] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.625 qpair failed and we were unable to recover it. 
00:30:05.625 [2024-07-14 04:02:24.377580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.625 [2024-07-14 04:02:24.377742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.625 [2024-07-14 04:02:24.377765] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.625 qpair failed and we were unable to recover it. 00:30:05.625 [2024-07-14 04:02:24.377950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.625 [2024-07-14 04:02:24.378110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.625 [2024-07-14 04:02:24.378134] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.625 qpair failed and we were unable to recover it. 00:30:05.625 [2024-07-14 04:02:24.378302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.625 [2024-07-14 04:02:24.378451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.625 [2024-07-14 04:02:24.378475] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.625 qpair failed and we were unable to recover it. 00:30:05.625 [2024-07-14 04:02:24.378630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.625 [2024-07-14 04:02:24.378769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.625 [2024-07-14 04:02:24.378793] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.625 qpair failed and we were unable to recover it. 00:30:05.625 [2024-07-14 04:02:24.378945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.625 [2024-07-14 04:02:24.379133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.625 [2024-07-14 04:02:24.379158] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.625 qpair failed and we were unable to recover it. 00:30:05.625 [2024-07-14 04:02:24.379310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.625 [2024-07-14 04:02:24.379482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.625 [2024-07-14 04:02:24.379506] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.625 qpair failed and we were unable to recover it. 00:30:05.625 [2024-07-14 04:02:24.379666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.625 [2024-07-14 04:02:24.379839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.625 [2024-07-14 04:02:24.379864] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.625 qpair failed and we were unable to recover it. 
00:30:05.625 [2024-07-14 04:02:24.380030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.625 [2024-07-14 04:02:24.380174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.625 [2024-07-14 04:02:24.380198] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.625 qpair failed and we were unable to recover it. 00:30:05.625 [2024-07-14 04:02:24.380349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.625 [2024-07-14 04:02:24.380515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.625 [2024-07-14 04:02:24.380538] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.625 qpair failed and we were unable to recover it. 00:30:05.625 [2024-07-14 04:02:24.380680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.625 [2024-07-14 04:02:24.380821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.625 [2024-07-14 04:02:24.380844] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.625 qpair failed and we were unable to recover it. 00:30:05.625 [2024-07-14 04:02:24.381060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.625 [2024-07-14 04:02:24.381217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.625 [2024-07-14 04:02:24.381242] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.625 qpair failed and we were unable to recover it. 00:30:05.625 [2024-07-14 04:02:24.381399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.625 [2024-07-14 04:02:24.381587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.625 [2024-07-14 04:02:24.381611] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.625 qpair failed and we were unable to recover it. 00:30:05.625 [2024-07-14 04:02:24.381755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.625 [2024-07-14 04:02:24.381909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.625 [2024-07-14 04:02:24.381933] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.625 qpair failed and we were unable to recover it. 00:30:05.625 [2024-07-14 04:02:24.382091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.625 [2024-07-14 04:02:24.382291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.625 [2024-07-14 04:02:24.382316] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.625 qpair failed and we were unable to recover it. 
00:30:05.625 [2024-07-14 04:02:24.382490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.625 [2024-07-14 04:02:24.382675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.625 [2024-07-14 04:02:24.382700] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.625 qpair failed and we were unable to recover it. 00:30:05.625 [2024-07-14 04:02:24.382904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.625 [2024-07-14 04:02:24.383055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.625 [2024-07-14 04:02:24.383079] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.625 qpair failed and we were unable to recover it. 00:30:05.625 [2024-07-14 04:02:24.383234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.625 [2024-07-14 04:02:24.383384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.625 [2024-07-14 04:02:24.383408] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.625 qpair failed and we were unable to recover it. 00:30:05.625 [2024-07-14 04:02:24.383583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.625 [2024-07-14 04:02:24.383731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.625 [2024-07-14 04:02:24.383755] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.625 qpair failed and we were unable to recover it. 00:30:05.625 [2024-07-14 04:02:24.383934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.625 [2024-07-14 04:02:24.384084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.625 [2024-07-14 04:02:24.384108] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.625 qpair failed and we were unable to recover it. 00:30:05.625 [2024-07-14 04:02:24.384261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.625 [2024-07-14 04:02:24.384425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.625 [2024-07-14 04:02:24.384449] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.625 qpair failed and we were unable to recover it. 00:30:05.625 [2024-07-14 04:02:24.384600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.625 [2024-07-14 04:02:24.384758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.625 [2024-07-14 04:02:24.384783] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.625 qpair failed and we were unable to recover it. 
00:30:05.625 [2024-07-14 04:02:24.384930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.625 [2024-07-14 04:02:24.385124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.625 [2024-07-14 04:02:24.385147] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.625 qpair failed and we were unable to recover it. 00:30:05.625 [2024-07-14 04:02:24.385312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.625 [2024-07-14 04:02:24.385468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.625 [2024-07-14 04:02:24.385492] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.625 qpair failed and we were unable to recover it. 00:30:05.625 [2024-07-14 04:02:24.385638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.625 [2024-07-14 04:02:24.385780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.625 [2024-07-14 04:02:24.385804] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.625 qpair failed and we were unable to recover it. 00:30:05.625 [2024-07-14 04:02:24.385950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.625 [2024-07-14 04:02:24.386100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.625 [2024-07-14 04:02:24.386124] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.625 qpair failed and we were unable to recover it. 00:30:05.625 [2024-07-14 04:02:24.386305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.625 [2024-07-14 04:02:24.386471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.625 [2024-07-14 04:02:24.386496] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.625 qpair failed and we were unable to recover it. 00:30:05.625 [2024-07-14 04:02:24.386692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.625 [2024-07-14 04:02:24.386858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.625 [2024-07-14 04:02:24.386899] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.625 qpair failed and we were unable to recover it. 00:30:05.625 [2024-07-14 04:02:24.387080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.625 [2024-07-14 04:02:24.387334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.625 [2024-07-14 04:02:24.387359] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.625 qpair failed and we were unable to recover it. 
00:30:05.625 [2024-07-14 04:02:24.387538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.625 [2024-07-14 04:02:24.387684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.625 [2024-07-14 04:02:24.387708] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.625 qpair failed and we were unable to recover it. 00:30:05.625 [2024-07-14 04:02:24.387872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.625 [2024-07-14 04:02:24.388019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.625 [2024-07-14 04:02:24.388044] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.625 qpair failed and we were unable to recover it. 00:30:05.625 [2024-07-14 04:02:24.388192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.625 [2024-07-14 04:02:24.388353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.625 [2024-07-14 04:02:24.388377] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.625 qpair failed and we were unable to recover it. 00:30:05.625 [2024-07-14 04:02:24.388552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.625 [2024-07-14 04:02:24.388689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.625 [2024-07-14 04:02:24.388712] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.625 qpair failed and we were unable to recover it. 00:30:05.625 [2024-07-14 04:02:24.388902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.625 [2024-07-14 04:02:24.389096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.625 [2024-07-14 04:02:24.389121] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.625 qpair failed and we were unable to recover it. 00:30:05.625 [2024-07-14 04:02:24.389275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.625 [2024-07-14 04:02:24.389450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.625 [2024-07-14 04:02:24.389473] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.625 qpair failed and we were unable to recover it. 00:30:05.625 [2024-07-14 04:02:24.389641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.625 [2024-07-14 04:02:24.389785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.625 [2024-07-14 04:02:24.389808] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.625 qpair failed and we were unable to recover it. 
00:30:05.625 [2024-07-14 04:02:24.389978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.625 [2024-07-14 04:02:24.390123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.625 [2024-07-14 04:02:24.390149] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.625 qpair failed and we were unable to recover it. 00:30:05.625 [2024-07-14 04:02:24.390306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.625 [2024-07-14 04:02:24.390447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.625 [2024-07-14 04:02:24.390471] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.625 qpair failed and we were unable to recover it. 00:30:05.625 [2024-07-14 04:02:24.390641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.625 [2024-07-14 04:02:24.390792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.625 [2024-07-14 04:02:24.390816] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.625 qpair failed and we were unable to recover it. 00:30:05.625 [2024-07-14 04:02:24.390994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.625 [2024-07-14 04:02:24.391156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.625 [2024-07-14 04:02:24.391180] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.625 qpair failed and we were unable to recover it. 00:30:05.625 [2024-07-14 04:02:24.391324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.625 [2024-07-14 04:02:24.391465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.625 [2024-07-14 04:02:24.391489] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.625 qpair failed and we were unable to recover it. 00:30:05.626 [2024-07-14 04:02:24.391668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.626 [2024-07-14 04:02:24.391810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.626 [2024-07-14 04:02:24.391833] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.626 qpair failed and we were unable to recover it. 00:30:05.626 [2024-07-14 04:02:24.392017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.626 [2024-07-14 04:02:24.392185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.626 [2024-07-14 04:02:24.392210] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.626 qpair failed and we were unable to recover it. 
00:30:05.626 [2024-07-14 04:02:24.392391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.626 [2024-07-14 04:02:24.392623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.626 [2024-07-14 04:02:24.392647] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.626 qpair failed and we were unable to recover it. 00:30:05.626 [2024-07-14 04:02:24.392827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.626 [2024-07-14 04:02:24.393004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.626 [2024-07-14 04:02:24.393028] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.626 qpair failed and we were unable to recover it. 00:30:05.626 [2024-07-14 04:02:24.393171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.626 [2024-07-14 04:02:24.393329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.626 [2024-07-14 04:02:24.393354] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.626 qpair failed and we were unable to recover it. 00:30:05.626 [2024-07-14 04:02:24.393525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.626 [2024-07-14 04:02:24.393687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.626 [2024-07-14 04:02:24.393713] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.626 qpair failed and we were unable to recover it. 00:30:05.626 [2024-07-14 04:02:24.393871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.626 [2024-07-14 04:02:24.394013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.626 [2024-07-14 04:02:24.394037] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.626 qpair failed and we were unable to recover it. 00:30:05.626 [2024-07-14 04:02:24.394216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.626 [2024-07-14 04:02:24.394367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.626 [2024-07-14 04:02:24.394391] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.626 qpair failed and we were unable to recover it. 00:30:05.626 [2024-07-14 04:02:24.394564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.626 [2024-07-14 04:02:24.394734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.626 [2024-07-14 04:02:24.394758] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.626 qpair failed and we were unable to recover it. 
00:30:05.626 [2024-07-14 04:02:24.394903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.626 [2024-07-14 04:02:24.395048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.626 [2024-07-14 04:02:24.395074] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.626 qpair failed and we were unable to recover it. 00:30:05.626 [2024-07-14 04:02:24.395266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.626 [2024-07-14 04:02:24.395414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.626 [2024-07-14 04:02:24.395439] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.626 qpair failed and we were unable to recover it. 00:30:05.626 [2024-07-14 04:02:24.395639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.626 [2024-07-14 04:02:24.395787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.626 [2024-07-14 04:02:24.395812] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.626 qpair failed and we were unable to recover it. 00:30:05.626 [2024-07-14 04:02:24.395976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.626 [2024-07-14 04:02:24.396127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.626 [2024-07-14 04:02:24.396151] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.626 qpair failed and we were unable to recover it. 00:30:05.626 [2024-07-14 04:02:24.396294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.626 [2024-07-14 04:02:24.396444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.626 [2024-07-14 04:02:24.396469] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.626 qpair failed and we were unable to recover it. 00:30:05.626 [2024-07-14 04:02:24.396650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.626 [2024-07-14 04:02:24.396879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.626 [2024-07-14 04:02:24.396905] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.626 qpair failed and we were unable to recover it. 00:30:05.626 [2024-07-14 04:02:24.397076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.626 [2024-07-14 04:02:24.397246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.626 [2024-07-14 04:02:24.397270] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.626 qpair failed and we were unable to recover it. 
00:30:05.626 [2024-07-14 04:02:24.397433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.626 [2024-07-14 04:02:24.397613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.626 [2024-07-14 04:02:24.397637] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.626 qpair failed and we were unable to recover it. 00:30:05.626 [2024-07-14 04:02:24.397791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.626 [2024-07-14 04:02:24.397938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.626 [2024-07-14 04:02:24.397963] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.626 qpair failed and we were unable to recover it. 00:30:05.626 [2024-07-14 04:02:24.398111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.626 [2024-07-14 04:02:24.398261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.626 [2024-07-14 04:02:24.398284] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.626 qpair failed and we were unable to recover it. 00:30:05.626 [2024-07-14 04:02:24.398425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.626 [2024-07-14 04:02:24.398568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.626 [2024-07-14 04:02:24.398591] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.626 qpair failed and we were unable to recover it. 00:30:05.626 [2024-07-14 04:02:24.398744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.626 [2024-07-14 04:02:24.398893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.626 [2024-07-14 04:02:24.398918] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.626 qpair failed and we were unable to recover it. 00:30:05.626 [2024-07-14 04:02:24.399067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.626 [2024-07-14 04:02:24.399328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.626 [2024-07-14 04:02:24.399352] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.626 qpair failed and we were unable to recover it. 00:30:05.626 [2024-07-14 04:02:24.399502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.626 [2024-07-14 04:02:24.399694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.626 [2024-07-14 04:02:24.399718] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.626 qpair failed and we were unable to recover it. 
00:30:05.626 [2024-07-14 04:02:24.399894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.626 [2024-07-14 04:02:24.400045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.626 [2024-07-14 04:02:24.400070] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.626 qpair failed and we were unable to recover it. 00:30:05.626 [2024-07-14 04:02:24.400230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.626 [2024-07-14 04:02:24.400381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.626 [2024-07-14 04:02:24.400408] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.626 qpair failed and we were unable to recover it. 00:30:05.626 [2024-07-14 04:02:24.400611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.626 [2024-07-14 04:02:24.400748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.626 [2024-07-14 04:02:24.400771] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.626 qpair failed and we were unable to recover it. 00:30:05.626 [2024-07-14 04:02:24.400941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.626 [2024-07-14 04:02:24.401091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.626 [2024-07-14 04:02:24.401115] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.626 qpair failed and we were unable to recover it. 00:30:05.626 [2024-07-14 04:02:24.401272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.626 [2024-07-14 04:02:24.401473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.626 [2024-07-14 04:02:24.401497] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.626 qpair failed and we were unable to recover it. 00:30:05.626 [2024-07-14 04:02:24.401669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.626 [2024-07-14 04:02:24.401831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.626 [2024-07-14 04:02:24.401855] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.626 qpair failed and we were unable to recover it. 00:30:05.626 [2024-07-14 04:02:24.402036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.626 [2024-07-14 04:02:24.402180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.626 [2024-07-14 04:02:24.402203] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.626 qpair failed and we were unable to recover it. 
00:30:05.626 [2024-07-14 04:02:24.402381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.626 [2024-07-14 04:02:24.402535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.626 [2024-07-14 04:02:24.402559] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.626 qpair failed and we were unable to recover it. 00:30:05.626 [2024-07-14 04:02:24.402700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.626 [2024-07-14 04:02:24.402857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.626 [2024-07-14 04:02:24.402901] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.626 qpair failed and we were unable to recover it. 00:30:05.626 [2024-07-14 04:02:24.403062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.626 [2024-07-14 04:02:24.403222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.626 [2024-07-14 04:02:24.403246] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.626 qpair failed and we were unable to recover it. 00:30:05.626 [2024-07-14 04:02:24.403389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.626 [2024-07-14 04:02:24.403545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.626 [2024-07-14 04:02:24.403570] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.626 qpair failed and we were unable to recover it. 00:30:05.626 [2024-07-14 04:02:24.403714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.626 [2024-07-14 04:02:24.403881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.626 [2024-07-14 04:02:24.403906] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.626 qpair failed and we were unable to recover it. 00:30:05.626 [2024-07-14 04:02:24.404064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.626 [2024-07-14 04:02:24.404206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.626 [2024-07-14 04:02:24.404229] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.626 qpair failed and we were unable to recover it. 00:30:05.626 [2024-07-14 04:02:24.404375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.626 [2024-07-14 04:02:24.404519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.626 [2024-07-14 04:02:24.404542] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.626 qpair failed and we were unable to recover it. 
00:30:05.626 [2024-07-14 04:02:24.404715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.626 [2024-07-14 04:02:24.404853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.626 [2024-07-14 04:02:24.404883] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.626 qpair failed and we were unable to recover it. 00:30:05.626 [2024-07-14 04:02:24.405039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.626 [2024-07-14 04:02:24.405199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.626 [2024-07-14 04:02:24.405222] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.626 qpair failed and we were unable to recover it. 00:30:05.626 [2024-07-14 04:02:24.405368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.626 [2024-07-14 04:02:24.405512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.626 [2024-07-14 04:02:24.405537] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.626 qpair failed and we were unable to recover it. 00:30:05.626 [2024-07-14 04:02:24.405679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.626 [2024-07-14 04:02:24.405874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.626 [2024-07-14 04:02:24.405901] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.626 qpair failed and we were unable to recover it. 00:30:05.626 [2024-07-14 04:02:24.406054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.626 [2024-07-14 04:02:24.406232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.626 [2024-07-14 04:02:24.406259] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.626 qpair failed and we were unable to recover it. 00:30:05.626 [2024-07-14 04:02:24.406403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.626 [2024-07-14 04:02:24.406580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.626 [2024-07-14 04:02:24.406603] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.626 qpair failed and we were unable to recover it. 00:30:05.626 [2024-07-14 04:02:24.406753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.626 [2024-07-14 04:02:24.406926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.626 [2024-07-14 04:02:24.406951] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.626 qpair failed and we were unable to recover it. 
00:30:05.626 [2024-07-14 04:02:24.407096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.626 [2024-07-14 04:02:24.407247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.626 [2024-07-14 04:02:24.407273] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.626 qpair failed and we were unable to recover it. 00:30:05.626 [2024-07-14 04:02:24.407425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.626 [2024-07-14 04:02:24.407561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.626 [2024-07-14 04:02:24.407585] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.626 qpair failed and we were unable to recover it. 00:30:05.626 [2024-07-14 04:02:24.407730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.626 [2024-07-14 04:02:24.407875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.626 [2024-07-14 04:02:24.407901] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.626 qpair failed and we were unable to recover it. 00:30:05.626 [2024-07-14 04:02:24.408042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.626 [2024-07-14 04:02:24.408196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.626 [2024-07-14 04:02:24.408221] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.626 qpair failed and we were unable to recover it. 00:30:05.626 [2024-07-14 04:02:24.408371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.626 [2024-07-14 04:02:24.408614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.626 [2024-07-14 04:02:24.408638] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.626 qpair failed and we were unable to recover it. 00:30:05.626 [2024-07-14 04:02:24.408777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.626 [2024-07-14 04:02:24.408927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.626 [2024-07-14 04:02:24.408952] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.626 qpair failed and we were unable to recover it. 00:30:05.626 [2024-07-14 04:02:24.409100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.626 [2024-07-14 04:02:24.409244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.626 [2024-07-14 04:02:24.409269] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.626 qpair failed and we were unable to recover it. 
00:30:05.626 [2024-07-14 04:02:24.409469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.626 [2024-07-14 04:02:24.409610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.626 [2024-07-14 04:02:24.409634] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.626 qpair failed and we were unable to recover it. 00:30:05.626 [2024-07-14 04:02:24.409837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.626 [2024-07-14 04:02:24.409988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.626 [2024-07-14 04:02:24.410013] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.626 qpair failed and we were unable to recover it. 00:30:05.626 [2024-07-14 04:02:24.410155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.626 [2024-07-14 04:02:24.410358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.626 [2024-07-14 04:02:24.410382] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.626 qpair failed and we were unable to recover it. 00:30:05.626 [2024-07-14 04:02:24.410528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.626 [2024-07-14 04:02:24.410732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.626 [2024-07-14 04:02:24.410756] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.626 qpair failed and we were unable to recover it. 00:30:05.626 [2024-07-14 04:02:24.410915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.626 [2024-07-14 04:02:24.411092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.626 [2024-07-14 04:02:24.411117] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.626 qpair failed and we were unable to recover it. 00:30:05.626 [2024-07-14 04:02:24.411327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.626 [2024-07-14 04:02:24.411581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.626 [2024-07-14 04:02:24.411606] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.626 qpair failed and we were unable to recover it. 00:30:05.626 [2024-07-14 04:02:24.411746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.626 [2024-07-14 04:02:24.411912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.627 [2024-07-14 04:02:24.411938] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.627 qpair failed and we were unable to recover it. 
00:30:05.627 [2024-07-14 04:02:24.412130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.627 [2024-07-14 04:02:24.412277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.627 [2024-07-14 04:02:24.412301] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.627 qpair failed and we were unable to recover it. 00:30:05.627 [2024-07-14 04:02:24.412515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.627 [2024-07-14 04:02:24.412670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.627 [2024-07-14 04:02:24.412694] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.627 qpair failed and we were unable to recover it. 00:30:05.627 [2024-07-14 04:02:24.412857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.627 [2024-07-14 04:02:24.413024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.627 [2024-07-14 04:02:24.413049] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.627 qpair failed and we were unable to recover it. 00:30:05.627 [2024-07-14 04:02:24.413241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.627 [2024-07-14 04:02:24.413414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.627 [2024-07-14 04:02:24.413440] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.627 qpair failed and we were unable to recover it. 00:30:05.627 [2024-07-14 04:02:24.413602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.627 [2024-07-14 04:02:24.413784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.627 [2024-07-14 04:02:24.413808] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.627 qpair failed and we were unable to recover it. 00:30:05.627 [2024-07-14 04:02:24.413950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.627 [2024-07-14 04:02:24.414101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.627 [2024-07-14 04:02:24.414127] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.627 qpair failed and we were unable to recover it. 00:30:05.627 [2024-07-14 04:02:24.414269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.627 [2024-07-14 04:02:24.414432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.627 [2024-07-14 04:02:24.414456] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.627 qpair failed and we were unable to recover it. 
00:30:05.627 [2024-07-14 04:02:24.414621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.627 [2024-07-14 04:02:24.414793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.627 [2024-07-14 04:02:24.414817] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.627 qpair failed and we were unable to recover it. 00:30:05.627 [2024-07-14 04:02:24.414996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.627 [2024-07-14 04:02:24.415170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.627 [2024-07-14 04:02:24.415194] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.627 qpair failed and we were unable to recover it. 00:30:05.627 [2024-07-14 04:02:24.415343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.627 [2024-07-14 04:02:24.415503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.627 [2024-07-14 04:02:24.415527] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.627 qpair failed and we were unable to recover it. 00:30:05.627 [2024-07-14 04:02:24.415678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.627 [2024-07-14 04:02:24.415857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.627 [2024-07-14 04:02:24.415888] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.627 qpair failed and we were unable to recover it. 00:30:05.627 [2024-07-14 04:02:24.416075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.627 [2024-07-14 04:02:24.416216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.627 [2024-07-14 04:02:24.416241] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.627 qpair failed and we were unable to recover it. 00:30:05.627 [2024-07-14 04:02:24.416387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.627 [2024-07-14 04:02:24.416539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.627 [2024-07-14 04:02:24.416565] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.627 qpair failed and we were unable to recover it. 00:30:05.627 [2024-07-14 04:02:24.416715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.627 [2024-07-14 04:02:24.416856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.627 [2024-07-14 04:02:24.416887] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.627 qpair failed and we were unable to recover it. 
00:30:05.627 [2024-07-14 04:02:24.417051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.627 [2024-07-14 04:02:24.417196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.627 [2024-07-14 04:02:24.417221] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.627 qpair failed and we were unable to recover it. 00:30:05.627 [2024-07-14 04:02:24.417376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.627 [2024-07-14 04:02:24.417545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.627 [2024-07-14 04:02:24.417569] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.627 qpair failed and we were unable to recover it. 00:30:05.627 [2024-07-14 04:02:24.417743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.627 [2024-07-14 04:02:24.417905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.627 [2024-07-14 04:02:24.417930] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.627 qpair failed and we were unable to recover it. 00:30:05.627 [2024-07-14 04:02:24.418107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.627 [2024-07-14 04:02:24.418271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.627 [2024-07-14 04:02:24.418296] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.627 qpair failed and we were unable to recover it. 00:30:05.627 [2024-07-14 04:02:24.418465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.627 [2024-07-14 04:02:24.418634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.627 [2024-07-14 04:02:24.418658] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.627 qpair failed and we were unable to recover it. 00:30:05.627 [2024-07-14 04:02:24.418825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.627 [2024-07-14 04:02:24.418982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.627 [2024-07-14 04:02:24.419008] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.627 qpair failed and we were unable to recover it. 00:30:05.627 [2024-07-14 04:02:24.419174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.627 [2024-07-14 04:02:24.419349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.627 [2024-07-14 04:02:24.419373] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.627 qpair failed and we were unable to recover it. 
00:30:05.627 [2024-07-14 04:02:24.419554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.627 [2024-07-14 04:02:24.419735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.627 [2024-07-14 04:02:24.419759] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.627 qpair failed and we were unable to recover it. 00:30:05.627 [2024-07-14 04:02:24.419932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.627 [2024-07-14 04:02:24.420104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.627 [2024-07-14 04:02:24.420130] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.627 qpair failed and we were unable to recover it. 00:30:05.627 [2024-07-14 04:02:24.420306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.627 [2024-07-14 04:02:24.420470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.627 [2024-07-14 04:02:24.420495] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.627 qpair failed and we were unable to recover it. 00:30:05.627 [2024-07-14 04:02:24.420636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.627 [2024-07-14 04:02:24.420785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.627 [2024-07-14 04:02:24.420812] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.627 qpair failed and we were unable to recover it. 00:30:05.627 [2024-07-14 04:02:24.420987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.627 [2024-07-14 04:02:24.421143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.627 [2024-07-14 04:02:24.421167] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.627 qpair failed and we were unable to recover it. 00:30:05.627 [2024-07-14 04:02:24.421315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.627 [2024-07-14 04:02:24.421493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.627 [2024-07-14 04:02:24.421518] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.627 qpair failed and we were unable to recover it. 00:30:05.627 [2024-07-14 04:02:24.421673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.627 [2024-07-14 04:02:24.421818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.627 [2024-07-14 04:02:24.421841] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.627 qpair failed and we were unable to recover it. 
00:30:05.627 [2024-07-14 04:02:24.421995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.627 [2024-07-14 04:02:24.422148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.627 [2024-07-14 04:02:24.422176] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.627 qpair failed and we were unable to recover it. 00:30:05.627 [2024-07-14 04:02:24.422333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.627 [2024-07-14 04:02:24.422508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.627 [2024-07-14 04:02:24.422533] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.627 qpair failed and we were unable to recover it. 00:30:05.627 [2024-07-14 04:02:24.422703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.627 [2024-07-14 04:02:24.422859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.627 [2024-07-14 04:02:24.422894] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.627 qpair failed and we were unable to recover it. 00:30:05.627 [2024-07-14 04:02:24.423040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.627 [2024-07-14 04:02:24.423207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.627 [2024-07-14 04:02:24.423231] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.627 qpair failed and we were unable to recover it. 00:30:05.627 [2024-07-14 04:02:24.423389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.627 [2024-07-14 04:02:24.423535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.627 [2024-07-14 04:02:24.423564] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.627 qpair failed and we were unable to recover it. 00:30:05.627 [2024-07-14 04:02:24.423717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.627 [2024-07-14 04:02:24.423908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.627 [2024-07-14 04:02:24.423934] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.627 qpair failed and we were unable to recover it. 00:30:05.627 [2024-07-14 04:02:24.424111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.627 [2024-07-14 04:02:24.424286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.627 [2024-07-14 04:02:24.424310] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.627 qpair failed and we were unable to recover it. 
00:30:05.627 [2024-07-14 04:02:24.424470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.627 [2024-07-14 04:02:24.424613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.627 [2024-07-14 04:02:24.424638] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.627 qpair failed and we were unable to recover it. 00:30:05.627 [2024-07-14 04:02:24.424788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.627 [2024-07-14 04:02:24.424959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.627 [2024-07-14 04:02:24.424984] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.627 qpair failed and we were unable to recover it. 00:30:05.627 [2024-07-14 04:02:24.425158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.627 [2024-07-14 04:02:24.425337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.627 [2024-07-14 04:02:24.425362] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.627 qpair failed and we were unable to recover it. 00:30:05.627 [2024-07-14 04:02:24.425537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.627 [2024-07-14 04:02:24.425708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.627 [2024-07-14 04:02:24.425733] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.627 qpair failed and we were unable to recover it. 00:30:05.627 [2024-07-14 04:02:24.425882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.627 [2024-07-14 04:02:24.426026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.627 [2024-07-14 04:02:24.426050] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.627 qpair failed and we were unable to recover it. 00:30:05.627 [2024-07-14 04:02:24.426191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.627 [2024-07-14 04:02:24.426365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.627 [2024-07-14 04:02:24.426389] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.627 qpair failed and we were unable to recover it. 00:30:05.627 [2024-07-14 04:02:24.426571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.627 [2024-07-14 04:02:24.426718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.627 [2024-07-14 04:02:24.426742] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.627 qpair failed and we were unable to recover it. 
00:30:05.627 [2024-07-14 04:02:24.426936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.627 [2024-07-14 04:02:24.427078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.627 [2024-07-14 04:02:24.427106] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.627 qpair failed and we were unable to recover it. 00:30:05.627 [2024-07-14 04:02:24.427300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.627 [2024-07-14 04:02:24.427450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.627 [2024-07-14 04:02:24.427474] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.627 qpair failed and we were unable to recover it. 00:30:05.627 [2024-07-14 04:02:24.427612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.627 [2024-07-14 04:02:24.427782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.627 [2024-07-14 04:02:24.427806] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.627 qpair failed and we were unable to recover it. 00:30:05.627 [2024-07-14 04:02:24.427976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.627 [2024-07-14 04:02:24.428129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.627 [2024-07-14 04:02:24.428153] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.627 qpair failed and we were unable to recover it. 00:30:05.627 [2024-07-14 04:02:24.428322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.627 [2024-07-14 04:02:24.428500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.627 [2024-07-14 04:02:24.428525] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.627 qpair failed and we were unable to recover it. 00:30:05.627 [2024-07-14 04:02:24.428676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.627 [2024-07-14 04:02:24.428849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.627 [2024-07-14 04:02:24.428879] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.627 qpair failed and we were unable to recover it. 00:30:05.627 [2024-07-14 04:02:24.429046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.627 [2024-07-14 04:02:24.429192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.627 [2024-07-14 04:02:24.429216] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.627 qpair failed and we were unable to recover it. 
00:30:05.627 [2024-07-14 04:02:24.429370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.627 [2024-07-14 04:02:24.429542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.627 [2024-07-14 04:02:24.429566] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.627 qpair failed and we were unable to recover it. 00:30:05.627 [2024-07-14 04:02:24.429756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.627 [2024-07-14 04:02:24.429901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.628 [2024-07-14 04:02:24.429926] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.628 qpair failed and we were unable to recover it. 00:30:05.628 [2024-07-14 04:02:24.430099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.628 [2024-07-14 04:02:24.430263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.628 [2024-07-14 04:02:24.430287] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.628 qpair failed and we were unable to recover it. 00:30:05.628 [2024-07-14 04:02:24.430446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.628 [2024-07-14 04:02:24.430624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.628 [2024-07-14 04:02:24.430653] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.628 qpair failed and we were unable to recover it. 00:30:05.628 [2024-07-14 04:02:24.430801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.628 [2024-07-14 04:02:24.430973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.628 [2024-07-14 04:02:24.430997] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.628 qpair failed and we were unable to recover it. 00:30:05.628 [2024-07-14 04:02:24.431202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.628 [2024-07-14 04:02:24.431376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.628 [2024-07-14 04:02:24.431401] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.628 qpair failed and we were unable to recover it. 00:30:05.628 [2024-07-14 04:02:24.431556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.628 [2024-07-14 04:02:24.431695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.628 [2024-07-14 04:02:24.431720] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.628 qpair failed and we were unable to recover it. 
00:30:05.628 [2024-07-14 04:02:24.431860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.628 [2024-07-14 04:02:24.432039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.628 [2024-07-14 04:02:24.432063] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.628 qpair failed and we were unable to recover it. 00:30:05.628 [2024-07-14 04:02:24.432213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.628 [2024-07-14 04:02:24.432357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.628 [2024-07-14 04:02:24.432380] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.628 qpair failed and we were unable to recover it. 00:30:05.628 [2024-07-14 04:02:24.432555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.628 [2024-07-14 04:02:24.432689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.628 [2024-07-14 04:02:24.432714] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.628 qpair failed and we were unable to recover it. 00:30:05.628 [2024-07-14 04:02:24.432890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.628 [2024-07-14 04:02:24.433058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.628 [2024-07-14 04:02:24.433082] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.628 qpair failed and we were unable to recover it. 00:30:05.628 [2024-07-14 04:02:24.433262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.628 [2024-07-14 04:02:24.433441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.628 [2024-07-14 04:02:24.433465] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.628 qpair failed and we were unable to recover it. 00:30:05.628 [2024-07-14 04:02:24.433614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.628 [2024-07-14 04:02:24.433752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.628 [2024-07-14 04:02:24.433777] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.628 qpair failed and we were unable to recover it. 00:30:05.628 [2024-07-14 04:02:24.433926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.628 [2024-07-14 04:02:24.434097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.628 [2024-07-14 04:02:24.434125] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.628 qpair failed and we were unable to recover it. 
00:30:05.628 [2024-07-14 04:02:24.434305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.628 [2024-07-14 04:02:24.434451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.628 [2024-07-14 04:02:24.434475] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.628 qpair failed and we were unable to recover it. 00:30:05.628 [2024-07-14 04:02:24.434633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.628 [2024-07-14 04:02:24.434778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.628 [2024-07-14 04:02:24.434803] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.628 qpair failed and we were unable to recover it. 00:30:05.628 [2024-07-14 04:02:24.434974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.628 [2024-07-14 04:02:24.435137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.628 [2024-07-14 04:02:24.435161] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.628 qpair failed and we were unable to recover it. 00:30:05.628 [2024-07-14 04:02:24.435313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.628 [2024-07-14 04:02:24.435465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.628 [2024-07-14 04:02:24.435490] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.628 qpair failed and we were unable to recover it. 00:30:05.628 [2024-07-14 04:02:24.435640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.628 [2024-07-14 04:02:24.435810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.628 [2024-07-14 04:02:24.435834] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.628 qpair failed and we were unable to recover it. 00:30:05.628 [2024-07-14 04:02:24.435997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.628 [2024-07-14 04:02:24.436145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.628 [2024-07-14 04:02:24.436169] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.628 qpair failed and we were unable to recover it. 00:30:05.628 [2024-07-14 04:02:24.436340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.628 [2024-07-14 04:02:24.436518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.628 [2024-07-14 04:02:24.436542] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.628 qpair failed and we were unable to recover it. 
00:30:05.628 [2024-07-14 04:02:24.436688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.628 [2024-07-14 04:02:24.436831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.628 [2024-07-14 04:02:24.436854] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.628 qpair failed and we were unable to recover it. 00:30:05.628 [2024-07-14 04:02:24.437010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.628 [2024-07-14 04:02:24.437161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.628 [2024-07-14 04:02:24.437187] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.628 qpair failed and we were unable to recover it. 00:30:05.628 [2024-07-14 04:02:24.437363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.628 [2024-07-14 04:02:24.437517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.628 [2024-07-14 04:02:24.437543] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.628 qpair failed and we were unable to recover it. 00:30:05.628 [2024-07-14 04:02:24.437728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.628 [2024-07-14 04:02:24.437887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.628 [2024-07-14 04:02:24.437912] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.628 qpair failed and we were unable to recover it. 00:30:05.628 [2024-07-14 04:02:24.438067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.628 [2024-07-14 04:02:24.438213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.628 [2024-07-14 04:02:24.438236] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.628 qpair failed and we were unable to recover it. 00:30:05.628 [2024-07-14 04:02:24.438376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.628 [2024-07-14 04:02:24.438516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.628 [2024-07-14 04:02:24.438541] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.628 qpair failed and we were unable to recover it. 00:30:05.628 [2024-07-14 04:02:24.438720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.628 [2024-07-14 04:02:24.438857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.628 [2024-07-14 04:02:24.438887] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.628 qpair failed and we were unable to recover it. 
00:30:05.628 [2024-07-14 04:02:24.439036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.628 [2024-07-14 04:02:24.439182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.628 [2024-07-14 04:02:24.439206] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.628 qpair failed and we were unable to recover it. 00:30:05.628 [2024-07-14 04:02:24.439378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.628 [2024-07-14 04:02:24.439521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.628 [2024-07-14 04:02:24.439545] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.628 qpair failed and we were unable to recover it. 00:30:05.628 [2024-07-14 04:02:24.439723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.628 [2024-07-14 04:02:24.439888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.628 [2024-07-14 04:02:24.439913] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.628 qpair failed and we were unable to recover it. 00:30:05.628 [2024-07-14 04:02:24.440089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.628 [2024-07-14 04:02:24.440258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.628 [2024-07-14 04:02:24.440283] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.628 qpair failed and we were unable to recover it. 00:30:05.628 [2024-07-14 04:02:24.440461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.628 [2024-07-14 04:02:24.440664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.628 [2024-07-14 04:02:24.440688] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.628 qpair failed and we were unable to recover it. 00:30:05.628 [2024-07-14 04:02:24.440826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.628 [2024-07-14 04:02:24.440975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.628 [2024-07-14 04:02:24.441000] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.628 qpair failed and we were unable to recover it. 00:30:05.628 [2024-07-14 04:02:24.441167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.628 [2024-07-14 04:02:24.441314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.628 [2024-07-14 04:02:24.441338] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.628 qpair failed and we were unable to recover it. 
00:30:05.628 [2024-07-14 04:02:24.441530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.628 [2024-07-14 04:02:24.441709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.628 [2024-07-14 04:02:24.441733] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.628 qpair failed and we were unable to recover it. 00:30:05.628 [2024-07-14 04:02:24.441893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.628 [2024-07-14 04:02:24.442069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.628 [2024-07-14 04:02:24.442094] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.628 qpair failed and we were unable to recover it. 00:30:05.628 [2024-07-14 04:02:24.442258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.628 [2024-07-14 04:02:24.442428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.628 [2024-07-14 04:02:24.442453] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.628 qpair failed and we were unable to recover it. 00:30:05.628 [2024-07-14 04:02:24.442601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.628 [2024-07-14 04:02:24.442771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.628 [2024-07-14 04:02:24.442795] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.628 qpair failed and we were unable to recover it. 00:30:05.628 [2024-07-14 04:02:24.442946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.628 [2024-07-14 04:02:24.443131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.628 [2024-07-14 04:02:24.443155] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.628 qpair failed and we were unable to recover it. 00:30:05.628 [2024-07-14 04:02:24.443307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.628 [2024-07-14 04:02:24.443447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.628 [2024-07-14 04:02:24.443471] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.628 qpair failed and we were unable to recover it. 00:30:05.628 [2024-07-14 04:02:24.443628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.628 [2024-07-14 04:02:24.443802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.628 [2024-07-14 04:02:24.443826] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.628 qpair failed and we were unable to recover it. 
00:30:05.628 [2024-07-14 04:02:24.444008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.628 [2024-07-14 04:02:24.444152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.628 [2024-07-14 04:02:24.444177] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.628 qpair failed and we were unable to recover it. 00:30:05.628 [2024-07-14 04:02:24.444358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.628 [2024-07-14 04:02:24.444509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.628 [2024-07-14 04:02:24.444534] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.628 qpair failed and we were unable to recover it. 00:30:05.628 [2024-07-14 04:02:24.444713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.628 [2024-07-14 04:02:24.444907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.628 [2024-07-14 04:02:24.444931] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.628 qpair failed and we were unable to recover it. 00:30:05.628 [2024-07-14 04:02:24.445076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.628 [2024-07-14 04:02:24.445226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.628 [2024-07-14 04:02:24.445250] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.628 qpair failed and we were unable to recover it. 00:30:05.628 [2024-07-14 04:02:24.445402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.628 [2024-07-14 04:02:24.445571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.628 [2024-07-14 04:02:24.445595] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.628 qpair failed and we were unable to recover it. 00:30:05.628 [2024-07-14 04:02:24.445774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.628 [2024-07-14 04:02:24.445921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.628 [2024-07-14 04:02:24.445946] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.628 qpair failed and we were unable to recover it. 00:30:05.628 [2024-07-14 04:02:24.446097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.628 [2024-07-14 04:02:24.446258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.628 [2024-07-14 04:02:24.446282] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.628 qpair failed and we were unable to recover it. 
00:30:05.628 [2024-07-14 04:02:24.446425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.628 [2024-07-14 04:02:24.446575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.628 [2024-07-14 04:02:24.446601] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.628 qpair failed and we were unable to recover it. 00:30:05.628 [2024-07-14 04:02:24.446759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.628 [2024-07-14 04:02:24.446899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.628 [2024-07-14 04:02:24.446923] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.628 qpair failed and we were unable to recover it. 00:30:05.628 [2024-07-14 04:02:24.447077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.628 [2024-07-14 04:02:24.447249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.628 [2024-07-14 04:02:24.447274] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.628 qpair failed and we were unable to recover it. 00:30:05.628 [2024-07-14 04:02:24.447453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.628 [2024-07-14 04:02:24.447617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.628 [2024-07-14 04:02:24.447642] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.628 qpair failed and we were unable to recover it. 00:30:05.628 [2024-07-14 04:02:24.447784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.628 [2024-07-14 04:02:24.447975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.628 [2024-07-14 04:02:24.448000] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.628 qpair failed and we were unable to recover it. 00:30:05.628 [2024-07-14 04:02:24.448166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.628 [2024-07-14 04:02:24.448314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.628 [2024-07-14 04:02:24.448339] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.628 qpair failed and we were unable to recover it. 00:30:05.628 [2024-07-14 04:02:24.448478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.628 [2024-07-14 04:02:24.448650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.629 [2024-07-14 04:02:24.448674] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.629 qpair failed and we were unable to recover it. 
00:30:05.629 [2024-07-14 04:02:24.448849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.629 [2024-07-14 04:02:24.448999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.629 [2024-07-14 04:02:24.449023] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.629 qpair failed and we were unable to recover it. 00:30:05.629 [2024-07-14 04:02:24.449199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.629 [2024-07-14 04:02:24.449377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.629 [2024-07-14 04:02:24.449401] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.629 qpair failed and we were unable to recover it. 00:30:05.629 [2024-07-14 04:02:24.449550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.629 [2024-07-14 04:02:24.449695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.629 [2024-07-14 04:02:24.449719] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.629 qpair failed and we were unable to recover it. 00:30:05.629 [2024-07-14 04:02:24.449877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.629 [2024-07-14 04:02:24.450028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.629 [2024-07-14 04:02:24.450052] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.629 qpair failed and we were unable to recover it. 00:30:05.629 [2024-07-14 04:02:24.450232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.629 [2024-07-14 04:02:24.450408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.629 [2024-07-14 04:02:24.450432] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.629 qpair failed and we were unable to recover it. 00:30:05.629 [2024-07-14 04:02:24.450585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.629 [2024-07-14 04:02:24.450758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.629 [2024-07-14 04:02:24.450782] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.629 qpair failed and we were unable to recover it. 00:30:05.629 [2024-07-14 04:02:24.450927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.629 [2024-07-14 04:02:24.451069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.629 [2024-07-14 04:02:24.451093] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.629 qpair failed and we were unable to recover it. 
00:30:05.629 [2024-07-14 04:02:24.451270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.629 [2024-07-14 04:02:24.451426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.629 [2024-07-14 04:02:24.451450] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.629 qpair failed and we were unable to recover it. 00:30:05.629 [2024-07-14 04:02:24.451628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.629 [2024-07-14 04:02:24.451771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.629 [2024-07-14 04:02:24.451796] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.629 qpair failed and we were unable to recover it. 00:30:05.629 [2024-07-14 04:02:24.451942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.629 [2024-07-14 04:02:24.452125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.629 [2024-07-14 04:02:24.452150] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.629 qpair failed and we were unable to recover it. 00:30:05.629 [2024-07-14 04:02:24.452327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.629 [2024-07-14 04:02:24.452485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.629 [2024-07-14 04:02:24.452510] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.629 qpair failed and we were unable to recover it. 00:30:05.629 [2024-07-14 04:02:24.452662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.629 [2024-07-14 04:02:24.452801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.629 [2024-07-14 04:02:24.452824] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.629 qpair failed and we were unable to recover it. 00:30:05.629 [2024-07-14 04:02:24.453004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.629 [2024-07-14 04:02:24.453157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.629 [2024-07-14 04:02:24.453181] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.629 qpair failed and we were unable to recover it. 00:30:05.629 [2024-07-14 04:02:24.453322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.629 [2024-07-14 04:02:24.453522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.629 [2024-07-14 04:02:24.453547] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.629 qpair failed and we were unable to recover it. 
00:30:05.629 [2024-07-14 04:02:24.453695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.629 [2024-07-14 04:02:24.453846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.629 [2024-07-14 04:02:24.453875] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.629 qpair failed and we were unable to recover it. 00:30:05.629 [2024-07-14 04:02:24.454029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.629 [2024-07-14 04:02:24.454173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.629 [2024-07-14 04:02:24.454197] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.629 qpair failed and we were unable to recover it. 00:30:05.629 [2024-07-14 04:02:24.454348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.629 [2024-07-14 04:02:24.454496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.629 [2024-07-14 04:02:24.454522] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.629 qpair failed and we were unable to recover it. 00:30:05.629 [2024-07-14 04:02:24.454678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.629 [2024-07-14 04:02:24.454827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.629 [2024-07-14 04:02:24.454851] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.629 qpair failed and we were unable to recover it. 00:30:05.629 [2024-07-14 04:02:24.455055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.629 [2024-07-14 04:02:24.455202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.629 [2024-07-14 04:02:24.455229] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.629 qpair failed and we were unable to recover it. 00:30:05.629 [2024-07-14 04:02:24.455398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.629 [2024-07-14 04:02:24.455571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.629 [2024-07-14 04:02:24.455595] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.629 qpair failed and we were unable to recover it. 00:30:05.629 [2024-07-14 04:02:24.455766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.629 [2024-07-14 04:02:24.455954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.629 [2024-07-14 04:02:24.455979] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.629 qpair failed and we were unable to recover it. 
00:30:05.629 [2024-07-14 04:02:24.456121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.629 [2024-07-14 04:02:24.456262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.629 [2024-07-14 04:02:24.456287] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.629 qpair failed and we were unable to recover it. 00:30:05.629 [2024-07-14 04:02:24.456439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.629 [2024-07-14 04:02:24.456614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.629 [2024-07-14 04:02:24.456637] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.629 qpair failed and we were unable to recover it. 00:30:05.629 [2024-07-14 04:02:24.456792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.629 [2024-07-14 04:02:24.456970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.629 [2024-07-14 04:02:24.456995] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.629 qpair failed and we were unable to recover it. 00:30:05.629 [2024-07-14 04:02:24.457150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.629 [2024-07-14 04:02:24.457355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.629 [2024-07-14 04:02:24.457381] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.629 qpair failed and we were unable to recover it. 00:30:05.629 [2024-07-14 04:02:24.457533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.629 [2024-07-14 04:02:24.457703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.629 [2024-07-14 04:02:24.457726] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.629 qpair failed and we were unable to recover it. 00:30:05.629 [2024-07-14 04:02:24.457893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.629 [2024-07-14 04:02:24.458046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.629 [2024-07-14 04:02:24.458070] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.629 qpair failed and we were unable to recover it. 00:30:05.629 [2024-07-14 04:02:24.458241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.629 [2024-07-14 04:02:24.458392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.629 [2024-07-14 04:02:24.458418] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.629 qpair failed and we were unable to recover it. 
00:30:05.629 [2024-07-14 04:02:24.458596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.629 [2024-07-14 04:02:24.458776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.629 [2024-07-14 04:02:24.458801] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.629 qpair failed and we were unable to recover it. 00:30:05.629 [2024-07-14 04:02:24.458976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.629 [2024-07-14 04:02:24.459139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.629 [2024-07-14 04:02:24.459163] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.629 qpair failed and we were unable to recover it. 00:30:05.629 [2024-07-14 04:02:24.459312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.629 [2024-07-14 04:02:24.459478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.629 [2024-07-14 04:02:24.459502] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.629 qpair failed and we were unable to recover it. 00:30:05.629 [2024-07-14 04:02:24.459654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.629 [2024-07-14 04:02:24.459822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.629 [2024-07-14 04:02:24.459846] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.629 qpair failed and we were unable to recover it. 00:30:05.629 [2024-07-14 04:02:24.460025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.629 [2024-07-14 04:02:24.460172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.629 [2024-07-14 04:02:24.460197] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.629 qpair failed and we were unable to recover it. 00:30:05.629 [2024-07-14 04:02:24.460398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.629 [2024-07-14 04:02:24.460543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.629 [2024-07-14 04:02:24.460567] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.629 qpair failed and we were unable to recover it. 00:30:05.629 [2024-07-14 04:02:24.460711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.629 [2024-07-14 04:02:24.460849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.629 [2024-07-14 04:02:24.460881] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.629 qpair failed and we were unable to recover it. 
00:30:05.629 [2024-07-14 04:02:24.461061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.629 [2024-07-14 04:02:24.461207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.629 [2024-07-14 04:02:24.461231] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.629 qpair failed and we were unable to recover it. 00:30:05.629 [2024-07-14 04:02:24.461387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.629 [2024-07-14 04:02:24.461532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.629 [2024-07-14 04:02:24.461558] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.629 qpair failed and we were unable to recover it. 00:30:05.629 [2024-07-14 04:02:24.461700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.629 [2024-07-14 04:02:24.461873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.629 [2024-07-14 04:02:24.461897] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.629 qpair failed and we were unable to recover it. 00:30:05.629 [2024-07-14 04:02:24.462043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.629 [2024-07-14 04:02:24.462231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.629 [2024-07-14 04:02:24.462255] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.629 qpair failed and we were unable to recover it. 00:30:05.629 [2024-07-14 04:02:24.462429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.629 [2024-07-14 04:02:24.462577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.629 [2024-07-14 04:02:24.462603] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.629 qpair failed and we were unable to recover it. 00:30:05.629 [2024-07-14 04:02:24.462757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.629 [2024-07-14 04:02:24.462953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.629 [2024-07-14 04:02:24.462978] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.629 qpair failed and we were unable to recover it. 00:30:05.629 [2024-07-14 04:02:24.463127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.629 [2024-07-14 04:02:24.463297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.629 [2024-07-14 04:02:24.463321] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.629 qpair failed and we were unable to recover it. 
00:30:05.629 [2024-07-14 04:02:24.463471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.629 [2024-07-14 04:02:24.463636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.629 [2024-07-14 04:02:24.463660] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.629 qpair failed and we were unable to recover it. 00:30:05.629 [2024-07-14 04:02:24.463835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.629 [2024-07-14 04:02:24.463989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.629 [2024-07-14 04:02:24.464015] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.629 qpair failed and we were unable to recover it. 00:30:05.629 [2024-07-14 04:02:24.464186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.629 [2024-07-14 04:02:24.464354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.629 [2024-07-14 04:02:24.464379] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.629 qpair failed and we were unable to recover it. 00:30:05.629 [2024-07-14 04:02:24.464519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.629 [2024-07-14 04:02:24.464664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.629 [2024-07-14 04:02:24.464688] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.629 qpair failed and we were unable to recover it. 00:30:05.629 [2024-07-14 04:02:24.464858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.629 [2024-07-14 04:02:24.465006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.629 [2024-07-14 04:02:24.465031] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.629 qpair failed and we were unable to recover it. 00:30:05.629 [2024-07-14 04:02:24.465208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.629 [2024-07-14 04:02:24.465385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.629 [2024-07-14 04:02:24.465409] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.629 qpair failed and we were unable to recover it. 00:30:05.629 [2024-07-14 04:02:24.465554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.629 [2024-07-14 04:02:24.465715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.629 [2024-07-14 04:02:24.465739] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.629 qpair failed and we were unable to recover it. 
00:30:05.629 [2024-07-14 04:02:24.465934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.629 [2024-07-14 04:02:24.466082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.629 [2024-07-14 04:02:24.466106] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.629 qpair failed and we were unable to recover it. 00:30:05.629 [2024-07-14 04:02:24.466266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.629 [2024-07-14 04:02:24.466408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.630 [2024-07-14 04:02:24.466431] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.630 qpair failed and we were unable to recover it. 00:30:05.630 [2024-07-14 04:02:24.466579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.630 [2024-07-14 04:02:24.466740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.630 [2024-07-14 04:02:24.466764] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.630 qpair failed and we were unable to recover it. 00:30:05.630 [2024-07-14 04:02:24.466931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.630 [2024-07-14 04:02:24.467080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.630 [2024-07-14 04:02:24.467105] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.630 qpair failed and we were unable to recover it. 00:30:05.630 [2024-07-14 04:02:24.467288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.630 [2024-07-14 04:02:24.467463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.630 [2024-07-14 04:02:24.467487] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.630 qpair failed and we were unable to recover it. 00:30:05.630 [2024-07-14 04:02:24.467651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.630 [2024-07-14 04:02:24.467800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.630 [2024-07-14 04:02:24.467823] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.630 qpair failed and we were unable to recover it. 00:30:05.630 [2024-07-14 04:02:24.468006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.630 [2024-07-14 04:02:24.468151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.630 [2024-07-14 04:02:24.468175] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.630 qpair failed and we were unable to recover it. 
00:30:05.630 [2024-07-14 04:02:24.468351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.630 [2024-07-14 04:02:24.468494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.630 [2024-07-14 04:02:24.468518] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.630 qpair failed and we were unable to recover it. 00:30:05.630 [2024-07-14 04:02:24.468695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.630 [2024-07-14 04:02:24.468875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.630 [2024-07-14 04:02:24.468901] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.630 qpair failed and we were unable to recover it. 00:30:05.630 [2024-07-14 04:02:24.469078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.630 [2024-07-14 04:02:24.469233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.630 [2024-07-14 04:02:24.469259] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.630 qpair failed and we were unable to recover it. 00:30:05.630 [2024-07-14 04:02:24.469404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.630 [2024-07-14 04:02:24.469579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.630 [2024-07-14 04:02:24.469603] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.630 qpair failed and we were unable to recover it. 00:30:05.630 [2024-07-14 04:02:24.469752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.630 [2024-07-14 04:02:24.469936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.630 [2024-07-14 04:02:24.469962] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.630 qpair failed and we were unable to recover it. 00:30:05.630 [2024-07-14 04:02:24.470116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.630 [2024-07-14 04:02:24.470283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.630 [2024-07-14 04:02:24.470307] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.630 qpair failed and we were unable to recover it. 00:30:05.630 [2024-07-14 04:02:24.470458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.630 [2024-07-14 04:02:24.470602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.630 [2024-07-14 04:02:24.470626] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.630 qpair failed and we were unable to recover it. 
00:30:05.630 [2024-07-14 04:02:24.470771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.630 [2024-07-14 04:02:24.470925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.630 [2024-07-14 04:02:24.470951] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.630 qpair failed and we were unable to recover it. 00:30:05.630 [2024-07-14 04:02:24.471147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.630 [2024-07-14 04:02:24.471294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.630 [2024-07-14 04:02:24.471321] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.630 qpair failed and we were unable to recover it. 00:30:05.630 [2024-07-14 04:02:24.471494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.630 [2024-07-14 04:02:24.471657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.630 [2024-07-14 04:02:24.471681] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.630 qpair failed and we were unable to recover it. 00:30:05.630 [2024-07-14 04:02:24.471824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.630 [2024-07-14 04:02:24.471976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.630 [2024-07-14 04:02:24.472003] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.630 qpair failed and we were unable to recover it. 00:30:05.630 [2024-07-14 04:02:24.472163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.630 [2024-07-14 04:02:24.472331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.630 [2024-07-14 04:02:24.472356] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.630 qpair failed and we were unable to recover it. 00:30:05.630 [2024-07-14 04:02:24.472534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.630 [2024-07-14 04:02:24.472700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.630 [2024-07-14 04:02:24.472724] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.630 qpair failed and we were unable to recover it. 00:30:05.630 [2024-07-14 04:02:24.472897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.630 [2024-07-14 04:02:24.473068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.630 [2024-07-14 04:02:24.473093] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.630 qpair failed and we were unable to recover it. 
00:30:05.630 [2024-07-14 04:02:24.473245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.630 [2024-07-14 04:02:24.473417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.630 [2024-07-14 04:02:24.473441] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.630 qpair failed and we were unable to recover it. 00:30:05.630 [2024-07-14 04:02:24.473612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.630 [2024-07-14 04:02:24.473784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.630 [2024-07-14 04:02:24.473808] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.630 qpair failed and we were unable to recover it. 00:30:05.630 [2024-07-14 04:02:24.473979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.630 [2024-07-14 04:02:24.474143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.630 [2024-07-14 04:02:24.474167] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.630 qpair failed and we were unable to recover it. 00:30:05.630 [2024-07-14 04:02:24.474341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.630 [2024-07-14 04:02:24.474518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.630 [2024-07-14 04:02:24.474542] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.630 qpair failed and we were unable to recover it. 00:30:05.630 [2024-07-14 04:02:24.474688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.630 [2024-07-14 04:02:24.474831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.630 [2024-07-14 04:02:24.474856] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.630 qpair failed and we were unable to recover it. 00:30:05.630 [2024-07-14 04:02:24.475016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.630 [2024-07-14 04:02:24.475172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.630 [2024-07-14 04:02:24.475198] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.630 qpair failed and we were unable to recover it. 00:30:05.630 [2024-07-14 04:02:24.475346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.630 [2024-07-14 04:02:24.475517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.630 [2024-07-14 04:02:24.475542] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.630 qpair failed and we were unable to recover it. 
00:30:05.630 [2024-07-14 04:02:24.475696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.630 [2024-07-14 04:02:24.475844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.630 [2024-07-14 04:02:24.475876] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.630 qpair failed and we were unable to recover it. 00:30:05.630 [2024-07-14 04:02:24.476029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.630 [2024-07-14 04:02:24.476187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.630 [2024-07-14 04:02:24.476212] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.630 qpair failed and we were unable to recover it. 00:30:05.630 [2024-07-14 04:02:24.476381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.630 [2024-07-14 04:02:24.476551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.630 [2024-07-14 04:02:24.476576] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.630 qpair failed and we were unable to recover it. 00:30:05.630 [2024-07-14 04:02:24.476775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.630 [2024-07-14 04:02:24.476945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.630 [2024-07-14 04:02:24.476970] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.630 qpair failed and we were unable to recover it. 00:30:05.630 [2024-07-14 04:02:24.477118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.630 [2024-07-14 04:02:24.477291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.630 [2024-07-14 04:02:24.477315] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.630 qpair failed and we were unable to recover it. 00:30:05.630 [2024-07-14 04:02:24.477464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.630 [2024-07-14 04:02:24.477606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.630 [2024-07-14 04:02:24.477630] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.630 qpair failed and we were unable to recover it. 00:30:05.630 [2024-07-14 04:02:24.477781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.630 [2024-07-14 04:02:24.477927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.630 [2024-07-14 04:02:24.477953] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.630 qpair failed and we were unable to recover it. 
00:30:05.630 [2024-07-14 04:02:24.478101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.630 [2024-07-14 04:02:24.478247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.630 [2024-07-14 04:02:24.478273] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.630 qpair failed and we were unable to recover it. 00:30:05.630 [2024-07-14 04:02:24.478436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.630 [2024-07-14 04:02:24.478605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.630 [2024-07-14 04:02:24.478629] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.630 qpair failed and we were unable to recover it. 00:30:05.630 [2024-07-14 04:02:24.478778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.630 [2024-07-14 04:02:24.478952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.630 [2024-07-14 04:02:24.478978] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.630 qpair failed and we were unable to recover it. 00:30:05.630 [2024-07-14 04:02:24.479127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.630 [2024-07-14 04:02:24.479289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.630 [2024-07-14 04:02:24.479313] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.630 qpair failed and we were unable to recover it. 00:30:05.630 [2024-07-14 04:02:24.479460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.630 [2024-07-14 04:02:24.479606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.630 [2024-07-14 04:02:24.479636] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.630 qpair failed and we were unable to recover it. 00:30:05.630 [2024-07-14 04:02:24.479783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.630 [2024-07-14 04:02:24.479960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.630 [2024-07-14 04:02:24.479986] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.630 qpair failed and we were unable to recover it. 00:30:05.630 [2024-07-14 04:02:24.480158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.630 [2024-07-14 04:02:24.480306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.630 [2024-07-14 04:02:24.480331] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.630 qpair failed and we were unable to recover it. 
00:30:05.630 [2024-07-14 04:02:24.480514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.630 [2024-07-14 04:02:24.480659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.630 [2024-07-14 04:02:24.480683] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.630 qpair failed and we were unable to recover it. 00:30:05.630 [2024-07-14 04:02:24.480864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.630 [2024-07-14 04:02:24.481034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.630 [2024-07-14 04:02:24.481058] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.630 qpair failed and we were unable to recover it. 00:30:05.630 [2024-07-14 04:02:24.481226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.630 [2024-07-14 04:02:24.481369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.630 [2024-07-14 04:02:24.481392] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.630 qpair failed and we were unable to recover it. 00:30:05.630 [2024-07-14 04:02:24.481576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.630 [2024-07-14 04:02:24.481746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.630 [2024-07-14 04:02:24.481770] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.630 qpair failed and we were unable to recover it. 00:30:05.630 [2024-07-14 04:02:24.481938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.630 [2024-07-14 04:02:24.482083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.630 [2024-07-14 04:02:24.482107] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.630 qpair failed and we were unable to recover it. 00:30:05.630 [2024-07-14 04:02:24.482314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.630 [2024-07-14 04:02:24.482495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.630 [2024-07-14 04:02:24.482520] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.630 qpair failed and we were unable to recover it. 00:30:05.630 [2024-07-14 04:02:24.482665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.630 [2024-07-14 04:02:24.482807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.630 [2024-07-14 04:02:24.482831] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.630 qpair failed and we were unable to recover it. 
00:30:05.630 [2024-07-14 04:02:24.483010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.630 [2024-07-14 04:02:24.483161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.630 [2024-07-14 04:02:24.483189] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.630 qpair failed and we were unable to recover it. 00:30:05.630 [2024-07-14 04:02:24.483333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.630 [2024-07-14 04:02:24.483514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.630 [2024-07-14 04:02:24.483538] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.630 qpair failed and we were unable to recover it. 00:30:05.630 [2024-07-14 04:02:24.483692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.630 [2024-07-14 04:02:24.483832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.630 [2024-07-14 04:02:24.483856] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.630 qpair failed and we were unable to recover it. 00:30:05.630 [2024-07-14 04:02:24.484016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.630 [2024-07-14 04:02:24.484166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.630 [2024-07-14 04:02:24.484192] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.630 qpair failed and we were unable to recover it. 00:30:05.630 [2024-07-14 04:02:24.484347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.630 [2024-07-14 04:02:24.484521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.630 [2024-07-14 04:02:24.484546] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.630 qpair failed and we were unable to recover it. 00:30:05.630 [2024-07-14 04:02:24.484716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.630 [2024-07-14 04:02:24.484895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.630 [2024-07-14 04:02:24.484920] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.630 qpair failed and we were unable to recover it. 00:30:05.630 [2024-07-14 04:02:24.485090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.630 [2024-07-14 04:02:24.485274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.630 [2024-07-14 04:02:24.485298] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.630 qpair failed and we were unable to recover it. 
00:30:05.630 [2024-07-14 04:02:24.485443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.630 [2024-07-14 04:02:24.485614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.630 [2024-07-14 04:02:24.485638] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.630 qpair failed and we were unable to recover it. 00:30:05.630 [2024-07-14 04:02:24.485780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.630 [2024-07-14 04:02:24.485941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.630 [2024-07-14 04:02:24.485965] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.630 qpair failed and we were unable to recover it. 00:30:05.630 [2024-07-14 04:02:24.486134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.630 [2024-07-14 04:02:24.486310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.630 [2024-07-14 04:02:24.486335] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.631 qpair failed and we were unable to recover it. 00:30:05.631 [2024-07-14 04:02:24.486507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.631 [2024-07-14 04:02:24.486680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.631 [2024-07-14 04:02:24.486707] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.631 qpair failed and we were unable to recover it. 00:30:05.631 [2024-07-14 04:02:24.486892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.631 [2024-07-14 04:02:24.487034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.631 [2024-07-14 04:02:24.487059] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.631 qpair failed and we were unable to recover it. 00:30:05.631 [2024-07-14 04:02:24.487210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.631 [2024-07-14 04:02:24.487379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.631 [2024-07-14 04:02:24.487403] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.631 qpair failed and we were unable to recover it. 00:30:05.631 [2024-07-14 04:02:24.487554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.631 [2024-07-14 04:02:24.487717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.631 [2024-07-14 04:02:24.487743] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.631 qpair failed and we were unable to recover it. 
00:30:05.631 [2024-07-14 04:02:24.487918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.631 [2024-07-14 04:02:24.488097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.631 [2024-07-14 04:02:24.488123] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.631 qpair failed and we were unable to recover it. 00:30:05.631 [2024-07-14 04:02:24.488298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.631 [2024-07-14 04:02:24.488477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.631 [2024-07-14 04:02:24.488501] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.631 qpair failed and we were unable to recover it. 00:30:05.631 [2024-07-14 04:02:24.488674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.631 [2024-07-14 04:02:24.488822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.631 [2024-07-14 04:02:24.488848] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.631 qpair failed and we were unable to recover it. 00:30:05.631 [2024-07-14 04:02:24.489058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.631 [2024-07-14 04:02:24.489205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.631 [2024-07-14 04:02:24.489228] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.631 qpair failed and we were unable to recover it. 00:30:05.631 [2024-07-14 04:02:24.489374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.631 [2024-07-14 04:02:24.489571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.631 [2024-07-14 04:02:24.489595] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.631 qpair failed and we were unable to recover it. 00:30:05.631 [2024-07-14 04:02:24.489741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.631 [2024-07-14 04:02:24.489892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.631 [2024-07-14 04:02:24.489917] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.631 qpair failed and we were unable to recover it. 00:30:05.631 [2024-07-14 04:02:24.490068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.631 [2024-07-14 04:02:24.490229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.631 [2024-07-14 04:02:24.490257] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.631 qpair failed and we were unable to recover it. 
00:30:05.631 [2024-07-14 04:02:24.490439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.631 [2024-07-14 04:02:24.490581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.631 [2024-07-14 04:02:24.490606] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.631 qpair failed and we were unable to recover it. 00:30:05.631 [2024-07-14 04:02:24.490776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.631 [2024-07-14 04:02:24.490944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.631 [2024-07-14 04:02:24.490969] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.631 qpair failed and we were unable to recover it. 00:30:05.631 [2024-07-14 04:02:24.491154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.631 [2024-07-14 04:02:24.491329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.631 [2024-07-14 04:02:24.491352] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.631 qpair failed and we were unable to recover it. 00:30:05.631 [2024-07-14 04:02:24.491502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.631 [2024-07-14 04:02:24.491644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.631 [2024-07-14 04:02:24.491669] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.631 qpair failed and we were unable to recover it. 00:30:05.631 [2024-07-14 04:02:24.491817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.631 [2024-07-14 04:02:24.491980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.631 [2024-07-14 04:02:24.492005] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.631 qpair failed and we were unable to recover it. 00:30:05.631 [2024-07-14 04:02:24.492198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.631 [2024-07-14 04:02:24.492373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.631 [2024-07-14 04:02:24.492396] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.631 qpair failed and we were unable to recover it. 00:30:05.631 [2024-07-14 04:02:24.492547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.631 [2024-07-14 04:02:24.492728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.631 [2024-07-14 04:02:24.492752] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.631 qpair failed and we were unable to recover it. 
00:30:05.631 [2024-07-14 04:02:24.492914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.631 [2024-07-14 04:02:24.493093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.631 [2024-07-14 04:02:24.493117] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.631 qpair failed and we were unable to recover it. 00:30:05.631 [2024-07-14 04:02:24.493263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.631 [2024-07-14 04:02:24.493434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.631 [2024-07-14 04:02:24.493459] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.631 qpair failed and we were unable to recover it. 00:30:05.631 [2024-07-14 04:02:24.493640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.631 [2024-07-14 04:02:24.493776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.631 [2024-07-14 04:02:24.493800] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.631 qpair failed and we were unable to recover it. 00:30:05.631 [2024-07-14 04:02:24.493994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.631 [2024-07-14 04:02:24.494133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.631 [2024-07-14 04:02:24.494157] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.631 qpair failed and we were unable to recover it. 00:30:05.631 [2024-07-14 04:02:24.494333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.631 [2024-07-14 04:02:24.494502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.631 [2024-07-14 04:02:24.494527] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.631 qpair failed and we were unable to recover it. 00:30:05.631 [2024-07-14 04:02:24.494670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.631 [2024-07-14 04:02:24.494833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.631 [2024-07-14 04:02:24.494857] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.631 qpair failed and we were unable to recover it. 00:30:05.631 [2024-07-14 04:02:24.495007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.631 [2024-07-14 04:02:24.495209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.631 [2024-07-14 04:02:24.495232] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.631 qpair failed and we were unable to recover it. 
00:30:05.631 [2024-07-14 04:02:24.495384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.631 [2024-07-14 04:02:24.495567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.631 [2024-07-14 04:02:24.495592] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.631 qpair failed and we were unable to recover it. 00:30:05.631 [2024-07-14 04:02:24.495774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.631 [2024-07-14 04:02:24.495947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.631 [2024-07-14 04:02:24.495971] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.631 qpair failed and we were unable to recover it. 00:30:05.631 [2024-07-14 04:02:24.496114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.631 [2024-07-14 04:02:24.496272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.631 [2024-07-14 04:02:24.496297] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.631 qpair failed and we were unable to recover it. 00:30:05.631 [2024-07-14 04:02:24.496449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.631 [2024-07-14 04:02:24.496616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.631 [2024-07-14 04:02:24.496642] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.631 qpair failed and we were unable to recover it. 00:30:05.631 [2024-07-14 04:02:24.496814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.631 [2024-07-14 04:02:24.496966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.631 [2024-07-14 04:02:24.496990] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.631 qpair failed and we were unable to recover it. 00:30:05.631 [2024-07-14 04:02:24.497141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.631 [2024-07-14 04:02:24.497341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.631 [2024-07-14 04:02:24.497366] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.631 qpair failed and we were unable to recover it. 00:30:05.631 [2024-07-14 04:02:24.497512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.631 [2024-07-14 04:02:24.497666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.631 [2024-07-14 04:02:24.497689] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.631 qpair failed and we were unable to recover it. 
00:30:05.631 [2024-07-14 04:02:24.497846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.631 [2024-07-14 04:02:24.498032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.631 [2024-07-14 04:02:24.498056] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.631 qpair failed and we were unable to recover it. 00:30:05.631 [2024-07-14 04:02:24.498205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.631 [2024-07-14 04:02:24.498358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.631 [2024-07-14 04:02:24.498384] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.631 qpair failed and we were unable to recover it. 00:30:05.631 [2024-07-14 04:02:24.498539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.631 [2024-07-14 04:02:24.498721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.631 [2024-07-14 04:02:24.498746] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.631 qpair failed and we were unable to recover it. 00:30:05.631 [2024-07-14 04:02:24.498927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.631 [2024-07-14 04:02:24.499116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.631 [2024-07-14 04:02:24.499141] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.631 qpair failed and we were unable to recover it. 00:30:05.631 [2024-07-14 04:02:24.499286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.631 [2024-07-14 04:02:24.499458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.631 [2024-07-14 04:02:24.499483] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.631 qpair failed and we were unable to recover it. 00:30:05.631 [2024-07-14 04:02:24.499630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.631 [2024-07-14 04:02:24.499804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.631 [2024-07-14 04:02:24.499827] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.631 qpair failed and we were unable to recover it. 00:30:05.631 [2024-07-14 04:02:24.499993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.631 [2024-07-14 04:02:24.500140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.631 [2024-07-14 04:02:24.500164] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.631 qpair failed and we were unable to recover it. 
00:30:05.631 [2024-07-14 04:02:24.500343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.631 [2024-07-14 04:02:24.500523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.631 [2024-07-14 04:02:24.500548] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.631 qpair failed and we were unable to recover it. 00:30:05.631 [2024-07-14 04:02:24.500705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.631 [2024-07-14 04:02:24.500853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.631 [2024-07-14 04:02:24.500882] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.631 qpair failed and we were unable to recover it. 00:30:05.631 [2024-07-14 04:02:24.501046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.631 [2024-07-14 04:02:24.501199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.631 [2024-07-14 04:02:24.501225] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.631 qpair failed and we were unable to recover it. 00:30:05.631 [2024-07-14 04:02:24.501389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.631 [2024-07-14 04:02:24.501553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.631 [2024-07-14 04:02:24.501577] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.631 qpair failed and we were unable to recover it. 00:30:05.631 [2024-07-14 04:02:24.501723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.631 [2024-07-14 04:02:24.501881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.631 [2024-07-14 04:02:24.501908] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.631 qpair failed and we were unable to recover it. 00:30:05.631 [2024-07-14 04:02:24.502115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.631 [2024-07-14 04:02:24.502262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.631 [2024-07-14 04:02:24.502286] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.631 qpair failed and we were unable to recover it. 00:30:05.631 [2024-07-14 04:02:24.502455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.631 [2024-07-14 04:02:24.502597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.631 [2024-07-14 04:02:24.502621] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.631 qpair failed and we were unable to recover it. 
00:30:05.631 [2024-07-14 04:02:24.502771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.631 [2024-07-14 04:02:24.502946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.631 [2024-07-14 04:02:24.502972] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.631 qpair failed and we were unable to recover it. 00:30:05.631 [2024-07-14 04:02:24.503143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.631 [2024-07-14 04:02:24.503295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.631 [2024-07-14 04:02:24.503320] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.631 qpair failed and we were unable to recover it. 00:30:05.631 [2024-07-14 04:02:24.503467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.631 [2024-07-14 04:02:24.503634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.631 [2024-07-14 04:02:24.503659] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.631 qpair failed and we were unable to recover it. 00:30:05.631 [2024-07-14 04:02:24.503825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.631 [2024-07-14 04:02:24.504006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.631 [2024-07-14 04:02:24.504030] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.631 qpair failed and we were unable to recover it. 00:30:05.631 [2024-07-14 04:02:24.504214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.631 [2024-07-14 04:02:24.504355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.631 [2024-07-14 04:02:24.504380] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.631 qpair failed and we were unable to recover it. 00:30:05.631 [2024-07-14 04:02:24.504552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.631 [2024-07-14 04:02:24.504724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.631 [2024-07-14 04:02:24.504749] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.631 qpair failed and we were unable to recover it. 00:30:05.631 [2024-07-14 04:02:24.504894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.631 [2024-07-14 04:02:24.505052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.631 [2024-07-14 04:02:24.505077] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.631 qpair failed and we were unable to recover it. 
00:30:05.631 [2024-07-14 04:02:24.505273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.631 [2024-07-14 04:02:24.505447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.632 [2024-07-14 04:02:24.505472] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.632 qpair failed and we were unable to recover it. 00:30:05.632 [2024-07-14 04:02:24.505655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.632 [2024-07-14 04:02:24.505797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.632 [2024-07-14 04:02:24.505821] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.632 qpair failed and we were unable to recover it. 00:30:05.632 [2024-07-14 04:02:24.506001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.632 [2024-07-14 04:02:24.506150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.632 [2024-07-14 04:02:24.506175] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.632 qpair failed and we were unable to recover it. 00:30:05.632 [2024-07-14 04:02:24.506342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.632 [2024-07-14 04:02:24.506512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.632 [2024-07-14 04:02:24.506537] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.632 qpair failed and we were unable to recover it. 00:30:05.632 [2024-07-14 04:02:24.506704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.632 [2024-07-14 04:02:24.506854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.632 [2024-07-14 04:02:24.506888] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.632 qpair failed and we were unable to recover it. 00:30:05.632 [2024-07-14 04:02:24.507057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.632 [2024-07-14 04:02:24.507198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.632 [2024-07-14 04:02:24.507222] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.632 qpair failed and we were unable to recover it. 00:30:05.632 [2024-07-14 04:02:24.507368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.632 [2024-07-14 04:02:24.507560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.632 [2024-07-14 04:02:24.507585] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.632 qpair failed and we were unable to recover it. 
00:30:05.632 [2024-07-14 04:02:24.507760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.632 [2024-07-14 04:02:24.507907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.632 [2024-07-14 04:02:24.507933] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.632 qpair failed and we were unable to recover it. 00:30:05.632 [2024-07-14 04:02:24.508124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.632 [2024-07-14 04:02:24.508269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.632 [2024-07-14 04:02:24.508293] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.632 qpair failed and we were unable to recover it. 00:30:05.632 [2024-07-14 04:02:24.508466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.632 [2024-07-14 04:02:24.508608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.632 [2024-07-14 04:02:24.508631] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.632 qpair failed and we were unable to recover it. 00:30:05.632 [2024-07-14 04:02:24.508772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.632 [2024-07-14 04:02:24.508939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.632 [2024-07-14 04:02:24.508965] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.632 qpair failed and we were unable to recover it. 00:30:05.632 [2024-07-14 04:02:24.509104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.632 [2024-07-14 04:02:24.509279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.632 [2024-07-14 04:02:24.509303] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.632 qpair failed and we were unable to recover it. 00:30:05.632 [2024-07-14 04:02:24.509482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.632 [2024-07-14 04:02:24.509621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.632 [2024-07-14 04:02:24.509644] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.632 qpair failed and we were unable to recover it. 00:30:05.632 [2024-07-14 04:02:24.509783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.632 [2024-07-14 04:02:24.509967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.632 [2024-07-14 04:02:24.509991] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.632 qpair failed and we were unable to recover it. 
00:30:05.632 [2024-07-14 04:02:24.510139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.632 [2024-07-14 04:02:24.510309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.632 [2024-07-14 04:02:24.510333] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.632 qpair failed and we were unable to recover it. 00:30:05.632 [2024-07-14 04:02:24.510486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.632 [2024-07-14 04:02:24.510630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.632 [2024-07-14 04:02:24.510654] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.632 qpair failed and we were unable to recover it. 00:30:05.632 [2024-07-14 04:02:24.510805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.632 [2024-07-14 04:02:24.510976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.632 [2024-07-14 04:02:24.511001] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.632 qpair failed and we were unable to recover it. 00:30:05.632 [2024-07-14 04:02:24.511179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.632 [2024-07-14 04:02:24.511337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.632 [2024-07-14 04:02:24.511362] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.632 qpair failed and we were unable to recover it. 00:30:05.632 [2024-07-14 04:02:24.511535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.632 [2024-07-14 04:02:24.511678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.632 [2024-07-14 04:02:24.511703] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.632 qpair failed and we were unable to recover it. 00:30:05.632 [2024-07-14 04:02:24.511856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.632 [2024-07-14 04:02:24.512063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.632 [2024-07-14 04:02:24.512088] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.632 qpair failed and we were unable to recover it. 00:30:05.632 [2024-07-14 04:02:24.512237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.632 [2024-07-14 04:02:24.512382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.632 [2024-07-14 04:02:24.512408] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.632 qpair failed and we were unable to recover it. 
00:30:05.632 [2024-07-14 04:02:24.512604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.632 [2024-07-14 04:02:24.512747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.632 [2024-07-14 04:02:24.512772] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.632 qpair failed and we were unable to recover it. 00:30:05.632 [2024-07-14 04:02:24.512918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.632 [2024-07-14 04:02:24.513089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.632 [2024-07-14 04:02:24.513113] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.632 qpair failed and we were unable to recover it. 00:30:05.632 [2024-07-14 04:02:24.513293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.632 [2024-07-14 04:02:24.513441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.632 [2024-07-14 04:02:24.513467] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.632 qpair failed and we were unable to recover it. 00:30:05.632 [2024-07-14 04:02:24.513635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.632 [2024-07-14 04:02:24.513786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.632 [2024-07-14 04:02:24.513813] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.632 qpair failed and we were unable to recover it. 00:30:05.632 [2024-07-14 04:02:24.513982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.632 [2024-07-14 04:02:24.514123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.632 [2024-07-14 04:02:24.514147] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.632 qpair failed and we were unable to recover it. 00:30:05.632 [2024-07-14 04:02:24.514298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.632 [2024-07-14 04:02:24.514437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.632 [2024-07-14 04:02:24.514462] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.632 qpair failed and we were unable to recover it. 00:30:05.632 [2024-07-14 04:02:24.514626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.632 [2024-07-14 04:02:24.514800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.632 [2024-07-14 04:02:24.514825] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.632 qpair failed and we were unable to recover it. 
00:30:05.632 [2024-07-14 04:02:24.515013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.632 [2024-07-14 04:02:24.515186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.632 [2024-07-14 04:02:24.515211] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.632 qpair failed and we were unable to recover it. 00:30:05.632 [2024-07-14 04:02:24.515369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.632 [2024-07-14 04:02:24.515505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.632 [2024-07-14 04:02:24.515530] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.632 qpair failed and we were unable to recover it. 00:30:05.632 [2024-07-14 04:02:24.515677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.632 [2024-07-14 04:02:24.515842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.632 [2024-07-14 04:02:24.515872] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.632 qpair failed and we were unable to recover it. 00:30:05.632 [2024-07-14 04:02:24.516017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.632 [2024-07-14 04:02:24.516181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.632 [2024-07-14 04:02:24.516205] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.632 qpair failed and we were unable to recover it. 00:30:05.632 [2024-07-14 04:02:24.516352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.632 [2024-07-14 04:02:24.516521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.632 [2024-07-14 04:02:24.516546] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.632 qpair failed and we were unable to recover it. 00:30:05.632 [2024-07-14 04:02:24.516694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.632 [2024-07-14 04:02:24.516876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.632 [2024-07-14 04:02:24.516902] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.632 qpair failed and we were unable to recover it. 00:30:05.632 [2024-07-14 04:02:24.517056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.632 [2024-07-14 04:02:24.517200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.632 [2024-07-14 04:02:24.517225] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.632 qpair failed and we were unable to recover it. 
00:30:05.632 [2024-07-14 04:02:24.517371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.632 [2024-07-14 04:02:24.517520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.632 [2024-07-14 04:02:24.517545] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.632 qpair failed and we were unable to recover it. 00:30:05.632 [2024-07-14 04:02:24.517723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.632 [2024-07-14 04:02:24.517874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.632 [2024-07-14 04:02:24.517898] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.632 qpair failed and we were unable to recover it. 00:30:05.632 [2024-07-14 04:02:24.518069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.632 [2024-07-14 04:02:24.518229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.632 [2024-07-14 04:02:24.518253] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.632 qpair failed and we were unable to recover it. 00:30:05.632 [2024-07-14 04:02:24.518431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.632 [2024-07-14 04:02:24.518597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.632 [2024-07-14 04:02:24.518621] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.632 qpair failed and we were unable to recover it. 00:30:05.632 [2024-07-14 04:02:24.518795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.632 [2024-07-14 04:02:24.518972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.632 [2024-07-14 04:02:24.518997] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.632 qpair failed and we were unable to recover it. 00:30:05.632 [2024-07-14 04:02:24.519148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.632 [2024-07-14 04:02:24.519317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.632 [2024-07-14 04:02:24.519342] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.632 qpair failed and we were unable to recover it. 00:30:05.632 [2024-07-14 04:02:24.519507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.632 [2024-07-14 04:02:24.519671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.632 [2024-07-14 04:02:24.519696] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.632 qpair failed and we were unable to recover it. 
00:30:05.632 [2024-07-14 04:02:24.519849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.632 [2024-07-14 04:02:24.519998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.632 [2024-07-14 04:02:24.520023] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.632 qpair failed and we were unable to recover it. 00:30:05.632 [2024-07-14 04:02:24.520168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.632 [2024-07-14 04:02:24.520368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.632 [2024-07-14 04:02:24.520392] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.632 qpair failed and we were unable to recover it. 00:30:05.632 [2024-07-14 04:02:24.520540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.632 [2024-07-14 04:02:24.520691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.632 [2024-07-14 04:02:24.520717] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.632 qpair failed and we were unable to recover it. 00:30:05.632 [2024-07-14 04:02:24.520863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.632 [2024-07-14 04:02:24.521010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.632 [2024-07-14 04:02:24.521034] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.632 qpair failed and we were unable to recover it. 00:30:05.632 [2024-07-14 04:02:24.521182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.632 [2024-07-14 04:02:24.521333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.632 [2024-07-14 04:02:24.521358] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.632 qpair failed and we were unable to recover it. 00:30:05.632 [2024-07-14 04:02:24.521506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.632 [2024-07-14 04:02:24.521678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.632 [2024-07-14 04:02:24.521702] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.632 qpair failed and we were unable to recover it. 00:30:05.632 [2024-07-14 04:02:24.521848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.632 [2024-07-14 04:02:24.522008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.632 [2024-07-14 04:02:24.522034] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.632 qpair failed and we were unable to recover it. 
00:30:05.632 [2024-07-14 04:02:24.522177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.632 [2024-07-14 04:02:24.522370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.633 [2024-07-14 04:02:24.522394] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.633 qpair failed and we were unable to recover it. 00:30:05.633 [2024-07-14 04:02:24.522569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.633 [2024-07-14 04:02:24.522742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.633 [2024-07-14 04:02:24.522766] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.633 qpair failed and we were unable to recover it. 00:30:05.633 [2024-07-14 04:02:24.522915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.633 [2024-07-14 04:02:24.523087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.633 [2024-07-14 04:02:24.523111] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.633 qpair failed and we were unable to recover it. 00:30:05.633 [2024-07-14 04:02:24.523256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.633 [2024-07-14 04:02:24.523409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.633 [2024-07-14 04:02:24.523434] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.633 qpair failed and we were unable to recover it. 00:30:05.633 [2024-07-14 04:02:24.523608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.633 [2024-07-14 04:02:24.523771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.633 [2024-07-14 04:02:24.523798] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.633 qpair failed and we were unable to recover it. 00:30:05.633 [2024-07-14 04:02:24.523946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.633 [2024-07-14 04:02:24.524087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.633 [2024-07-14 04:02:24.524110] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.633 qpair failed and we were unable to recover it. 00:30:05.633 [2024-07-14 04:02:24.524306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.633 [2024-07-14 04:02:24.524456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.633 [2024-07-14 04:02:24.524482] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.633 qpair failed and we were unable to recover it. 
00:30:05.633 [2024-07-14 04:02:24.524661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.633 [2024-07-14 04:02:24.524853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.633 [2024-07-14 04:02:24.524884] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.633 qpair failed and we were unable to recover it. 00:30:05.633 [2024-07-14 04:02:24.525032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.633 [2024-07-14 04:02:24.525173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.633 [2024-07-14 04:02:24.525197] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.633 qpair failed and we were unable to recover it. 00:30:05.633 [2024-07-14 04:02:24.525341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.633 [2024-07-14 04:02:24.525485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.633 [2024-07-14 04:02:24.525510] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.633 qpair failed and we were unable to recover it. 00:30:05.633 [2024-07-14 04:02:24.525657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.633 [2024-07-14 04:02:24.525826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.633 [2024-07-14 04:02:24.525850] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.633 qpair failed and we were unable to recover it. 00:30:05.633 [2024-07-14 04:02:24.526013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.633 [2024-07-14 04:02:24.526165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.633 [2024-07-14 04:02:24.526189] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.633 qpair failed and we were unable to recover it. 00:30:05.633 [2024-07-14 04:02:24.526367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.633 [2024-07-14 04:02:24.526542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.633 [2024-07-14 04:02:24.526566] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.633 qpair failed and we were unable to recover it. 00:30:05.633 [2024-07-14 04:02:24.526775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.633 [2024-07-14 04:02:24.526928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.633 [2024-07-14 04:02:24.526955] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.633 qpair failed and we were unable to recover it. 
00:30:05.633 [2024-07-14 04:02:24.527132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.633 [2024-07-14 04:02:24.527278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.633 [2024-07-14 04:02:24.527301] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.633 qpair failed and we were unable to recover it. 00:30:05.633 [2024-07-14 04:02:24.527447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.633 [2024-07-14 04:02:24.527589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.633 [2024-07-14 04:02:24.527613] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.633 qpair failed and we were unable to recover it. 00:30:05.633 [2024-07-14 04:02:24.527789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.633 [2024-07-14 04:02:24.527947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.633 [2024-07-14 04:02:24.527973] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.633 qpair failed and we were unable to recover it. 00:30:05.633 [2024-07-14 04:02:24.528120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.633 [2024-07-14 04:02:24.528310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.633 [2024-07-14 04:02:24.528335] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.633 qpair failed and we were unable to recover it. 00:30:05.633 [2024-07-14 04:02:24.528505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.633 [2024-07-14 04:02:24.528670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.633 [2024-07-14 04:02:24.528695] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.633 qpair failed and we were unable to recover it. 00:30:05.633 [2024-07-14 04:02:24.528875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.633 [2024-07-14 04:02:24.529031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.633 [2024-07-14 04:02:24.529056] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.633 qpair failed and we were unable to recover it. 00:30:05.633 [2024-07-14 04:02:24.529223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.633 [2024-07-14 04:02:24.529374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.633 [2024-07-14 04:02:24.529397] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.633 qpair failed and we were unable to recover it. 
00:30:05.633 [2024-07-14 04:02:24.529557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.633 [2024-07-14 04:02:24.529702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.633 [2024-07-14 04:02:24.529726] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.633 qpair failed and we were unable to recover it. 00:30:05.633 [2024-07-14 04:02:24.529903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.633 [2024-07-14 04:02:24.530055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.633 [2024-07-14 04:02:24.530079] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.633 qpair failed and we were unable to recover it. 00:30:05.633 [2024-07-14 04:02:24.530223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.633 [2024-07-14 04:02:24.530369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.633 [2024-07-14 04:02:24.530393] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.633 qpair failed and we were unable to recover it. 00:30:05.633 [2024-07-14 04:02:24.530571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.633 [2024-07-14 04:02:24.530720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.633 [2024-07-14 04:02:24.530745] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.633 qpair failed and we were unable to recover it. 00:30:05.633 [2024-07-14 04:02:24.530918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.633 [2024-07-14 04:02:24.531064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.633 [2024-07-14 04:02:24.531089] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.633 qpair failed and we were unable to recover it. 00:30:05.633 [2024-07-14 04:02:24.531234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.633 [2024-07-14 04:02:24.531436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.633 [2024-07-14 04:02:24.531461] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.633 qpair failed and we were unable to recover it. 00:30:05.633 [2024-07-14 04:02:24.531604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.633 [2024-07-14 04:02:24.531749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.633 [2024-07-14 04:02:24.531774] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.633 qpair failed and we were unable to recover it. 
00:30:05.633 [2024-07-14 04:02:24.531978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.633 [2024-07-14 04:02:24.532118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.633 [2024-07-14 04:02:24.532143] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.633 qpair failed and we were unable to recover it. 00:30:05.633 [2024-07-14 04:02:24.532346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.633 [2024-07-14 04:02:24.532529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.633 [2024-07-14 04:02:24.532556] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.633 qpair failed and we were unable to recover it. 00:30:05.633 [2024-07-14 04:02:24.532705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.633 [2024-07-14 04:02:24.532848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.633 [2024-07-14 04:02:24.532879] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.633 qpair failed and we were unable to recover it. 00:30:05.633 [2024-07-14 04:02:24.533031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.633 [2024-07-14 04:02:24.533185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.633 [2024-07-14 04:02:24.533209] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.633 qpair failed and we were unable to recover it. 00:30:05.633 [2024-07-14 04:02:24.533388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.633 [2024-07-14 04:02:24.533562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.633 [2024-07-14 04:02:24.533587] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.633 qpair failed and we were unable to recover it. 00:30:05.633 [2024-07-14 04:02:24.533764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.633 [2024-07-14 04:02:24.533945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.633 [2024-07-14 04:02:24.533971] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.633 qpair failed and we were unable to recover it. 00:30:05.633 [2024-07-14 04:02:24.534116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.633 [2024-07-14 04:02:24.534266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.633 [2024-07-14 04:02:24.534292] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.633 qpair failed and we were unable to recover it. 
00:30:05.633 [2024-07-14 04:02:24.534468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.633 [2024-07-14 04:02:24.534617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.633 [2024-07-14 04:02:24.534642] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.633 qpair failed and we were unable to recover it. 00:30:05.633 [2024-07-14 04:02:24.534785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.633 [2024-07-14 04:02:24.534936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.633 [2024-07-14 04:02:24.534963] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.633 qpair failed and we were unable to recover it. 00:30:05.633 [2024-07-14 04:02:24.535149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.633 [2024-07-14 04:02:24.535298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.633 [2024-07-14 04:02:24.535324] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.633 qpair failed and we were unable to recover it. 00:30:05.633 [2024-07-14 04:02:24.535472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.633 [2024-07-14 04:02:24.535651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.633 [2024-07-14 04:02:24.535676] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.633 qpair failed and we were unable to recover it. 00:30:05.633 [2024-07-14 04:02:24.535824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.633 [2024-07-14 04:02:24.536011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.633 [2024-07-14 04:02:24.536041] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.633 qpair failed and we were unable to recover it. 00:30:05.633 [2024-07-14 04:02:24.536199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.633 [2024-07-14 04:02:24.536382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.633 [2024-07-14 04:02:24.536407] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.633 qpair failed and we were unable to recover it. 00:30:05.633 [2024-07-14 04:02:24.536580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.633 [2024-07-14 04:02:24.536750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.633 [2024-07-14 04:02:24.536775] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.633 qpair failed and we were unable to recover it. 
00:30:05.633 [2024-07-14 04:02:24.536971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.633 [2024-07-14 04:02:24.537116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.633 [2024-07-14 04:02:24.537141] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.633 qpair failed and we were unable to recover it. 00:30:05.909 [2024-07-14 04:02:24.537293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.909 [2024-07-14 04:02:24.537470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.909 [2024-07-14 04:02:24.537496] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.909 qpair failed and we were unable to recover it. 00:30:05.909 [2024-07-14 04:02:24.537636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.909 [2024-07-14 04:02:24.537778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.909 [2024-07-14 04:02:24.537802] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.909 qpair failed and we were unable to recover it. 00:30:05.909 [2024-07-14 04:02:24.537977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.909 [2024-07-14 04:02:24.538128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.909 [2024-07-14 04:02:24.538152] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.909 qpair failed and we were unable to recover it. 00:30:05.909 [2024-07-14 04:02:24.538330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.909 [2024-07-14 04:02:24.538478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.909 [2024-07-14 04:02:24.538502] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.909 qpair failed and we were unable to recover it. 00:30:05.909 [2024-07-14 04:02:24.538667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.909 [2024-07-14 04:02:24.538817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.909 [2024-07-14 04:02:24.538841] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.909 qpair failed and we were unable to recover it. 00:30:05.909 [2024-07-14 04:02:24.539062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.909 [2024-07-14 04:02:24.539209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.909 [2024-07-14 04:02:24.539232] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.909 qpair failed and we were unable to recover it. 
00:30:05.909 [2024-07-14 04:02:24.539411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.909 [2024-07-14 04:02:24.539581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.909 [2024-07-14 04:02:24.539610] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.909 qpair failed and we were unable to recover it. 00:30:05.909 [2024-07-14 04:02:24.539783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.909 [2024-07-14 04:02:24.539967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.909 [2024-07-14 04:02:24.539992] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.909 qpair failed and we were unable to recover it. 00:30:05.909 [2024-07-14 04:02:24.540188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.909 [2024-07-14 04:02:24.540358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.909 [2024-07-14 04:02:24.540382] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.909 qpair failed and we were unable to recover it. 00:30:05.909 [2024-07-14 04:02:24.540577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.909 [2024-07-14 04:02:24.540736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.909 [2024-07-14 04:02:24.540760] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.909 qpair failed and we were unable to recover it. 00:30:05.909 [2024-07-14 04:02:24.540935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.909 [2024-07-14 04:02:24.541090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.909 [2024-07-14 04:02:24.541117] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.909 qpair failed and we were unable to recover it. 00:30:05.909 [2024-07-14 04:02:24.541259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.909 [2024-07-14 04:02:24.541407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.909 [2024-07-14 04:02:24.541432] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.909 qpair failed and we were unable to recover it. 00:30:05.909 [2024-07-14 04:02:24.541576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.909 [2024-07-14 04:02:24.541723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.909 [2024-07-14 04:02:24.541746] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.909 qpair failed and we were unable to recover it. 
00:30:05.909 [2024-07-14 04:02:24.541952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.909 [2024-07-14 04:02:24.542101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.909 [2024-07-14 04:02:24.542126] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.909 qpair failed and we were unable to recover it. 00:30:05.910 [2024-07-14 04:02:24.542272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.910 [2024-07-14 04:02:24.542471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.910 [2024-07-14 04:02:24.542495] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.910 qpair failed and we were unable to recover it. 00:30:05.910 [2024-07-14 04:02:24.542689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.910 [2024-07-14 04:02:24.542833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.910 [2024-07-14 04:02:24.542858] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.910 qpair failed and we were unable to recover it. 00:30:05.910 [2024-07-14 04:02:24.543039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.910 [2024-07-14 04:02:24.543191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.910 [2024-07-14 04:02:24.543219] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.910 qpair failed and we were unable to recover it. 00:30:05.910 [2024-07-14 04:02:24.543396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.910 [2024-07-14 04:02:24.543545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.910 [2024-07-14 04:02:24.543570] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.910 qpair failed and we were unable to recover it. 00:30:05.910 [2024-07-14 04:02:24.543736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.910 [2024-07-14 04:02:24.543912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.910 [2024-07-14 04:02:24.543938] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.910 qpair failed and we were unable to recover it. 00:30:05.910 [2024-07-14 04:02:24.544087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.910 [2024-07-14 04:02:24.544270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.910 [2024-07-14 04:02:24.544295] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.910 qpair failed and we were unable to recover it. 
00:30:05.910 [2024-07-14 04:02:24.544437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.910 [2024-07-14 04:02:24.544581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.910 [2024-07-14 04:02:24.544605] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.910 qpair failed and we were unable to recover it. 00:30:05.910 [2024-07-14 04:02:24.544778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.910 [2024-07-14 04:02:24.544945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.910 [2024-07-14 04:02:24.544970] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.910 qpair failed and we were unable to recover it. 00:30:05.910 [2024-07-14 04:02:24.545122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.910 [2024-07-14 04:02:24.545275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.910 [2024-07-14 04:02:24.545299] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.910 qpair failed and we were unable to recover it. 00:30:05.910 [2024-07-14 04:02:24.545442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.910 [2024-07-14 04:02:24.545580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.910 [2024-07-14 04:02:24.545604] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.910 qpair failed and we were unable to recover it. 00:30:05.910 [2024-07-14 04:02:24.545787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.910 [2024-07-14 04:02:24.545934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.910 [2024-07-14 04:02:24.545959] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.910 qpair failed and we were unable to recover it. 00:30:05.910 [2024-07-14 04:02:24.546118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.910 [2024-07-14 04:02:24.546289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.910 [2024-07-14 04:02:24.546313] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.910 qpair failed and we were unable to recover it. 00:30:05.910 [2024-07-14 04:02:24.546505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.910 [2024-07-14 04:02:24.546683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.910 [2024-07-14 04:02:24.546707] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.910 qpair failed and we were unable to recover it. 
00:30:05.910 [2024-07-14 04:02:24.546897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.910 [2024-07-14 04:02:24.547072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.910 [2024-07-14 04:02:24.547097] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.910 qpair failed and we were unable to recover it. 00:30:05.910 [2024-07-14 04:02:24.547241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.910 [2024-07-14 04:02:24.547392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.910 [2024-07-14 04:02:24.547416] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.910 qpair failed and we were unable to recover it. 00:30:05.910 [2024-07-14 04:02:24.547579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.910 [2024-07-14 04:02:24.547751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.910 [2024-07-14 04:02:24.547775] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.910 qpair failed and we were unable to recover it. 00:30:05.910 [2024-07-14 04:02:24.547922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.910 [2024-07-14 04:02:24.548102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.910 [2024-07-14 04:02:24.548126] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.910 qpair failed and we were unable to recover it. 00:30:05.910 [2024-07-14 04:02:24.548272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.910 [2024-07-14 04:02:24.548422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.910 [2024-07-14 04:02:24.548446] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.910 qpair failed and we were unable to recover it. 00:30:05.910 [2024-07-14 04:02:24.548620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.910 [2024-07-14 04:02:24.548821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.910 [2024-07-14 04:02:24.548844] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.910 qpair failed and we were unable to recover it. 00:30:05.910 [2024-07-14 04:02:24.549004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.910 [2024-07-14 04:02:24.549151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.910 [2024-07-14 04:02:24.549176] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.910 qpair failed and we were unable to recover it. 
00:30:05.910 [2024-07-14 04:02:24.549332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.910 [2024-07-14 04:02:24.549478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.910 [2024-07-14 04:02:24.549502] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.910 qpair failed and we were unable to recover it. 00:30:05.910 [2024-07-14 04:02:24.549673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.910 [2024-07-14 04:02:24.549820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.910 [2024-07-14 04:02:24.549844] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.910 qpair failed and we were unable to recover it. 00:30:05.910 [2024-07-14 04:02:24.550024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.910 [2024-07-14 04:02:24.550187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.910 [2024-07-14 04:02:24.550211] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.910 qpair failed and we were unable to recover it. 00:30:05.910 [2024-07-14 04:02:24.550395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.910 [2024-07-14 04:02:24.550533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.910 [2024-07-14 04:02:24.550558] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.910 qpair failed and we were unable to recover it. 00:30:05.910 [2024-07-14 04:02:24.550733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.910 [2024-07-14 04:02:24.550909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.910 [2024-07-14 04:02:24.550934] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.910 qpair failed and we were unable to recover it. 00:30:05.910 [2024-07-14 04:02:24.551076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.910 [2024-07-14 04:02:24.551215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.910 [2024-07-14 04:02:24.551238] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.910 qpair failed and we were unable to recover it. 00:30:05.910 [2024-07-14 04:02:24.551404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.910 [2024-07-14 04:02:24.551594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.910 [2024-07-14 04:02:24.551619] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.910 qpair failed and we were unable to recover it. 
00:30:05.910 [2024-07-14 04:02:24.551791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.910 [2024-07-14 04:02:24.551960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.910 [2024-07-14 04:02:24.551984] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.910 qpair failed and we were unable to recover it. 00:30:05.910 [2024-07-14 04:02:24.552166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.910 [2024-07-14 04:02:24.552310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.910 [2024-07-14 04:02:24.552333] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.910 qpair failed and we were unable to recover it. 00:30:05.910 [2024-07-14 04:02:24.552512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.910 [2024-07-14 04:02:24.552653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.910 [2024-07-14 04:02:24.552677] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.910 qpair failed and we were unable to recover it. 00:30:05.910 [2024-07-14 04:02:24.552823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.911 [2024-07-14 04:02:24.553020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.911 [2024-07-14 04:02:24.553044] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.911 qpair failed and we were unable to recover it. 00:30:05.911 [2024-07-14 04:02:24.553218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.911 [2024-07-14 04:02:24.553357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.911 [2024-07-14 04:02:24.553381] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.911 qpair failed and we were unable to recover it. 00:30:05.911 [2024-07-14 04:02:24.553532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.911 [2024-07-14 04:02:24.553684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.911 [2024-07-14 04:02:24.553710] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.911 qpair failed and we were unable to recover it. 00:30:05.911 [2024-07-14 04:02:24.553873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.911 [2024-07-14 04:02:24.554013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.911 [2024-07-14 04:02:24.554037] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.911 qpair failed and we were unable to recover it. 
00:30:05.911 [2024-07-14 04:02:24.554183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.911 [2024-07-14 04:02:24.554320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.911 [2024-07-14 04:02:24.554344] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.911 qpair failed and we were unable to recover it. 00:30:05.911 [2024-07-14 04:02:24.554489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.911 [2024-07-14 04:02:24.554678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.911 [2024-07-14 04:02:24.554702] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.911 qpair failed and we were unable to recover it. 00:30:05.911 [2024-07-14 04:02:24.554880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.911 [2024-07-14 04:02:24.555053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.911 [2024-07-14 04:02:24.555077] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.911 qpair failed and we were unable to recover it. 00:30:05.911 [2024-07-14 04:02:24.555242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.911 [2024-07-14 04:02:24.555411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.911 [2024-07-14 04:02:24.555436] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.911 qpair failed and we were unable to recover it. 00:30:05.911 [2024-07-14 04:02:24.555608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.911 [2024-07-14 04:02:24.555753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.911 [2024-07-14 04:02:24.555778] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.911 qpair failed and we were unable to recover it. 00:30:05.911 [2024-07-14 04:02:24.555945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.911 [2024-07-14 04:02:24.556108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.911 [2024-07-14 04:02:24.556134] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.911 qpair failed and we were unable to recover it. 00:30:05.911 [2024-07-14 04:02:24.556284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.911 [2024-07-14 04:02:24.556438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.911 [2024-07-14 04:02:24.556462] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.911 qpair failed and we were unable to recover it. 
00:30:05.911 [2024-07-14 04:02:24.556602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.911 [2024-07-14 04:02:24.556746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.911 [2024-07-14 04:02:24.556770] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.911 qpair failed and we were unable to recover it. 00:30:05.911 [2024-07-14 04:02:24.556963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.911 [2024-07-14 04:02:24.557145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.911 [2024-07-14 04:02:24.557170] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.911 qpair failed and we were unable to recover it. 00:30:05.911 [2024-07-14 04:02:24.557326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.911 [2024-07-14 04:02:24.557503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.911 [2024-07-14 04:02:24.557527] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.911 qpair failed and we were unable to recover it. 00:30:05.911 [2024-07-14 04:02:24.557670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.911 [2024-07-14 04:02:24.557875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.911 [2024-07-14 04:02:24.557900] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.911 qpair failed and we were unable to recover it. 00:30:05.911 [2024-07-14 04:02:24.558067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.911 [2024-07-14 04:02:24.558243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.911 [2024-07-14 04:02:24.558269] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.911 qpair failed and we were unable to recover it. 00:30:05.911 [2024-07-14 04:02:24.558442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.911 [2024-07-14 04:02:24.558617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.911 [2024-07-14 04:02:24.558642] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.911 qpair failed and we were unable to recover it. 00:30:05.911 [2024-07-14 04:02:24.558835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.911 [2024-07-14 04:02:24.558986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.911 [2024-07-14 04:02:24.559011] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.911 qpair failed and we were unable to recover it. 
00:30:05.911 [2024-07-14 04:02:24.559155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.911 [2024-07-14 04:02:24.559330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.911 [2024-07-14 04:02:24.559354] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.911 qpair failed and we were unable to recover it. 00:30:05.911 [2024-07-14 04:02:24.559515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.911 [2024-07-14 04:02:24.559673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.911 [2024-07-14 04:02:24.559697] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.911 qpair failed and we were unable to recover it. 00:30:05.911 [2024-07-14 04:02:24.559886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.911 [2024-07-14 04:02:24.560035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.911 [2024-07-14 04:02:24.560060] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.911 qpair failed and we were unable to recover it. 00:30:05.911 [2024-07-14 04:02:24.560231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.911 [2024-07-14 04:02:24.560375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.911 [2024-07-14 04:02:24.560399] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.911 qpair failed and we were unable to recover it. 00:30:05.911 [2024-07-14 04:02:24.560540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.911 [2024-07-14 04:02:24.560681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.911 [2024-07-14 04:02:24.560705] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.911 qpair failed and we were unable to recover it. 00:30:05.911 [2024-07-14 04:02:24.560853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.911 [2024-07-14 04:02:24.561003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.911 [2024-07-14 04:02:24.561028] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.911 qpair failed and we were unable to recover it. 00:30:05.911 [2024-07-14 04:02:24.561205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.911 [2024-07-14 04:02:24.561343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.911 [2024-07-14 04:02:24.561368] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.911 qpair failed and we were unable to recover it. 
00:30:05.911 [2024-07-14 04:02:24.561543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.911 [2024-07-14 04:02:24.561687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.911 [2024-07-14 04:02:24.561711] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.911 qpair failed and we were unable to recover it. 00:30:05.911 [2024-07-14 04:02:24.561852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.911 [2024-07-14 04:02:24.562034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.911 [2024-07-14 04:02:24.562059] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.911 qpair failed and we were unable to recover it. 00:30:05.911 [2024-07-14 04:02:24.562229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.911 [2024-07-14 04:02:24.562424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.911 [2024-07-14 04:02:24.562448] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.911 qpair failed and we were unable to recover it. 00:30:05.911 [2024-07-14 04:02:24.562623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.911 [2024-07-14 04:02:24.562811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.911 [2024-07-14 04:02:24.562835] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.911 qpair failed and we were unable to recover it. 00:30:05.911 [2024-07-14 04:02:24.563015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.911 [2024-07-14 04:02:24.563186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.911 [2024-07-14 04:02:24.563210] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.911 qpair failed and we were unable to recover it. 00:30:05.911 [2024-07-14 04:02:24.563382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.912 [2024-07-14 04:02:24.563558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.912 [2024-07-14 04:02:24.563582] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.912 qpair failed and we were unable to recover it. 00:30:05.912 [2024-07-14 04:02:24.563760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.912 [2024-07-14 04:02:24.563923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.912 [2024-07-14 04:02:24.563949] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.912 qpair failed and we were unable to recover it. 
00:30:05.912 [2024-07-14 04:02:24.564101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.912 [2024-07-14 04:02:24.564291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.912 [2024-07-14 04:02:24.564316] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.912 qpair failed and we were unable to recover it. 00:30:05.912 [2024-07-14 04:02:24.564490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.912 [2024-07-14 04:02:24.564634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.912 [2024-07-14 04:02:24.564658] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.912 qpair failed and we were unable to recover it. 00:30:05.912 [2024-07-14 04:02:24.564816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.912 [2024-07-14 04:02:24.564988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.912 [2024-07-14 04:02:24.565013] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.912 qpair failed and we were unable to recover it. 00:30:05.912 [2024-07-14 04:02:24.565185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.912 [2024-07-14 04:02:24.565325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.912 [2024-07-14 04:02:24.565350] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.912 qpair failed and we were unable to recover it. 00:30:05.912 [2024-07-14 04:02:24.565491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.912 [2024-07-14 04:02:24.565631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.912 [2024-07-14 04:02:24.565655] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.912 qpair failed and we were unable to recover it. 00:30:05.912 [2024-07-14 04:02:24.565804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.912 [2024-07-14 04:02:24.565963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.912 [2024-07-14 04:02:24.565987] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.912 qpair failed and we were unable to recover it. 00:30:05.912 [2024-07-14 04:02:24.566129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.912 [2024-07-14 04:02:24.566287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.912 [2024-07-14 04:02:24.566311] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.912 qpair failed and we were unable to recover it. 
00:30:05.912 [2024-07-14 04:02:24.566466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.912 [2024-07-14 04:02:24.566608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.912 [2024-07-14 04:02:24.566632] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.912 qpair failed and we were unable to recover it. 00:30:05.912 [2024-07-14 04:02:24.566779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.912 [2024-07-14 04:02:24.566924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.912 [2024-07-14 04:02:24.566949] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.912 qpair failed and we were unable to recover it. 00:30:05.912 [2024-07-14 04:02:24.567144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.912 [2024-07-14 04:02:24.567286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.912 [2024-07-14 04:02:24.567310] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.912 qpair failed and we were unable to recover it. 00:30:05.912 [2024-07-14 04:02:24.567483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.912 [2024-07-14 04:02:24.567627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.912 [2024-07-14 04:02:24.567652] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.912 qpair failed and we were unable to recover it. 00:30:05.912 [2024-07-14 04:02:24.567829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.912 [2024-07-14 04:02:24.568035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.912 [2024-07-14 04:02:24.568060] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.912 qpair failed and we were unable to recover it. 00:30:05.912 [2024-07-14 04:02:24.568200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.912 [2024-07-14 04:02:24.568393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.912 [2024-07-14 04:02:24.568417] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.912 qpair failed and we were unable to recover it. 00:30:05.912 [2024-07-14 04:02:24.568566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.912 [2024-07-14 04:02:24.568717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.912 [2024-07-14 04:02:24.568741] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.912 qpair failed and we were unable to recover it. 
00:30:05.912 [2024-07-14 04:02:24.568884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.912 [2024-07-14 04:02:24.569056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.912 [2024-07-14 04:02:24.569080] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.912 qpair failed and we were unable to recover it. 00:30:05.912 [2024-07-14 04:02:24.569232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.912 [2024-07-14 04:02:24.569393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.912 [2024-07-14 04:02:24.569418] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.912 qpair failed and we were unable to recover it. 00:30:05.912 [2024-07-14 04:02:24.569587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.912 [2024-07-14 04:02:24.569785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.912 [2024-07-14 04:02:24.569809] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.912 qpair failed and we were unable to recover it. 00:30:05.912 [2024-07-14 04:02:24.569980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.912 [2024-07-14 04:02:24.570146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.912 [2024-07-14 04:02:24.570170] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.912 qpair failed and we were unable to recover it. 00:30:05.912 [2024-07-14 04:02:24.570325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.912 [2024-07-14 04:02:24.570468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.912 [2024-07-14 04:02:24.570493] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.912 qpair failed and we were unable to recover it. 00:30:05.912 [2024-07-14 04:02:24.570643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.912 [2024-07-14 04:02:24.570806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.912 [2024-07-14 04:02:24.570830] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.912 qpair failed and we were unable to recover it. 00:30:05.912 [2024-07-14 04:02:24.571018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.912 [2024-07-14 04:02:24.571172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.912 [2024-07-14 04:02:24.571196] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.912 qpair failed and we were unable to recover it. 
00:30:05.912 [2024-07-14 04:02:24.571347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.912 [2024-07-14 04:02:24.571515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.912 [2024-07-14 04:02:24.571539] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.912 qpair failed and we were unable to recover it. 00:30:05.912 [2024-07-14 04:02:24.571683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.912 [2024-07-14 04:02:24.571851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.912 [2024-07-14 04:02:24.571882] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.912 qpair failed and we were unable to recover it. 00:30:05.912 [2024-07-14 04:02:24.572061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.912 [2024-07-14 04:02:24.572203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.912 [2024-07-14 04:02:24.572227] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.912 qpair failed and we were unable to recover it. 00:30:05.912 [2024-07-14 04:02:24.572375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.912 [2024-07-14 04:02:24.572517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.912 [2024-07-14 04:02:24.572542] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.912 qpair failed and we were unable to recover it. 00:30:05.912 [2024-07-14 04:02:24.572709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.912 [2024-07-14 04:02:24.572877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.912 [2024-07-14 04:02:24.572903] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.912 qpair failed and we were unable to recover it. 00:30:05.912 [2024-07-14 04:02:24.573079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.912 [2024-07-14 04:02:24.573253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.912 [2024-07-14 04:02:24.573279] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.912 qpair failed and we were unable to recover it. 00:30:05.912 [2024-07-14 04:02:24.573424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.912 [2024-07-14 04:02:24.573594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.912 [2024-07-14 04:02:24.573619] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.912 qpair failed and we were unable to recover it. 
00:30:05.912 [2024-07-14 04:02:24.573781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.912 [2024-07-14 04:02:24.573953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.913 [2024-07-14 04:02:24.573977] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.913 qpair failed and we were unable to recover it. 00:30:05.913 [2024-07-14 04:02:24.574120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.913 [2024-07-14 04:02:24.574295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.913 [2024-07-14 04:02:24.574320] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.913 qpair failed and we were unable to recover it. 00:30:05.913 [2024-07-14 04:02:24.574488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.913 [2024-07-14 04:02:24.574657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.913 [2024-07-14 04:02:24.574681] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.913 qpair failed and we were unable to recover it. 00:30:05.913 [2024-07-14 04:02:24.574832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.913 [2024-07-14 04:02:24.574991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.913 [2024-07-14 04:02:24.575016] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.913 qpair failed and we were unable to recover it. 00:30:05.913 [2024-07-14 04:02:24.575165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.913 [2024-07-14 04:02:24.575302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.913 [2024-07-14 04:02:24.575327] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.913 qpair failed and we were unable to recover it. 00:30:05.913 [2024-07-14 04:02:24.575468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.913 [2024-07-14 04:02:24.575613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.913 [2024-07-14 04:02:24.575637] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.913 qpair failed and we were unable to recover it. 00:30:05.913 [2024-07-14 04:02:24.575817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.913 [2024-07-14 04:02:24.575965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.913 [2024-07-14 04:02:24.575991] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.913 qpair failed and we were unable to recover it. 
00:30:05.913 [2024-07-14 04:02:24.576139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.913 [2024-07-14 04:02:24.576284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.913 [2024-07-14 04:02:24.576310] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.913 qpair failed and we were unable to recover it. 00:30:05.913 [2024-07-14 04:02:24.576503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.913 [2024-07-14 04:02:24.576677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.913 [2024-07-14 04:02:24.576701] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.913 qpair failed and we were unable to recover it. 00:30:05.913 [2024-07-14 04:02:24.576848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.913 [2024-07-14 04:02:24.577033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.913 [2024-07-14 04:02:24.577059] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.913 qpair failed and we were unable to recover it. 00:30:05.913 [2024-07-14 04:02:24.577208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.913 [2024-07-14 04:02:24.577349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.913 [2024-07-14 04:02:24.577373] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.913 qpair failed and we were unable to recover it. 00:30:05.913 [2024-07-14 04:02:24.577540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.913 [2024-07-14 04:02:24.577698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.913 [2024-07-14 04:02:24.577722] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.913 qpair failed and we were unable to recover it. 00:30:05.913 [2024-07-14 04:02:24.577880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.913 [2024-07-14 04:02:24.578029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.913 [2024-07-14 04:02:24.578054] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.913 qpair failed and we were unable to recover it. 00:30:05.913 [2024-07-14 04:02:24.578209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.913 [2024-07-14 04:02:24.578381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.913 [2024-07-14 04:02:24.578405] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.913 qpair failed and we were unable to recover it. 
00:30:05.913 [2024-07-14 04:02:24.578550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.913 [2024-07-14 04:02:24.578725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.913 [2024-07-14 04:02:24.578749] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.913 qpair failed and we were unable to recover it. 00:30:05.913 [2024-07-14 04:02:24.578903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.913 [2024-07-14 04:02:24.579088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.913 [2024-07-14 04:02:24.579113] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.913 qpair failed and we were unable to recover it. 00:30:05.913 [2024-07-14 04:02:24.579278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.913 [2024-07-14 04:02:24.579424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.913 [2024-07-14 04:02:24.579448] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.913 qpair failed and we were unable to recover it. 00:30:05.913 [2024-07-14 04:02:24.579625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.913 [2024-07-14 04:02:24.579786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.913 [2024-07-14 04:02:24.579811] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.913 qpair failed and we were unable to recover it. 00:30:05.913 [2024-07-14 04:02:24.579962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.913 [2024-07-14 04:02:24.580111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.913 [2024-07-14 04:02:24.580135] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.913 qpair failed and we were unable to recover it. 00:30:05.913 [2024-07-14 04:02:24.580313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.913 [2024-07-14 04:02:24.580511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.913 [2024-07-14 04:02:24.580535] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.913 qpair failed and we were unable to recover it. 00:30:05.913 [2024-07-14 04:02:24.580682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.913 [2024-07-14 04:02:24.580853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.913 [2024-07-14 04:02:24.580885] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.913 qpair failed and we were unable to recover it. 
00:30:05.913 [2024-07-14 04:02:24.581064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.913 [2024-07-14 04:02:24.581241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.913 [2024-07-14 04:02:24.581265] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.913 qpair failed and we were unable to recover it. 00:30:05.913 [2024-07-14 04:02:24.581445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.913 [2024-07-14 04:02:24.581591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.913 [2024-07-14 04:02:24.581615] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.913 qpair failed and we were unable to recover it. 00:30:05.913 [2024-07-14 04:02:24.581761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.913 [2024-07-14 04:02:24.581907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.913 [2024-07-14 04:02:24.581932] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.913 qpair failed and we were unable to recover it. 00:30:05.913 [2024-07-14 04:02:24.582089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.913 [2024-07-14 04:02:24.582240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.913 [2024-07-14 04:02:24.582265] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.913 qpair failed and we were unable to recover it. 00:30:05.914 [2024-07-14 04:02:24.582415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.914 [2024-07-14 04:02:24.582581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.914 [2024-07-14 04:02:24.582606] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.914 qpair failed and we were unable to recover it. 00:30:05.914 [2024-07-14 04:02:24.582811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.914 [2024-07-14 04:02:24.582958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.914 [2024-07-14 04:02:24.582984] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.914 qpair failed and we were unable to recover it. 00:30:05.914 [2024-07-14 04:02:24.583161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.914 [2024-07-14 04:02:24.583321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.914 [2024-07-14 04:02:24.583345] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.914 qpair failed and we were unable to recover it. 
00:30:05.914 [2024-07-14 04:02:24.583489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.914 [2024-07-14 04:02:24.583647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.914 [2024-07-14 04:02:24.583674] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.914 qpair failed and we were unable to recover it. 00:30:05.914 [2024-07-14 04:02:24.583827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.914 [2024-07-14 04:02:24.584030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.914 [2024-07-14 04:02:24.584055] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.914 qpair failed and we were unable to recover it. 00:30:05.914 [2024-07-14 04:02:24.584218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.914 [2024-07-14 04:02:24.584391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.914 [2024-07-14 04:02:24.584415] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.914 qpair failed and we were unable to recover it. 00:30:05.914 [2024-07-14 04:02:24.584567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.914 [2024-07-14 04:02:24.584715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.914 [2024-07-14 04:02:24.584740] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.914 qpair failed and we were unable to recover it. 00:30:05.914 [2024-07-14 04:02:24.584902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.914 [2024-07-14 04:02:24.585054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.914 [2024-07-14 04:02:24.585079] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.914 qpair failed and we were unable to recover it. 00:30:05.914 [2024-07-14 04:02:24.585249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.914 [2024-07-14 04:02:24.585395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.914 [2024-07-14 04:02:24.585420] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.914 qpair failed and we were unable to recover it. 00:30:05.914 [2024-07-14 04:02:24.585596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.914 [2024-07-14 04:02:24.585746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.914 [2024-07-14 04:02:24.585771] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.914 qpair failed and we were unable to recover it. 
00:30:05.914 [2024-07-14 04:02:24.585912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.914 [2024-07-14 04:02:24.586087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.914 [2024-07-14 04:02:24.586112] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.914 qpair failed and we were unable to recover it. 00:30:05.914 [2024-07-14 04:02:24.586263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.914 [2024-07-14 04:02:24.586408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.914 [2024-07-14 04:02:24.586432] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.914 qpair failed and we were unable to recover it. 00:30:05.914 [2024-07-14 04:02:24.586624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.914 [2024-07-14 04:02:24.586761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.914 [2024-07-14 04:02:24.586785] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.914 qpair failed and we were unable to recover it. 00:30:05.914 [2024-07-14 04:02:24.586933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.914 [2024-07-14 04:02:24.587103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.914 [2024-07-14 04:02:24.587129] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.914 qpair failed and we were unable to recover it. 00:30:05.914 [2024-07-14 04:02:24.587284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.914 [2024-07-14 04:02:24.587444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.914 [2024-07-14 04:02:24.587467] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.914 qpair failed and we were unable to recover it. 00:30:05.914 [2024-07-14 04:02:24.587639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.914 [2024-07-14 04:02:24.587821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.914 [2024-07-14 04:02:24.587846] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.914 qpair failed and we were unable to recover it. 00:30:05.914 [2024-07-14 04:02:24.587995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.914 [2024-07-14 04:02:24.588144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.914 [2024-07-14 04:02:24.588167] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.914 qpair failed and we were unable to recover it. 
00:30:05.914 [2024-07-14 04:02:24.588335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.914 [2024-07-14 04:02:24.588481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.914 [2024-07-14 04:02:24.588505] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.914 qpair failed and we were unable to recover it. 00:30:05.914 [2024-07-14 04:02:24.588682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.914 [2024-07-14 04:02:24.588827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.914 [2024-07-14 04:02:24.588858] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.914 qpair failed and we were unable to recover it. 00:30:05.914 [2024-07-14 04:02:24.589025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.914 [2024-07-14 04:02:24.589200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.914 [2024-07-14 04:02:24.589224] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.914 qpair failed and we were unable to recover it. 00:30:05.914 [2024-07-14 04:02:24.589387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.914 [2024-07-14 04:02:24.589587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.914 [2024-07-14 04:02:24.589611] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.914 qpair failed and we were unable to recover it. 00:30:05.914 [2024-07-14 04:02:24.589759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.914 [2024-07-14 04:02:24.589922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.914 [2024-07-14 04:02:24.589947] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.914 qpair failed and we were unable to recover it. 00:30:05.914 [2024-07-14 04:02:24.590122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.914 [2024-07-14 04:02:24.590277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.914 [2024-07-14 04:02:24.590301] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.914 qpair failed and we were unable to recover it. 00:30:05.914 [2024-07-14 04:02:24.590473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.914 [2024-07-14 04:02:24.590616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.914 [2024-07-14 04:02:24.590640] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.914 qpair failed and we were unable to recover it. 
00:30:05.914 [2024-07-14 04:02:24.590822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.914 [2024-07-14 04:02:24.590964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.914 [2024-07-14 04:02:24.590988] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.914 qpair failed and we were unable to recover it. 00:30:05.914 [2024-07-14 04:02:24.591138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.914 [2024-07-14 04:02:24.591309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.914 [2024-07-14 04:02:24.591334] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.914 qpair failed and we were unable to recover it. 00:30:05.914 [2024-07-14 04:02:24.591512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.914 [2024-07-14 04:02:24.591685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.914 [2024-07-14 04:02:24.591708] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.914 qpair failed and we were unable to recover it. 00:30:05.914 [2024-07-14 04:02:24.591860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.914 [2024-07-14 04:02:24.592064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.914 [2024-07-14 04:02:24.592089] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.914 qpair failed and we were unable to recover it. 00:30:05.914 [2024-07-14 04:02:24.592240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.914 [2024-07-14 04:02:24.592393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.914 [2024-07-14 04:02:24.592421] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.914 qpair failed and we were unable to recover it. 00:30:05.914 [2024-07-14 04:02:24.592590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.914 [2024-07-14 04:02:24.592767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.914 [2024-07-14 04:02:24.592793] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.914 qpair failed and we were unable to recover it. 00:30:05.915 [2024-07-14 04:02:24.592942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.915 [2024-07-14 04:02:24.593146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.915 [2024-07-14 04:02:24.593170] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.915 qpair failed and we were unable to recover it. 
00:30:05.915 [2024-07-14 04:02:24.593334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.915 [2024-07-14 04:02:24.593484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.915 [2024-07-14 04:02:24.593508] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.915 qpair failed and we were unable to recover it. 00:30:05.915 [2024-07-14 04:02:24.593645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.915 [2024-07-14 04:02:24.593782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.915 [2024-07-14 04:02:24.593806] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.915 qpair failed and we were unable to recover it. 00:30:05.915 [2024-07-14 04:02:24.593988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.915 [2024-07-14 04:02:24.594134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.915 [2024-07-14 04:02:24.594158] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.915 qpair failed and we were unable to recover it. 00:30:05.915 [2024-07-14 04:02:24.594326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.915 [2024-07-14 04:02:24.594479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.915 [2024-07-14 04:02:24.594505] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.915 qpair failed and we were unable to recover it. 00:30:05.915 [2024-07-14 04:02:24.594691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.915 [2024-07-14 04:02:24.594841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.915 [2024-07-14 04:02:24.594869] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.915 qpair failed and we were unable to recover it. 00:30:05.915 [2024-07-14 04:02:24.595013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.915 [2024-07-14 04:02:24.595194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.915 [2024-07-14 04:02:24.595219] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.915 qpair failed and we were unable to recover it. 00:30:05.915 [2024-07-14 04:02:24.595371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.915 [2024-07-14 04:02:24.595546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.915 [2024-07-14 04:02:24.595570] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.915 qpair failed and we were unable to recover it. 
00:30:05.915 [2024-07-14 04:02:24.595711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.915 [2024-07-14 04:02:24.595851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.915 [2024-07-14 04:02:24.595897] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.915 qpair failed and we were unable to recover it. 00:30:05.915 [2024-07-14 04:02:24.596074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.915 [2024-07-14 04:02:24.596218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.915 [2024-07-14 04:02:24.596243] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.915 qpair failed and we were unable to recover it. 00:30:05.915 [2024-07-14 04:02:24.596390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.915 [2024-07-14 04:02:24.596564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.915 [2024-07-14 04:02:24.596588] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.915 qpair failed and we were unable to recover it. 00:30:05.915 [2024-07-14 04:02:24.596746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.915 [2024-07-14 04:02:24.596921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.915 [2024-07-14 04:02:24.596946] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.915 qpair failed and we were unable to recover it. 00:30:05.915 [2024-07-14 04:02:24.597112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.915 [2024-07-14 04:02:24.597248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.915 [2024-07-14 04:02:24.597273] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.915 qpair failed and we were unable to recover it. 00:30:05.915 [2024-07-14 04:02:24.597422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.915 [2024-07-14 04:02:24.597572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.915 [2024-07-14 04:02:24.597596] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.915 qpair failed and we were unable to recover it. 00:30:05.915 [2024-07-14 04:02:24.597770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.915 [2024-07-14 04:02:24.597937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.915 [2024-07-14 04:02:24.597963] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.915 qpair failed and we were unable to recover it. 
00:30:05.915 [2024-07-14 04:02:24.598136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.915 [2024-07-14 04:02:24.598282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.915 [2024-07-14 04:02:24.598305] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.915 qpair failed and we were unable to recover it. 00:30:05.915 [2024-07-14 04:02:24.598510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.915 [2024-07-14 04:02:24.598657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.915 [2024-07-14 04:02:24.598681] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.915 qpair failed and we were unable to recover it. 00:30:05.915 [2024-07-14 04:02:24.598825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.915 [2024-07-14 04:02:24.599035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.915 [2024-07-14 04:02:24.599061] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.915 qpair failed and we were unable to recover it. 00:30:05.915 [2024-07-14 04:02:24.599204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.915 [2024-07-14 04:02:24.599346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.915 [2024-07-14 04:02:24.599373] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.915 qpair failed and we were unable to recover it. 00:30:05.915 [2024-07-14 04:02:24.599550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.915 [2024-07-14 04:02:24.599700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.915 [2024-07-14 04:02:24.599724] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.915 qpair failed and we were unable to recover it. 00:30:05.915 [2024-07-14 04:02:24.599873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.915 [2024-07-14 04:02:24.600017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.915 [2024-07-14 04:02:24.600042] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.915 qpair failed and we were unable to recover it. 00:30:05.915 [2024-07-14 04:02:24.600209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.915 [2024-07-14 04:02:24.600382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.915 [2024-07-14 04:02:24.600405] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.915 qpair failed and we were unable to recover it. 
00:30:05.915 [2024-07-14 04:02:24.600551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.915 [2024-07-14 04:02:24.600714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.915 [2024-07-14 04:02:24.600737] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.915 qpair failed and we were unable to recover it. 00:30:05.915 [2024-07-14 04:02:24.600910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.915 [2024-07-14 04:02:24.601068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.915 [2024-07-14 04:02:24.601092] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.915 qpair failed and we were unable to recover it. 00:30:05.915 [2024-07-14 04:02:24.601258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.915 [2024-07-14 04:02:24.601423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.915 [2024-07-14 04:02:24.601447] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.915 qpair failed and we were unable to recover it. 00:30:05.915 [2024-07-14 04:02:24.601622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.915 [2024-07-14 04:02:24.601766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.915 [2024-07-14 04:02:24.601790] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.915 qpair failed and we were unable to recover it. 00:30:05.915 [2024-07-14 04:02:24.601932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.915 [2024-07-14 04:02:24.602106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.915 [2024-07-14 04:02:24.602130] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.915 qpair failed and we were unable to recover it. 00:30:05.915 [2024-07-14 04:02:24.602299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.915 [2024-07-14 04:02:24.602459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.915 [2024-07-14 04:02:24.602483] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.915 qpair failed and we were unable to recover it. 00:30:05.915 [2024-07-14 04:02:24.602625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.915 [2024-07-14 04:02:24.602794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.915 [2024-07-14 04:02:24.602818] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.915 qpair failed and we were unable to recover it. 
00:30:05.915 [2024-07-14 04:02:24.602999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.915 [2024-07-14 04:02:24.603170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.915 [2024-07-14 04:02:24.603194] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.915 qpair failed and we were unable to recover it. 00:30:05.915 [2024-07-14 04:02:24.603366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.916 [2024-07-14 04:02:24.603542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.916 [2024-07-14 04:02:24.603566] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.916 qpair failed and we were unable to recover it. 00:30:05.916 [2024-07-14 04:02:24.603730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.916 [2024-07-14 04:02:24.603874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.916 [2024-07-14 04:02:24.603899] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.916 qpair failed and we were unable to recover it. 00:30:05.916 [2024-07-14 04:02:24.604071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.916 [2024-07-14 04:02:24.604219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.916 [2024-07-14 04:02:24.604244] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.916 qpair failed and we were unable to recover it. 00:30:05.916 [2024-07-14 04:02:24.604388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.916 [2024-07-14 04:02:24.604532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.916 [2024-07-14 04:02:24.604557] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.916 qpair failed and we were unable to recover it. 00:30:05.916 [2024-07-14 04:02:24.604751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.916 [2024-07-14 04:02:24.604894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.916 [2024-07-14 04:02:24.604919] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.916 qpair failed and we were unable to recover it. 00:30:05.916 [2024-07-14 04:02:24.605077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.916 [2024-07-14 04:02:24.605245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.916 [2024-07-14 04:02:24.605269] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.916 qpair failed and we were unable to recover it. 
00:30:05.916 [2024-07-14 04:02:24.605431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.916 [2024-07-14 04:02:24.605615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.916 [2024-07-14 04:02:24.605641] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.916 qpair failed and we were unable to recover it. 00:30:05.916 [2024-07-14 04:02:24.605816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.916 [2024-07-14 04:02:24.605983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.916 [2024-07-14 04:02:24.606008] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.916 qpair failed and we were unable to recover it. 00:30:05.916 [2024-07-14 04:02:24.606152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.916 [2024-07-14 04:02:24.606330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.916 [2024-07-14 04:02:24.606353] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.916 qpair failed and we were unable to recover it. 00:30:05.916 [2024-07-14 04:02:24.606511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.916 [2024-07-14 04:02:24.606656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.916 [2024-07-14 04:02:24.606681] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.916 qpair failed and we were unable to recover it. 00:30:05.916 [2024-07-14 04:02:24.606823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.916 [2024-07-14 04:02:24.606980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.916 [2024-07-14 04:02:24.607004] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.916 qpair failed and we were unable to recover it. 00:30:05.916 [2024-07-14 04:02:24.607158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.916 [2024-07-14 04:02:24.607338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.916 [2024-07-14 04:02:24.607363] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.916 qpair failed and we were unable to recover it. 00:30:05.916 [2024-07-14 04:02:24.607531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.916 [2024-07-14 04:02:24.607723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.916 [2024-07-14 04:02:24.607747] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.916 qpair failed and we were unable to recover it. 
00:30:05.916 [2024-07-14 04:02:24.607926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.916 [2024-07-14 04:02:24.608073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.916 [2024-07-14 04:02:24.608097] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.916 qpair failed and we were unable to recover it. 00:30:05.916 [2024-07-14 04:02:24.608248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.916 [2024-07-14 04:02:24.608413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.916 [2024-07-14 04:02:24.608438] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.916 qpair failed and we were unable to recover it. 00:30:05.916 [2024-07-14 04:02:24.608602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.916 [2024-07-14 04:02:24.608786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.916 [2024-07-14 04:02:24.608810] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.916 qpair failed and we were unable to recover it. 00:30:05.916 [2024-07-14 04:02:24.608978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.916 [2024-07-14 04:02:24.609179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.916 [2024-07-14 04:02:24.609203] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.916 qpair failed and we were unable to recover it. 00:30:05.916 [2024-07-14 04:02:24.609352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.916 [2024-07-14 04:02:24.609504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.916 [2024-07-14 04:02:24.609529] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.916 qpair failed and we were unable to recover it. 00:30:05.916 [2024-07-14 04:02:24.609719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.916 [2024-07-14 04:02:24.609916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.916 [2024-07-14 04:02:24.609941] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.916 qpair failed and we were unable to recover it. 00:30:05.916 [2024-07-14 04:02:24.610106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.916 [2024-07-14 04:02:24.610254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.916 [2024-07-14 04:02:24.610278] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.916 qpair failed and we were unable to recover it. 
00:30:05.916 [2024-07-14 04:02:24.610418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.916 [2024-07-14 04:02:24.610567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.916 [2024-07-14 04:02:24.610591] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.916 qpair failed and we were unable to recover it. 00:30:05.916 [2024-07-14 04:02:24.610745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.916 [2024-07-14 04:02:24.610947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.916 [2024-07-14 04:02:24.610972] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.916 qpair failed and we were unable to recover it. 00:30:05.916 [2024-07-14 04:02:24.611137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.916 [2024-07-14 04:02:24.611338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.916 [2024-07-14 04:02:24.611362] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.916 qpair failed and we were unable to recover it. 00:30:05.916 [2024-07-14 04:02:24.611513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.916 [2024-07-14 04:02:24.611658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.916 [2024-07-14 04:02:24.611682] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.916 qpair failed and we were unable to recover it. 00:30:05.916 [2024-07-14 04:02:24.611830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.916 [2024-07-14 04:02:24.611973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.916 [2024-07-14 04:02:24.611998] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.916 qpair failed and we were unable to recover it. 00:30:05.916 [2024-07-14 04:02:24.612147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.916 [2024-07-14 04:02:24.612325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.916 [2024-07-14 04:02:24.612349] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.916 qpair failed and we were unable to recover it. 00:30:05.916 [2024-07-14 04:02:24.612493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.916 [2024-07-14 04:02:24.612638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.916 [2024-07-14 04:02:24.612662] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.916 qpair failed and we were unable to recover it. 
00:30:05.916 [2024-07-14 04:02:24.612832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.916 [2024-07-14 04:02:24.613007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.916 [2024-07-14 04:02:24.613031] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.916 qpair failed and we were unable to recover it. 00:30:05.916 [2024-07-14 04:02:24.613171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.916 [2024-07-14 04:02:24.613340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.916 [2024-07-14 04:02:24.613364] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.916 qpair failed and we were unable to recover it. 00:30:05.916 [2024-07-14 04:02:24.613524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.916 [2024-07-14 04:02:24.613699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.916 [2024-07-14 04:02:24.613724] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.916 qpair failed and we were unable to recover it. 00:30:05.917 [2024-07-14 04:02:24.613895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.917 [2024-07-14 04:02:24.614068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.917 [2024-07-14 04:02:24.614093] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.917 qpair failed and we were unable to recover it. 00:30:05.917 [2024-07-14 04:02:24.614283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.917 [2024-07-14 04:02:24.614427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.917 [2024-07-14 04:02:24.614452] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.917 qpair failed and we were unable to recover it. 00:30:05.917 [2024-07-14 04:02:24.614615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.917 [2024-07-14 04:02:24.614817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.917 [2024-07-14 04:02:24.614842] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.917 qpair failed and we were unable to recover it. 00:30:05.917 [2024-07-14 04:02:24.615000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.917 [2024-07-14 04:02:24.615173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.917 [2024-07-14 04:02:24.615197] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.917 qpair failed and we were unable to recover it. 
00:30:05.917 [2024-07-14 04:02:24.615369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.917 [2024-07-14 04:02:24.615526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.917 [2024-07-14 04:02:24.615550] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.917 qpair failed and we were unable to recover it. 00:30:05.917 [2024-07-14 04:02:24.615703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.917 [2024-07-14 04:02:24.615858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.917 [2024-07-14 04:02:24.615888] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.917 qpair failed and we were unable to recover it. 00:30:05.917 [2024-07-14 04:02:24.616065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.917 [2024-07-14 04:02:24.616207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.917 [2024-07-14 04:02:24.616232] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.917 qpair failed and we were unable to recover it. 00:30:05.917 [2024-07-14 04:02:24.616381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.917 [2024-07-14 04:02:24.616527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.917 [2024-07-14 04:02:24.616552] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.917 qpair failed and we were unable to recover it. 00:30:05.917 [2024-07-14 04:02:24.616700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.917 [2024-07-14 04:02:24.616848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.917 [2024-07-14 04:02:24.616880] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.917 qpair failed and we were unable to recover it. 00:30:05.917 [2024-07-14 04:02:24.617041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.917 [2024-07-14 04:02:24.617185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.917 [2024-07-14 04:02:24.617210] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.917 qpair failed and we were unable to recover it. 00:30:05.917 [2024-07-14 04:02:24.617378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.917 [2024-07-14 04:02:24.617519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.917 [2024-07-14 04:02:24.617544] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.917 qpair failed and we were unable to recover it. 
00:30:05.917 [2024-07-14 04:02:24.617691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.917 [2024-07-14 04:02:24.617838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.917 [2024-07-14 04:02:24.617862] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.917 qpair failed and we were unable to recover it. 00:30:05.917 [2024-07-14 04:02:24.618057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.917 [2024-07-14 04:02:24.618218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.917 [2024-07-14 04:02:24.618242] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.917 qpair failed and we were unable to recover it. 00:30:05.917 [2024-07-14 04:02:24.618415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.917 [2024-07-14 04:02:24.618554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.917 [2024-07-14 04:02:24.618579] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.917 qpair failed and we were unable to recover it. 00:30:05.917 [2024-07-14 04:02:24.618770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.917 [2024-07-14 04:02:24.618912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.917 [2024-07-14 04:02:24.618938] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.917 qpair failed and we were unable to recover it. 00:30:05.917 [2024-07-14 04:02:24.619089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.917 [2024-07-14 04:02:24.619237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.917 [2024-07-14 04:02:24.619261] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.917 qpair failed and we were unable to recover it. 00:30:05.917 [2024-07-14 04:02:24.619430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.917 [2024-07-14 04:02:24.619580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.917 [2024-07-14 04:02:24.619605] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.917 qpair failed and we were unable to recover it. 00:30:05.917 [2024-07-14 04:02:24.619754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.917 [2024-07-14 04:02:24.619907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.917 [2024-07-14 04:02:24.619933] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.917 qpair failed and we were unable to recover it. 
00:30:05.917 [2024-07-14 04:02:24.620107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.917 [2024-07-14 04:02:24.620285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.917 [2024-07-14 04:02:24.620310] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.917 qpair failed and we were unable to recover it. 00:30:05.917 [2024-07-14 04:02:24.620457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.917 [2024-07-14 04:02:24.620604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.917 [2024-07-14 04:02:24.620628] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.917 qpair failed and we were unable to recover it. 00:30:05.917 [2024-07-14 04:02:24.620770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.917 [2024-07-14 04:02:24.620940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.917 [2024-07-14 04:02:24.620965] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.917 qpair failed and we were unable to recover it. 00:30:05.917 [2024-07-14 04:02:24.621113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.917 [2024-07-14 04:02:24.621252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.917 [2024-07-14 04:02:24.621277] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.917 qpair failed and we were unable to recover it. 00:30:05.917 [2024-07-14 04:02:24.621468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.917 [2024-07-14 04:02:24.621660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.917 [2024-07-14 04:02:24.621685] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.917 qpair failed and we were unable to recover it. 00:30:05.917 [2024-07-14 04:02:24.621889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.917 [2024-07-14 04:02:24.622036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.917 [2024-07-14 04:02:24.622060] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.917 qpair failed and we were unable to recover it. 00:30:05.917 [2024-07-14 04:02:24.622236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.917 [2024-07-14 04:02:24.622385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.917 [2024-07-14 04:02:24.622410] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.917 qpair failed and we were unable to recover it. 
00:30:05.917 [2024-07-14 04:02:24.622555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.917 [2024-07-14 04:02:24.622701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.917 [2024-07-14 04:02:24.622725] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.917 qpair failed and we were unable to recover it. 00:30:05.917 [2024-07-14 04:02:24.622902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.917 [2024-07-14 04:02:24.623082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.917 [2024-07-14 04:02:24.623106] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.917 qpair failed and we were unable to recover it. 00:30:05.917 [2024-07-14 04:02:24.623268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.917 [2024-07-14 04:02:24.623441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.917 [2024-07-14 04:02:24.623467] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.917 qpair failed and we were unable to recover it. 00:30:05.917 [2024-07-14 04:02:24.623659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.917 [2024-07-14 04:02:24.623859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.917 [2024-07-14 04:02:24.623890] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.917 qpair failed and we were unable to recover it. 00:30:05.917 [2024-07-14 04:02:24.624045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.917 [2024-07-14 04:02:24.624223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.917 [2024-07-14 04:02:24.624248] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.917 qpair failed and we were unable to recover it. 00:30:05.917 [2024-07-14 04:02:24.624438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.918 [2024-07-14 04:02:24.624611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.918 [2024-07-14 04:02:24.624635] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.918 qpair failed and we were unable to recover it. 00:30:05.918 [2024-07-14 04:02:24.624808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.918 [2024-07-14 04:02:24.624969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.918 [2024-07-14 04:02:24.624994] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.918 qpair failed and we were unable to recover it. 
00:30:05.918 [2024-07-14 04:02:24.625164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.918 [2024-07-14 04:02:24.625368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.918 [2024-07-14 04:02:24.625393] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.918 qpair failed and we were unable to recover it. 00:30:05.918 [2024-07-14 04:02:24.625573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.918 [2024-07-14 04:02:24.625775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.918 [2024-07-14 04:02:24.625800] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.918 qpair failed and we were unable to recover it. 00:30:05.918 [2024-07-14 04:02:24.625975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.918 [2024-07-14 04:02:24.626124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.918 [2024-07-14 04:02:24.626148] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.918 qpair failed and we were unable to recover it. 00:30:05.918 [2024-07-14 04:02:24.626289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.918 [2024-07-14 04:02:24.626466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.918 [2024-07-14 04:02:24.626489] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.918 qpair failed and we were unable to recover it. 00:30:05.918 [2024-07-14 04:02:24.626637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.918 [2024-07-14 04:02:24.626813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.918 [2024-07-14 04:02:24.626837] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.918 qpair failed and we were unable to recover it. 00:30:05.918 [2024-07-14 04:02:24.626995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.918 [2024-07-14 04:02:24.627148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.918 [2024-07-14 04:02:24.627172] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.918 qpair failed and we were unable to recover it. 00:30:05.918 [2024-07-14 04:02:24.627351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.918 [2024-07-14 04:02:24.627505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.918 [2024-07-14 04:02:24.627529] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.918 qpair failed and we were unable to recover it. 
00:30:05.918 [2024-07-14 04:02:24.627740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.918 [2024-07-14 04:02:24.627897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.918 [2024-07-14 04:02:24.627922] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.918 qpair failed and we were unable to recover it. 00:30:05.918 [2024-07-14 04:02:24.628100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.918 [2024-07-14 04:02:24.628238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.918 [2024-07-14 04:02:24.628262] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.918 qpair failed and we were unable to recover it. 00:30:05.918 [2024-07-14 04:02:24.628438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.918 [2024-07-14 04:02:24.628583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.918 [2024-07-14 04:02:24.628608] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.918 qpair failed and we were unable to recover it. 00:30:05.918 [2024-07-14 04:02:24.628787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.918 [2024-07-14 04:02:24.628960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.918 [2024-07-14 04:02:24.628985] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.918 qpair failed and we were unable to recover it. 00:30:05.918 [2024-07-14 04:02:24.629157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.918 [2024-07-14 04:02:24.629294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.918 [2024-07-14 04:02:24.629319] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.918 qpair failed and we were unable to recover it. 00:30:05.918 [2024-07-14 04:02:24.629489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.918 [2024-07-14 04:02:24.629692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.918 [2024-07-14 04:02:24.629717] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.918 qpair failed and we were unable to recover it. 00:30:05.918 [2024-07-14 04:02:24.629892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.918 [2024-07-14 04:02:24.630072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.918 [2024-07-14 04:02:24.630095] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.918 qpair failed and we were unable to recover it. 
00:30:05.918 [2024-07-14 04:02:24.630274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.918 [2024-07-14 04:02:24.630443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.918 [2024-07-14 04:02:24.630467] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.918 qpair failed and we were unable to recover it. 00:30:05.918 [2024-07-14 04:02:24.630618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.918 [2024-07-14 04:02:24.630788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.918 [2024-07-14 04:02:24.630812] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.918 qpair failed and we were unable to recover it. 00:30:05.918 [2024-07-14 04:02:24.630995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.918 [2024-07-14 04:02:24.631158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.918 [2024-07-14 04:02:24.631183] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.918 qpair failed and we were unable to recover it. 00:30:05.918 [2024-07-14 04:02:24.631330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.918 [2024-07-14 04:02:24.631494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.918 [2024-07-14 04:02:24.631519] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.918 qpair failed and we were unable to recover it. 00:30:05.918 [2024-07-14 04:02:24.631701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.918 [2024-07-14 04:02:24.631892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.918 [2024-07-14 04:02:24.631917] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.918 qpair failed and we were unable to recover it. 00:30:05.918 [2024-07-14 04:02:24.632123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.918 [2024-07-14 04:02:24.632273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.918 [2024-07-14 04:02:24.632299] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.918 qpair failed and we were unable to recover it. 00:30:05.918 [2024-07-14 04:02:24.632464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.918 [2024-07-14 04:02:24.632644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.918 [2024-07-14 04:02:24.632669] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.918 qpair failed and we were unable to recover it. 
00:30:05.918 [2024-07-14 04:02:24.632853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.918 [2024-07-14 04:02:24.633006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.918 [2024-07-14 04:02:24.633033] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.918 qpair failed and we were unable to recover it. 00:30:05.918 [2024-07-14 04:02:24.633189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.918 [2024-07-14 04:02:24.633362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.918 [2024-07-14 04:02:24.633386] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.918 qpair failed and we were unable to recover it. 00:30:05.918 [2024-07-14 04:02:24.633534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.918 [2024-07-14 04:02:24.633678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.918 [2024-07-14 04:02:24.633702] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.918 qpair failed and we were unable to recover it. 00:30:05.918 [2024-07-14 04:02:24.633861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.918 [2024-07-14 04:02:24.634029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.919 [2024-07-14 04:02:24.634054] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.919 qpair failed and we were unable to recover it. 00:30:05.919 [2024-07-14 04:02:24.634201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.919 [2024-07-14 04:02:24.634381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.919 [2024-07-14 04:02:24.634405] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.919 qpair failed and we were unable to recover it. 00:30:05.919 [2024-07-14 04:02:24.634587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.919 [2024-07-14 04:02:24.634763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.919 [2024-07-14 04:02:24.634788] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.919 qpair failed and we were unable to recover it. 00:30:05.919 [2024-07-14 04:02:24.634937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.919 [2024-07-14 04:02:24.635107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.919 [2024-07-14 04:02:24.635132] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.919 qpair failed and we were unable to recover it. 
00:30:05.919 [2024-07-14 04:02:24.635280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.919 [2024-07-14 04:02:24.635441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.919 [2024-07-14 04:02:24.635466] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.919 qpair failed and we were unable to recover it. 00:30:05.919 [2024-07-14 04:02:24.635607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.919 [2024-07-14 04:02:24.635782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.919 [2024-07-14 04:02:24.635806] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.919 qpair failed and we were unable to recover it. 00:30:05.919 [2024-07-14 04:02:24.635971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.919 [2024-07-14 04:02:24.636114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.919 [2024-07-14 04:02:24.636140] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.919 qpair failed and we were unable to recover it. 00:30:05.919 [2024-07-14 04:02:24.636298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.919 [2024-07-14 04:02:24.636447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.919 [2024-07-14 04:02:24.636470] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.919 qpair failed and we were unable to recover it. 00:30:05.919 [2024-07-14 04:02:24.636626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.919 [2024-07-14 04:02:24.636761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.919 [2024-07-14 04:02:24.636785] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.919 qpair failed and we were unable to recover it. 00:30:05.919 [2024-07-14 04:02:24.636961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.919 [2024-07-14 04:02:24.637103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.919 [2024-07-14 04:02:24.637127] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.919 qpair failed and we were unable to recover it. 00:30:05.919 [2024-07-14 04:02:24.637296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.919 [2024-07-14 04:02:24.637468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.919 [2024-07-14 04:02:24.637492] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.919 qpair failed and we were unable to recover it. 
00:30:05.919 [2024-07-14 04:02:24.637643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.919 [2024-07-14 04:02:24.637784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.919 [2024-07-14 04:02:24.637808] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.919 qpair failed and we were unable to recover it. 00:30:05.919 [2024-07-14 04:02:24.637986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.919 [2024-07-14 04:02:24.638180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.919 [2024-07-14 04:02:24.638205] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.919 qpair failed and we were unable to recover it. 00:30:05.919 [2024-07-14 04:02:24.638355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.919 [2024-07-14 04:02:24.638534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.919 [2024-07-14 04:02:24.638558] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.919 qpair failed and we were unable to recover it. 00:30:05.919 [2024-07-14 04:02:24.638739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.919 [2024-07-14 04:02:24.638889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.919 [2024-07-14 04:02:24.638913] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.919 qpair failed and we were unable to recover it. 00:30:05.919 [2024-07-14 04:02:24.639078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.919 [2024-07-14 04:02:24.639256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.919 [2024-07-14 04:02:24.639281] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.919 qpair failed and we were unable to recover it. 00:30:05.919 [2024-07-14 04:02:24.639476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.919 [2024-07-14 04:02:24.639651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.919 [2024-07-14 04:02:24.639675] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.919 qpair failed and we were unable to recover it. 00:30:05.919 [2024-07-14 04:02:24.639877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.919 [2024-07-14 04:02:24.640025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.919 [2024-07-14 04:02:24.640049] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.919 qpair failed and we were unable to recover it. 
00:30:05.919 [2024-07-14 04:02:24.640199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.919 [2024-07-14 04:02:24.640373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.919 [2024-07-14 04:02:24.640397] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.919 qpair failed and we were unable to recover it. 00:30:05.919 [2024-07-14 04:02:24.640575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.919 [2024-07-14 04:02:24.640744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.919 [2024-07-14 04:02:24.640768] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.919 qpair failed and we were unable to recover it. 00:30:05.919 [2024-07-14 04:02:24.640946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.919 [2024-07-14 04:02:24.641121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.919 [2024-07-14 04:02:24.641146] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.919 qpair failed and we were unable to recover it. 00:30:05.919 [2024-07-14 04:02:24.641303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.919 [2024-07-14 04:02:24.641470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.919 [2024-07-14 04:02:24.641493] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.919 qpair failed and we were unable to recover it. 00:30:05.919 [2024-07-14 04:02:24.641668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.919 [2024-07-14 04:02:24.641811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.919 [2024-07-14 04:02:24.641835] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.919 qpair failed and we were unable to recover it. 00:30:05.919 [2024-07-14 04:02:24.642003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.919 [2024-07-14 04:02:24.642150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.919 [2024-07-14 04:02:24.642174] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.919 qpair failed and we were unable to recover it. 00:30:05.919 [2024-07-14 04:02:24.642323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.919 [2024-07-14 04:02:24.642512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.919 [2024-07-14 04:02:24.642536] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.919 qpair failed and we were unable to recover it. 
00:30:05.919 [2024-07-14 04:02:24.642715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.919 [2024-07-14 04:02:24.642858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.919 [2024-07-14 04:02:24.642888] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.919 qpair failed and we were unable to recover it. 00:30:05.919 [2024-07-14 04:02:24.643069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.919 [2024-07-14 04:02:24.643213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.919 [2024-07-14 04:02:24.643237] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.919 qpair failed and we were unable to recover it. 00:30:05.919 [2024-07-14 04:02:24.643384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.919 [2024-07-14 04:02:24.643533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.919 [2024-07-14 04:02:24.643557] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.919 qpair failed and we were unable to recover it. 00:30:05.919 [2024-07-14 04:02:24.643703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.919 [2024-07-14 04:02:24.643844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.919 [2024-07-14 04:02:24.643874] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.919 qpair failed and we were unable to recover it. 00:30:05.919 [2024-07-14 04:02:24.644031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.919 [2024-07-14 04:02:24.644175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.919 [2024-07-14 04:02:24.644201] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.919 qpair failed and we were unable to recover it. 00:30:05.919 [2024-07-14 04:02:24.644343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.919 [2024-07-14 04:02:24.644516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.919 [2024-07-14 04:02:24.644540] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.919 qpair failed and we were unable to recover it. 00:30:05.920 [2024-07-14 04:02:24.644692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.920 [2024-07-14 04:02:24.644861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.920 [2024-07-14 04:02:24.644893] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.920 qpair failed and we were unable to recover it. 
00:30:05.920 [2024-07-14 04:02:24.645037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.920 [2024-07-14 04:02:24.645197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.920 [2024-07-14 04:02:24.645222] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.920 qpair failed and we were unable to recover it. 00:30:05.920 [2024-07-14 04:02:24.645367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.920 [2024-07-14 04:02:24.645533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.920 [2024-07-14 04:02:24.645560] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.920 qpair failed and we were unable to recover it. 00:30:05.920 [2024-07-14 04:02:24.645727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.920 [2024-07-14 04:02:24.645891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.920 [2024-07-14 04:02:24.645918] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.920 qpair failed and we were unable to recover it. 00:30:05.920 [2024-07-14 04:02:24.646082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.920 [2024-07-14 04:02:24.646246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.920 [2024-07-14 04:02:24.646270] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.920 qpair failed and we were unable to recover it. 00:30:05.920 [2024-07-14 04:02:24.646465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.920 [2024-07-14 04:02:24.646654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.920 [2024-07-14 04:02:24.646678] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.920 qpair failed and we were unable to recover it. 00:30:05.920 [2024-07-14 04:02:24.646849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.920 [2024-07-14 04:02:24.647027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.920 [2024-07-14 04:02:24.647052] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.920 qpair failed and we were unable to recover it. 00:30:05.920 [2024-07-14 04:02:24.647221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.920 [2024-07-14 04:02:24.647398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.920 [2024-07-14 04:02:24.647422] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.920 qpair failed and we were unable to recover it. 
00:30:05.920 [2024-07-14 04:02:24.647592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.920 [2024-07-14 04:02:24.647753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.920 [2024-07-14 04:02:24.647777] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.920 qpair failed and we were unable to recover it. 00:30:05.920 [2024-07-14 04:02:24.647954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.920 [2024-07-14 04:02:24.648130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.920 [2024-07-14 04:02:24.648154] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.920 qpair failed and we were unable to recover it. 00:30:05.920 [2024-07-14 04:02:24.648333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.920 [2024-07-14 04:02:24.648509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.920 [2024-07-14 04:02:24.648534] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.920 qpair failed and we were unable to recover it. 00:30:05.920 [2024-07-14 04:02:24.648695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.920 [2024-07-14 04:02:24.648874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.920 [2024-07-14 04:02:24.648898] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.920 qpair failed and we were unable to recover it. 00:30:05.920 [2024-07-14 04:02:24.649095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.920 [2024-07-14 04:02:24.649237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.920 [2024-07-14 04:02:24.649266] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.920 qpair failed and we were unable to recover it. 00:30:05.920 [2024-07-14 04:02:24.649430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.920 [2024-07-14 04:02:24.649601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.920 [2024-07-14 04:02:24.649626] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.920 qpair failed and we were unable to recover it. 00:30:05.920 [2024-07-14 04:02:24.649770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.920 [2024-07-14 04:02:24.649923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.920 [2024-07-14 04:02:24.649948] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.920 qpair failed and we were unable to recover it. 
00:30:05.920 [2024-07-14 04:02:24.650100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.920 [2024-07-14 04:02:24.650267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.920 [2024-07-14 04:02:24.650292] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.920 qpair failed and we were unable to recover it. 00:30:05.920 [2024-07-14 04:02:24.650433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.920 [2024-07-14 04:02:24.650581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.920 [2024-07-14 04:02:24.650606] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.920 qpair failed and we were unable to recover it. 00:30:05.920 [2024-07-14 04:02:24.650744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.920 [2024-07-14 04:02:24.650921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.920 [2024-07-14 04:02:24.650945] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.920 qpair failed and we were unable to recover it. 00:30:05.920 [2024-07-14 04:02:24.651096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.920 [2024-07-14 04:02:24.651268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.920 [2024-07-14 04:02:24.651294] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.920 qpair failed and we were unable to recover it. 00:30:05.920 [2024-07-14 04:02:24.651463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.920 [2024-07-14 04:02:24.651607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.920 [2024-07-14 04:02:24.651632] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.920 qpair failed and we were unable to recover it. 00:30:05.920 [2024-07-14 04:02:24.651799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.920 [2024-07-14 04:02:24.651972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.920 [2024-07-14 04:02:24.651997] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.920 qpair failed and we were unable to recover it. 00:30:05.920 [2024-07-14 04:02:24.652174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.920 [2024-07-14 04:02:24.652326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.920 [2024-07-14 04:02:24.652350] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.920 qpair failed and we were unable to recover it. 
00:30:05.920 [2024-07-14 04:02:24.652492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.920 [2024-07-14 04:02:24.652633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.920 [2024-07-14 04:02:24.652662] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.920 qpair failed and we were unable to recover it. 00:30:05.920 [2024-07-14 04:02:24.652807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.920 [2024-07-14 04:02:24.652979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.920 [2024-07-14 04:02:24.653005] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.920 qpair failed and we were unable to recover it. 00:30:05.920 [2024-07-14 04:02:24.653179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.920 [2024-07-14 04:02:24.653329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.920 [2024-07-14 04:02:24.653355] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.920 qpair failed and we were unable to recover it. 00:30:05.920 [2024-07-14 04:02:24.653531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.920 [2024-07-14 04:02:24.653703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.920 [2024-07-14 04:02:24.653728] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.920 qpair failed and we were unable to recover it. 00:30:05.920 [2024-07-14 04:02:24.653873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.920 [2024-07-14 04:02:24.654047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.920 [2024-07-14 04:02:24.654071] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.920 qpair failed and we were unable to recover it. 00:30:05.920 [2024-07-14 04:02:24.654239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.920 [2024-07-14 04:02:24.654389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.920 [2024-07-14 04:02:24.654413] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.920 qpair failed and we were unable to recover it. 00:30:05.920 [2024-07-14 04:02:24.654604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.920 [2024-07-14 04:02:24.654746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.920 [2024-07-14 04:02:24.654769] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.920 qpair failed and we were unable to recover it. 
00:30:05.920 [2024-07-14 04:02:24.654944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.920 [2024-07-14 04:02:24.655121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.920 [2024-07-14 04:02:24.655146] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.920 qpair failed and we were unable to recover it. 00:30:05.920 [2024-07-14 04:02:24.655298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.921 [2024-07-14 04:02:24.655446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.921 [2024-07-14 04:02:24.655471] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.921 qpair failed and we were unable to recover it. 00:30:05.921 [2024-07-14 04:02:24.655643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.921 [2024-07-14 04:02:24.655787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.921 [2024-07-14 04:02:24.655813] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.921 qpair failed and we were unable to recover it. 00:30:05.921 [2024-07-14 04:02:24.655989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.921 [2024-07-14 04:02:24.656141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.921 [2024-07-14 04:02:24.656172] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.921 qpair failed and we were unable to recover it. 00:30:05.921 [2024-07-14 04:02:24.656328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.921 [2024-07-14 04:02:24.656470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.921 [2024-07-14 04:02:24.656494] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.921 qpair failed and we were unable to recover it. 00:30:05.921 [2024-07-14 04:02:24.656669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.921 [2024-07-14 04:02:24.656825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.921 [2024-07-14 04:02:24.656849] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.921 qpair failed and we were unable to recover it. 00:30:05.921 [2024-07-14 04:02:24.657005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.921 [2024-07-14 04:02:24.657152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.921 [2024-07-14 04:02:24.657176] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.921 qpair failed and we were unable to recover it. 
00:30:05.921 [2024-07-14 04:02:24.657368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.921 [2024-07-14 04:02:24.657512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.921 [2024-07-14 04:02:24.657536] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.921 qpair failed and we were unable to recover it. 00:30:05.921 [2024-07-14 04:02:24.657681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.921 [2024-07-14 04:02:24.657857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.921 [2024-07-14 04:02:24.657898] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.921 qpair failed and we were unable to recover it. 00:30:05.921 [2024-07-14 04:02:24.658041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.921 [2024-07-14 04:02:24.658187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.921 [2024-07-14 04:02:24.658212] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.921 qpair failed and we were unable to recover it. 00:30:05.921 [2024-07-14 04:02:24.658352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.921 [2024-07-14 04:02:24.658524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.921 [2024-07-14 04:02:24.658550] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.921 qpair failed and we were unable to recover it. 00:30:05.921 [2024-07-14 04:02:24.658716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.921 [2024-07-14 04:02:24.658891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.921 [2024-07-14 04:02:24.658916] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.921 qpair failed and we were unable to recover it. 00:30:05.921 [2024-07-14 04:02:24.659094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.921 [2024-07-14 04:02:24.659285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.921 [2024-07-14 04:02:24.659310] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.921 qpair failed and we were unable to recover it. 00:30:05.921 [2024-07-14 04:02:24.659460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.921 [2024-07-14 04:02:24.659602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.921 [2024-07-14 04:02:24.659627] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.921 qpair failed and we were unable to recover it. 
00:30:05.921 [2024-07-14 04:02:24.659811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.921 [2024-07-14 04:02:24.659965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.921 [2024-07-14 04:02:24.659998] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.921 qpair failed and we were unable to recover it. 00:30:05.921 [2024-07-14 04:02:24.660159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.921 [2024-07-14 04:02:24.660312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.921 [2024-07-14 04:02:24.660337] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.921 qpair failed and we were unable to recover it. 00:30:05.921 [2024-07-14 04:02:24.660477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.921 [2024-07-14 04:02:24.660620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.921 [2024-07-14 04:02:24.660643] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.921 qpair failed and we were unable to recover it. 00:30:05.921 [2024-07-14 04:02:24.660816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.921 [2024-07-14 04:02:24.660981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.921 [2024-07-14 04:02:24.661006] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.921 qpair failed and we were unable to recover it. 00:30:05.921 [2024-07-14 04:02:24.661157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.921 [2024-07-14 04:02:24.661333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.921 [2024-07-14 04:02:24.661358] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.921 qpair failed and we were unable to recover it. 00:30:05.921 [2024-07-14 04:02:24.661518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.921 [2024-07-14 04:02:24.661664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.921 [2024-07-14 04:02:24.661689] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.921 qpair failed and we were unable to recover it. 00:30:05.921 [2024-07-14 04:02:24.661848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.921 [2024-07-14 04:02:24.662006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.921 [2024-07-14 04:02:24.662032] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.921 qpair failed and we were unable to recover it. 
00:30:05.921 [2024-07-14 04:02:24.662190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.921 [2024-07-14 04:02:24.662371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.921 [2024-07-14 04:02:24.662395] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.921 qpair failed and we were unable to recover it. 00:30:05.921 [2024-07-14 04:02:24.662577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.921 [2024-07-14 04:02:24.662729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.921 [2024-07-14 04:02:24.662754] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.921 qpair failed and we were unable to recover it. 00:30:05.921 [2024-07-14 04:02:24.662931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.921 [2024-07-14 04:02:24.663093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.921 [2024-07-14 04:02:24.663117] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.921 qpair failed and we were unable to recover it. 00:30:05.921 [2024-07-14 04:02:24.663272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.921 [2024-07-14 04:02:24.663442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.921 [2024-07-14 04:02:24.663467] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.921 qpair failed and we were unable to recover it. 00:30:05.921 [2024-07-14 04:02:24.663643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.921 [2024-07-14 04:02:24.663785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.921 [2024-07-14 04:02:24.663808] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.921 qpair failed and we were unable to recover it. 00:30:05.921 [2024-07-14 04:02:24.663976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.921 [2024-07-14 04:02:24.664153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.921 [2024-07-14 04:02:24.664178] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.921 qpair failed and we were unable to recover it. 00:30:05.921 [2024-07-14 04:02:24.664341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.921 [2024-07-14 04:02:24.664519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.921 [2024-07-14 04:02:24.664543] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.921 qpair failed and we were unable to recover it. 
00:30:05.921 [2024-07-14 04:02:24.664724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.921 [2024-07-14 04:02:24.664888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.921 [2024-07-14 04:02:24.664914] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.921 qpair failed and we were unable to recover it. 00:30:05.921 [2024-07-14 04:02:24.665063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.921 [2024-07-14 04:02:24.665241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.921 [2024-07-14 04:02:24.665266] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.921 qpair failed and we were unable to recover it. 00:30:05.921 [2024-07-14 04:02:24.665417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.921 [2024-07-14 04:02:24.665560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.921 [2024-07-14 04:02:24.665583] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.921 qpair failed and we were unable to recover it. 00:30:05.921 [2024-07-14 04:02:24.665754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.921 [2024-07-14 04:02:24.665926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.922 [2024-07-14 04:02:24.665952] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.922 qpair failed and we were unable to recover it. 00:30:05.922 [2024-07-14 04:02:24.666108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.922 [2024-07-14 04:02:24.666278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.922 [2024-07-14 04:02:24.666302] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.922 qpair failed and we were unable to recover it. 00:30:05.922 [2024-07-14 04:02:24.666465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.922 [2024-07-14 04:02:24.666613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.922 [2024-07-14 04:02:24.666637] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.922 qpair failed and we were unable to recover it. 00:30:05.922 [2024-07-14 04:02:24.666804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.922 [2024-07-14 04:02:24.666975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.922 [2024-07-14 04:02:24.667000] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.922 qpair failed and we were unable to recover it. 
00:30:05.922 [2024-07-14 04:02:24.667177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.922 [2024-07-14 04:02:24.667321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.922 [2024-07-14 04:02:24.667344] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.922 qpair failed and we were unable to recover it. 00:30:05.922 [2024-07-14 04:02:24.667496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.922 [2024-07-14 04:02:24.667673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.922 [2024-07-14 04:02:24.667697] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.922 qpair failed and we were unable to recover it. 00:30:05.922 [2024-07-14 04:02:24.667843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.922 [2024-07-14 04:02:24.668043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.922 [2024-07-14 04:02:24.668069] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.922 qpair failed and we were unable to recover it. 00:30:05.922 [2024-07-14 04:02:24.668214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.922 [2024-07-14 04:02:24.668359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.922 [2024-07-14 04:02:24.668384] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.922 qpair failed and we were unable to recover it. 00:30:05.922 [2024-07-14 04:02:24.668556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.922 [2024-07-14 04:02:24.668722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.922 [2024-07-14 04:02:24.668748] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.922 qpair failed and we were unable to recover it. 00:30:05.922 [2024-07-14 04:02:24.668913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.922 [2024-07-14 04:02:24.669068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.922 [2024-07-14 04:02:24.669093] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.922 qpair failed and we were unable to recover it. 00:30:05.922 [2024-07-14 04:02:24.669241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.922 [2024-07-14 04:02:24.669420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.922 [2024-07-14 04:02:24.669445] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.922 qpair failed and we were unable to recover it. 
00:30:05.922 [2024-07-14 04:02:24.669649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.922 [2024-07-14 04:02:24.669795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.922 [2024-07-14 04:02:24.669821] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.922 qpair failed and we were unable to recover it. 00:30:05.922 [2024-07-14 04:02:24.669992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.922 [2024-07-14 04:02:24.670162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.922 [2024-07-14 04:02:24.670186] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.922 qpair failed and we were unable to recover it. 00:30:05.922 [2024-07-14 04:02:24.670369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.922 [2024-07-14 04:02:24.670519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.922 [2024-07-14 04:02:24.670545] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.922 qpair failed and we were unable to recover it. 00:30:05.922 [2024-07-14 04:02:24.670720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.922 [2024-07-14 04:02:24.670878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.922 [2024-07-14 04:02:24.670904] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.922 qpair failed and we were unable to recover it. 00:30:05.922 [2024-07-14 04:02:24.671079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.922 [2024-07-14 04:02:24.671224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.922 [2024-07-14 04:02:24.671250] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.922 qpair failed and we were unable to recover it. 00:30:05.922 [2024-07-14 04:02:24.671397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.922 [2024-07-14 04:02:24.671573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.922 [2024-07-14 04:02:24.671598] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.922 qpair failed and we were unable to recover it. 00:30:05.922 [2024-07-14 04:02:24.671764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.922 [2024-07-14 04:02:24.671918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.922 [2024-07-14 04:02:24.671943] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.922 qpair failed and we were unable to recover it. 
00:30:05.922 [2024-07-14 04:02:24.672088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.922 [2024-07-14 04:02:24.672238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.922 [2024-07-14 04:02:24.672262] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.922 qpair failed and we were unable to recover it. 00:30:05.922 [2024-07-14 04:02:24.672400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.922 [2024-07-14 04:02:24.672573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.922 [2024-07-14 04:02:24.672598] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.922 qpair failed and we were unable to recover it. 00:30:05.922 [2024-07-14 04:02:24.672748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.922 [2024-07-14 04:02:24.672899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.922 [2024-07-14 04:02:24.672925] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.922 qpair failed and we were unable to recover it. 00:30:05.922 [2024-07-14 04:02:24.673077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.922 [2024-07-14 04:02:24.673215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.922 [2024-07-14 04:02:24.673240] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.922 qpair failed and we were unable to recover it. 00:30:05.922 [2024-07-14 04:02:24.673401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.922 [2024-07-14 04:02:24.673536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.922 [2024-07-14 04:02:24.673560] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.922 qpair failed and we were unable to recover it. 00:30:05.922 [2024-07-14 04:02:24.673718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.922 [2024-07-14 04:02:24.673860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.922 [2024-07-14 04:02:24.673890] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.922 qpair failed and we were unable to recover it. 00:30:05.922 [2024-07-14 04:02:24.674066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.922 [2024-07-14 04:02:24.674217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.922 [2024-07-14 04:02:24.674241] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.922 qpair failed and we were unable to recover it. 
00:30:05.922 [2024-07-14 04:02:24.674392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.922 [2024-07-14 04:02:24.674535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.922 [2024-07-14 04:02:24.674558] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.922 qpair failed and we were unable to recover it. 00:30:05.922 [2024-07-14 04:02:24.674747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.922 [2024-07-14 04:02:24.674914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.923 [2024-07-14 04:02:24.674938] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.923 qpair failed and we were unable to recover it. 00:30:05.923 [2024-07-14 04:02:24.675094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.923 [2024-07-14 04:02:24.675247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.923 [2024-07-14 04:02:24.675272] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.923 qpair failed and we were unable to recover it. 00:30:05.923 [2024-07-14 04:02:24.675418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.923 [2024-07-14 04:02:24.675561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.923 [2024-07-14 04:02:24.675585] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.923 qpair failed and we were unable to recover it. 00:30:05.923 [2024-07-14 04:02:24.675771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.923 [2024-07-14 04:02:24.675952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.923 [2024-07-14 04:02:24.675977] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.923 qpair failed and we were unable to recover it. 00:30:05.923 [2024-07-14 04:02:24.676176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.923 [2024-07-14 04:02:24.676333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.923 [2024-07-14 04:02:24.676358] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.923 qpair failed and we were unable to recover it. 00:30:05.923 [2024-07-14 04:02:24.676502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.923 [2024-07-14 04:02:24.676665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.923 [2024-07-14 04:02:24.676691] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.923 qpair failed and we were unable to recover it. 
00:30:05.923 [2024-07-14 04:02:24.676858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.923 [2024-07-14 04:02:24.677037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.923 [2024-07-14 04:02:24.677061] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.923 qpair failed and we were unable to recover it. 00:30:05.923 [2024-07-14 04:02:24.677215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.923 [2024-07-14 04:02:24.677393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.923 [2024-07-14 04:02:24.677418] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.923 qpair failed and we were unable to recover it. 00:30:05.923 [2024-07-14 04:02:24.677622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.923 [2024-07-14 04:02:24.677769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.923 [2024-07-14 04:02:24.677793] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.923 qpair failed and we were unable to recover it. 00:30:05.923 [2024-07-14 04:02:24.677966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.923 [2024-07-14 04:02:24.678159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.923 [2024-07-14 04:02:24.678184] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.923 qpair failed and we were unable to recover it. 00:30:05.923 [2024-07-14 04:02:24.678361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.923 [2024-07-14 04:02:24.678533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.923 [2024-07-14 04:02:24.678557] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.923 qpair failed and we were unable to recover it. 00:30:05.923 [2024-07-14 04:02:24.678732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.923 [2024-07-14 04:02:24.678881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.923 [2024-07-14 04:02:24.678907] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.923 qpair failed and we were unable to recover it. 00:30:05.923 [2024-07-14 04:02:24.679070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.923 [2024-07-14 04:02:24.679274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.923 [2024-07-14 04:02:24.679299] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.923 qpair failed and we were unable to recover it. 
00:30:05.923 [2024-07-14 04:02:24.679454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.923 [2024-07-14 04:02:24.679629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.923 [2024-07-14 04:02:24.679653] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.923 qpair failed and we were unable to recover it. 00:30:05.923 [2024-07-14 04:02:24.679831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.923 [2024-07-14 04:02:24.680003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.923 [2024-07-14 04:02:24.680029] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.923 qpair failed and we were unable to recover it. 00:30:05.923 [2024-07-14 04:02:24.680195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.923 [2024-07-14 04:02:24.680346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.923 [2024-07-14 04:02:24.680370] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.923 qpair failed and we were unable to recover it. 00:30:05.923 [2024-07-14 04:02:24.680553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.923 [2024-07-14 04:02:24.680712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.923 [2024-07-14 04:02:24.680736] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.923 qpair failed and we were unable to recover it. 00:30:05.923 [2024-07-14 04:02:24.680888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.923 [2024-07-14 04:02:24.681033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.923 [2024-07-14 04:02:24.681057] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.923 qpair failed and we were unable to recover it. 00:30:05.923 [2024-07-14 04:02:24.681239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.923 [2024-07-14 04:02:24.681417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.923 [2024-07-14 04:02:24.681441] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.923 qpair failed and we were unable to recover it. 00:30:05.923 [2024-07-14 04:02:24.681599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.923 [2024-07-14 04:02:24.681741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.923 [2024-07-14 04:02:24.681765] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.923 qpair failed and we were unable to recover it. 
00:30:05.923 [2024-07-14 04:02:24.681940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.923 [2024-07-14 04:02:24.682095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.923 [2024-07-14 04:02:24.682120] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.923 qpair failed and we were unable to recover it. 00:30:05.923 [2024-07-14 04:02:24.682266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.923 [2024-07-14 04:02:24.682412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.923 [2024-07-14 04:02:24.682436] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.923 qpair failed and we were unable to recover it. 00:30:05.923 [2024-07-14 04:02:24.682592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.923 [2024-07-14 04:02:24.682747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.923 [2024-07-14 04:02:24.682772] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.923 qpair failed and we were unable to recover it. 00:30:05.923 [2024-07-14 04:02:24.682914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.923 [2024-07-14 04:02:24.683057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.923 [2024-07-14 04:02:24.683082] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.923 qpair failed and we were unable to recover it. 00:30:05.923 [2024-07-14 04:02:24.683261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.923 [2024-07-14 04:02:24.683454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.923 [2024-07-14 04:02:24.683478] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.923 qpair failed and we were unable to recover it. 00:30:05.923 [2024-07-14 04:02:24.683627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.923 [2024-07-14 04:02:24.683819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.923 [2024-07-14 04:02:24.683842] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.923 qpair failed and we were unable to recover it. 00:30:05.923 [2024-07-14 04:02:24.683993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.923 [2024-07-14 04:02:24.684140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.923 [2024-07-14 04:02:24.684165] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.923 qpair failed and we were unable to recover it. 
00:30:05.923 [2024-07-14 04:02:24.684333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.923 [2024-07-14 04:02:24.684517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.923 [2024-07-14 04:02:24.684543] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.923 qpair failed and we were unable to recover it. 00:30:05.923 [2024-07-14 04:02:24.684698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.923 [2024-07-14 04:02:24.684844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.923 [2024-07-14 04:02:24.684875] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.923 qpair failed and we were unable to recover it. 00:30:05.923 [2024-07-14 04:02:24.685056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.923 [2024-07-14 04:02:24.685206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.923 [2024-07-14 04:02:24.685230] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.923 qpair failed and we were unable to recover it. 00:30:05.923 [2024-07-14 04:02:24.685404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.923 [2024-07-14 04:02:24.685545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.924 [2024-07-14 04:02:24.685570] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.924 qpair failed and we were unable to recover it. 00:30:05.924 [2024-07-14 04:02:24.685736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.924 [2024-07-14 04:02:24.685910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.924 [2024-07-14 04:02:24.685935] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.924 qpair failed and we were unable to recover it. 00:30:05.924 [2024-07-14 04:02:24.686087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.924 [2024-07-14 04:02:24.686261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.924 [2024-07-14 04:02:24.686286] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.924 qpair failed and we were unable to recover it. 00:30:05.924 [2024-07-14 04:02:24.686459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.924 [2024-07-14 04:02:24.686633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.924 [2024-07-14 04:02:24.686657] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.924 qpair failed and we were unable to recover it. 
00:30:05.924 [2024-07-14 04:02:24.686799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.924 [2024-07-14 04:02:24.687010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.924 [2024-07-14 04:02:24.687036] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.924 qpair failed and we were unable to recover it. 00:30:05.924 [2024-07-14 04:02:24.687184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.924 [2024-07-14 04:02:24.687330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.924 [2024-07-14 04:02:24.687355] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.924 qpair failed and we were unable to recover it. 00:30:05.924 [2024-07-14 04:02:24.687512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.924 [2024-07-14 04:02:24.687654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.924 [2024-07-14 04:02:24.687679] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.924 qpair failed and we were unable to recover it. 00:30:05.924 [2024-07-14 04:02:24.687854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.924 [2024-07-14 04:02:24.688039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.924 [2024-07-14 04:02:24.688065] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.924 qpair failed and we were unable to recover it. 00:30:05.924 [2024-07-14 04:02:24.688209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.924 [2024-07-14 04:02:24.688386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.924 [2024-07-14 04:02:24.688409] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.924 qpair failed and we were unable to recover it. 00:30:05.924 [2024-07-14 04:02:24.688588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.924 [2024-07-14 04:02:24.688734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.924 [2024-07-14 04:02:24.688758] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.924 qpair failed and we were unable to recover it. 00:30:05.924 [2024-07-14 04:02:24.688928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.924 [2024-07-14 04:02:24.689072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.924 [2024-07-14 04:02:24.689097] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.924 qpair failed and we were unable to recover it. 
00:30:05.924 [2024-07-14 04:02:24.689244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.924 [2024-07-14 04:02:24.689438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.924 [2024-07-14 04:02:24.689462] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.924 qpair failed and we were unable to recover it. 00:30:05.924 [2024-07-14 04:02:24.689626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.924 [2024-07-14 04:02:24.689801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.924 [2024-07-14 04:02:24.689826] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.924 qpair failed and we were unable to recover it. 00:30:05.924 [2024-07-14 04:02:24.689992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.924 [2024-07-14 04:02:24.690161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.924 [2024-07-14 04:02:24.690186] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.924 qpair failed and we were unable to recover it. 00:30:05.924 [2024-07-14 04:02:24.690335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.924 [2024-07-14 04:02:24.690489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.924 [2024-07-14 04:02:24.690514] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.924 qpair failed and we were unable to recover it. 00:30:05.924 [2024-07-14 04:02:24.690656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.924 [2024-07-14 04:02:24.690820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.924 [2024-07-14 04:02:24.690844] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.924 qpair failed and we were unable to recover it. 00:30:05.924 [2024-07-14 04:02:24.691018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.924 [2024-07-14 04:02:24.691161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.924 [2024-07-14 04:02:24.691187] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.924 qpair failed and we were unable to recover it. 00:30:05.924 [2024-07-14 04:02:24.691352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.924 [2024-07-14 04:02:24.691559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.924 [2024-07-14 04:02:24.691584] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.924 qpair failed and we were unable to recover it. 
00:30:05.924 [2024-07-14 04:02:24.691732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.924 [2024-07-14 04:02:24.691878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.924 [2024-07-14 04:02:24.691903] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.924 qpair failed and we were unable to recover it. 00:30:05.924 [2024-07-14 04:02:24.692067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.924 [2024-07-14 04:02:24.692244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.924 [2024-07-14 04:02:24.692269] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.924 qpair failed and we were unable to recover it. 00:30:05.924 [2024-07-14 04:02:24.692450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.924 [2024-07-14 04:02:24.692598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.924 [2024-07-14 04:02:24.692625] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.924 qpair failed and we were unable to recover it. 00:30:05.924 [2024-07-14 04:02:24.692773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.924 [2024-07-14 04:02:24.692936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.924 [2024-07-14 04:02:24.692962] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.924 qpair failed and we were unable to recover it. 00:30:05.924 [2024-07-14 04:02:24.693130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.924 [2024-07-14 04:02:24.693312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.924 [2024-07-14 04:02:24.693337] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.924 qpair failed and we were unable to recover it. 00:30:05.924 [2024-07-14 04:02:24.693517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.924 [2024-07-14 04:02:24.693680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.924 [2024-07-14 04:02:24.693705] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.924 qpair failed and we were unable to recover it. 00:30:05.924 [2024-07-14 04:02:24.693892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.924 [2024-07-14 04:02:24.694073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.924 [2024-07-14 04:02:24.694098] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.924 qpair failed and we were unable to recover it. 
00:30:05.924 [2024-07-14 04:02:24.694242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.924 [2024-07-14 04:02:24.694401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.924 [2024-07-14 04:02:24.694425] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.924 qpair failed and we were unable to recover it. 00:30:05.924 [2024-07-14 04:02:24.694597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.924 [2024-07-14 04:02:24.694760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.924 [2024-07-14 04:02:24.694784] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.924 qpair failed and we were unable to recover it. 00:30:05.924 [2024-07-14 04:02:24.694937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.924 [2024-07-14 04:02:24.695094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.924 [2024-07-14 04:02:24.695119] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.924 qpair failed and we were unable to recover it. 00:30:05.924 [2024-07-14 04:02:24.695299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.924 [2024-07-14 04:02:24.695474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.924 [2024-07-14 04:02:24.695498] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.924 qpair failed and we were unable to recover it. 00:30:05.924 [2024-07-14 04:02:24.695698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.924 [2024-07-14 04:02:24.695844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.924 [2024-07-14 04:02:24.695876] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.924 qpair failed and we were unable to recover it. 00:30:05.924 [2024-07-14 04:02:24.696052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.924 [2024-07-14 04:02:24.696226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.924 [2024-07-14 04:02:24.696250] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.924 qpair failed and we were unable to recover it. 00:30:05.925 [2024-07-14 04:02:24.696398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.925 [2024-07-14 04:02:24.696580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.925 [2024-07-14 04:02:24.696606] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.925 qpair failed and we were unable to recover it. 
00:30:05.925 [2024-07-14 04:02:24.696762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.925 [2024-07-14 04:02:24.696914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.925 [2024-07-14 04:02:24.696939] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.925 qpair failed and we were unable to recover it. 00:30:05.925 [2024-07-14 04:02:24.697094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.925 [2024-07-14 04:02:24.697275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.925 [2024-07-14 04:02:24.697298] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.925 qpair failed and we were unable to recover it. 00:30:05.925 [2024-07-14 04:02:24.697487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.925 [2024-07-14 04:02:24.697629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.925 [2024-07-14 04:02:24.697653] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.925 qpair failed and we were unable to recover it. 00:30:05.925 [2024-07-14 04:02:24.697802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.925 [2024-07-14 04:02:24.697983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.925 [2024-07-14 04:02:24.698008] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.925 qpair failed and we were unable to recover it. 00:30:05.925 [2024-07-14 04:02:24.698185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.925 [2024-07-14 04:02:24.698329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.925 [2024-07-14 04:02:24.698355] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.925 qpair failed and we were unable to recover it. 00:30:05.925 [2024-07-14 04:02:24.698553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.925 [2024-07-14 04:02:24.698704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.925 [2024-07-14 04:02:24.698733] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.925 qpair failed and we were unable to recover it. 00:30:05.925 [2024-07-14 04:02:24.698908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.925 [2024-07-14 04:02:24.699056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.925 [2024-07-14 04:02:24.699082] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.925 qpair failed and we were unable to recover it. 
00:30:05.925 [2024-07-14 04:02:24.699256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.925 [2024-07-14 04:02:24.699396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.925 [2024-07-14 04:02:24.699420] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.925 qpair failed and we were unable to recover it. 00:30:05.925 [2024-07-14 04:02:24.699595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.925 [2024-07-14 04:02:24.699763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.925 [2024-07-14 04:02:24.699788] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.925 qpair failed and we were unable to recover it. 00:30:05.925 [2024-07-14 04:02:24.699971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.925 [2024-07-14 04:02:24.700158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.925 [2024-07-14 04:02:24.700183] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.925 qpair failed and we were unable to recover it. 00:30:05.925 [2024-07-14 04:02:24.700334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.925 [2024-07-14 04:02:24.700487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.925 [2024-07-14 04:02:24.700511] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.925 qpair failed and we were unable to recover it. 00:30:05.925 [2024-07-14 04:02:24.700686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.925 [2024-07-14 04:02:24.700832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.925 [2024-07-14 04:02:24.700857] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.925 qpair failed and we were unable to recover it. 00:30:05.925 [2024-07-14 04:02:24.701039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.925 [2024-07-14 04:02:24.701184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.925 [2024-07-14 04:02:24.701208] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.925 qpair failed and we were unable to recover it. 00:30:05.925 [2024-07-14 04:02:24.701392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.925 [2024-07-14 04:02:24.701552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.925 [2024-07-14 04:02:24.701577] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.925 qpair failed and we were unable to recover it. 
00:30:05.925 [2024-07-14 04:02:24.701722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.925 [2024-07-14 04:02:24.701896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.925 [2024-07-14 04:02:24.701922] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.925 qpair failed and we were unable to recover it. 00:30:05.925 [2024-07-14 04:02:24.702097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.925 [2024-07-14 04:02:24.702251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.925 [2024-07-14 04:02:24.702280] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.925 qpair failed and we were unable to recover it. 00:30:05.925 [2024-07-14 04:02:24.702432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.925 [2024-07-14 04:02:24.702583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.925 [2024-07-14 04:02:24.702608] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.925 qpair failed and we were unable to recover it. 00:30:05.925 [2024-07-14 04:02:24.702760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.925 [2024-07-14 04:02:24.702964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.925 [2024-07-14 04:02:24.702990] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.925 qpair failed and we were unable to recover it. 00:30:05.925 [2024-07-14 04:02:24.703137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.925 [2024-07-14 04:02:24.703280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.925 [2024-07-14 04:02:24.703304] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.925 qpair failed and we were unable to recover it. 00:30:05.925 [2024-07-14 04:02:24.703507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.925 [2024-07-14 04:02:24.703647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.925 [2024-07-14 04:02:24.703672] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.925 qpair failed and we were unable to recover it. 00:30:05.925 [2024-07-14 04:02:24.703877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.925 [2024-07-14 04:02:24.704039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.925 [2024-07-14 04:02:24.704063] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.925 qpair failed and we were unable to recover it. 
00:30:05.925 [2024-07-14 04:02:24.704248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.925 [2024-07-14 04:02:24.704405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.925 [2024-07-14 04:02:24.704430] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.925 qpair failed and we were unable to recover it. 00:30:05.925 [2024-07-14 04:02:24.704618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.925 [2024-07-14 04:02:24.704772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.925 [2024-07-14 04:02:24.704797] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.925 qpair failed and we were unable to recover it. 00:30:05.925 [2024-07-14 04:02:24.704944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.925 [2024-07-14 04:02:24.705083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.925 [2024-07-14 04:02:24.705108] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.925 qpair failed and we were unable to recover it. 00:30:05.925 [2024-07-14 04:02:24.705278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.925 [2024-07-14 04:02:24.705424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.925 [2024-07-14 04:02:24.705449] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.925 qpair failed and we were unable to recover it. 00:30:05.925 [2024-07-14 04:02:24.705596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.925 [2024-07-14 04:02:24.705767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.925 [2024-07-14 04:02:24.705795] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.925 qpair failed and we were unable to recover it. 00:30:05.925 [2024-07-14 04:02:24.705957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.925 [2024-07-14 04:02:24.706112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.925 [2024-07-14 04:02:24.706138] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.925 qpair failed and we were unable to recover it. 00:30:05.925 [2024-07-14 04:02:24.706295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.925 [2024-07-14 04:02:24.706440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.925 [2024-07-14 04:02:24.706465] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.925 qpair failed and we were unable to recover it. 
00:30:05.925 [2024-07-14 04:02:24.706646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.925 [2024-07-14 04:02:24.706790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.925 [2024-07-14 04:02:24.706816] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.925 qpair failed and we were unable to recover it. 00:30:05.925 [2024-07-14 04:02:24.706977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.926 [2024-07-14 04:02:24.707150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.926 [2024-07-14 04:02:24.707175] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.926 qpair failed and we were unable to recover it. 00:30:05.926 [2024-07-14 04:02:24.707328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.926 [2024-07-14 04:02:24.707505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.926 [2024-07-14 04:02:24.707529] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.926 qpair failed and we were unable to recover it. 00:30:05.926 [2024-07-14 04:02:24.707676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.926 [2024-07-14 04:02:24.707857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.926 [2024-07-14 04:02:24.707888] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.926 qpair failed and we were unable to recover it. 00:30:05.926 [2024-07-14 04:02:24.708054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.926 [2024-07-14 04:02:24.708228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.926 [2024-07-14 04:02:24.708252] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.926 qpair failed and we were unable to recover it. 00:30:05.926 [2024-07-14 04:02:24.708392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.926 [2024-07-14 04:02:24.708566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.926 [2024-07-14 04:02:24.708591] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.926 qpair failed and we were unable to recover it. 00:30:05.926 [2024-07-14 04:02:24.708774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.926 [2024-07-14 04:02:24.708920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.926 [2024-07-14 04:02:24.708944] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.926 qpair failed and we were unable to recover it. 
00:30:05.926 [2024-07-14 04:02:24.709087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.926 [2024-07-14 04:02:24.709232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.926 [2024-07-14 04:02:24.709263] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.926 qpair failed and we were unable to recover it. 00:30:05.926 [2024-07-14 04:02:24.709432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.926 [2024-07-14 04:02:24.709574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.926 [2024-07-14 04:02:24.709601] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.926 qpair failed and we were unable to recover it. 00:30:05.926 [2024-07-14 04:02:24.709803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.926 [2024-07-14 04:02:24.709950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.926 [2024-07-14 04:02:24.709975] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.926 qpair failed and we were unable to recover it. 00:30:05.926 [2024-07-14 04:02:24.710148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.926 [2024-07-14 04:02:24.710311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.926 [2024-07-14 04:02:24.710335] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.926 qpair failed and we were unable to recover it. 00:30:05.926 [2024-07-14 04:02:24.710513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.926 [2024-07-14 04:02:24.710672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.926 [2024-07-14 04:02:24.710697] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.926 qpair failed and we were unable to recover it. 00:30:05.926 [2024-07-14 04:02:24.710855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.926 [2024-07-14 04:02:24.711026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.926 [2024-07-14 04:02:24.711051] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.926 qpair failed and we were unable to recover it. 00:30:05.926 [2024-07-14 04:02:24.711228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.926 [2024-07-14 04:02:24.711428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.926 [2024-07-14 04:02:24.711453] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.926 qpair failed and we were unable to recover it. 
00:30:05.926 [2024-07-14 04:02:24.711597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.926 [2024-07-14 04:02:24.711739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.926 [2024-07-14 04:02:24.711764] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.926 qpair failed and we were unable to recover it. 00:30:05.926 [2024-07-14 04:02:24.711929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.926 [2024-07-14 04:02:24.712122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.926 [2024-07-14 04:02:24.712147] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.926 qpair failed and we were unable to recover it. 00:30:05.926 [2024-07-14 04:02:24.712321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.926 [2024-07-14 04:02:24.712477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.926 [2024-07-14 04:02:24.712504] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.926 qpair failed and we were unable to recover it. 00:30:05.926 [2024-07-14 04:02:24.712671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.926 [2024-07-14 04:02:24.712850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.926 [2024-07-14 04:02:24.712889] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.926 qpair failed and we were unable to recover it. 00:30:05.926 [2024-07-14 04:02:24.713050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.926 [2024-07-14 04:02:24.713192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.926 [2024-07-14 04:02:24.713217] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.926 qpair failed and we were unable to recover it. 00:30:05.926 [2024-07-14 04:02:24.713363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.926 [2024-07-14 04:02:24.713536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.926 [2024-07-14 04:02:24.713561] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.926 qpair failed and we were unable to recover it. 00:30:05.926 [2024-07-14 04:02:24.713715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.926 [2024-07-14 04:02:24.713861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.926 [2024-07-14 04:02:24.713894] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.926 qpair failed and we were unable to recover it. 
00:30:05.926 [2024-07-14 04:02:24.714070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.926 [2024-07-14 04:02:24.714239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.926 [2024-07-14 04:02:24.714264] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.926 qpair failed and we were unable to recover it. 00:30:05.926 [2024-07-14 04:02:24.714412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.926 [2024-07-14 04:02:24.714611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.926 [2024-07-14 04:02:24.714636] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.926 qpair failed and we were unable to recover it. 00:30:05.926 [2024-07-14 04:02:24.714795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.926 [2024-07-14 04:02:24.714941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.926 [2024-07-14 04:02:24.714966] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.926 qpair failed and we were unable to recover it. 00:30:05.926 [2024-07-14 04:02:24.715114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.926 [2024-07-14 04:02:24.715284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.926 [2024-07-14 04:02:24.715309] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.926 qpair failed and we were unable to recover it. 00:30:05.926 [2024-07-14 04:02:24.715486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.926 [2024-07-14 04:02:24.715655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.926 [2024-07-14 04:02:24.715679] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.926 qpair failed and we were unable to recover it. 00:30:05.926 [2024-07-14 04:02:24.715821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.926 [2024-07-14 04:02:24.715967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.926 [2024-07-14 04:02:24.715992] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.926 qpair failed and we were unable to recover it. 00:30:05.926 [2024-07-14 04:02:24.716165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.926 [2024-07-14 04:02:24.716340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.927 [2024-07-14 04:02:24.716366] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.927 qpair failed and we were unable to recover it. 
00:30:05.927 [2024-07-14 04:02:24.716556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:30:05.927 [2024-07-14 04:02:24.716701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:30:05.927 [2024-07-14 04:02:24.716727] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 
00:30:05.927 qpair failed and we were unable to recover it. 
00:30:05.927 [... the same three-message sequence (two posix_sock_create connect() failures with errno = 111, one nvme_tcp_qpair_connect_sock "sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420", followed by "qpair failed and we were unable to recover it.") repeats without variation for every retry from timestamp 2024-07-14 04:02:24.716 through 04:02:24.770, elapsed time 00:30:05.927-00:30:05.932 ...]
00:30:05.932 [2024-07-14 04:02:24.770702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.932 [2024-07-14 04:02:24.770843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.932 [2024-07-14 04:02:24.770882] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.932 qpair failed and we were unable to recover it. 00:30:05.932 [2024-07-14 04:02:24.771148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.932 [2024-07-14 04:02:24.771342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.932 [2024-07-14 04:02:24.771367] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.932 qpair failed and we were unable to recover it. 00:30:05.932 [2024-07-14 04:02:24.771545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.932 [2024-07-14 04:02:24.771691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.932 [2024-07-14 04:02:24.771716] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.932 qpair failed and we were unable to recover it. 00:30:05.932 [2024-07-14 04:02:24.771882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.932 [2024-07-14 04:02:24.772056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.932 [2024-07-14 04:02:24.772081] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.932 qpair failed and we were unable to recover it. 00:30:05.932 [2024-07-14 04:02:24.772267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.932 [2024-07-14 04:02:24.772431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.932 [2024-07-14 04:02:24.772455] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.932 qpair failed and we were unable to recover it. 00:30:05.932 [2024-07-14 04:02:24.772630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.932 [2024-07-14 04:02:24.772786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.932 [2024-07-14 04:02:24.772814] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.932 qpair failed and we were unable to recover it. 00:30:05.932 [2024-07-14 04:02:24.773072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.932 [2024-07-14 04:02:24.773217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.932 [2024-07-14 04:02:24.773241] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.932 qpair failed and we were unable to recover it. 
00:30:05.932 [2024-07-14 04:02:24.773423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.932 [2024-07-14 04:02:24.773677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.932 [2024-07-14 04:02:24.773702] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.932 qpair failed and we were unable to recover it. 00:30:05.932 [2024-07-14 04:02:24.773848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.932 [2024-07-14 04:02:24.774002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.932 [2024-07-14 04:02:24.774027] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.932 qpair failed and we were unable to recover it. 00:30:05.932 [2024-07-14 04:02:24.774188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.932 [2024-07-14 04:02:24.774347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.932 [2024-07-14 04:02:24.774372] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.932 qpair failed and we were unable to recover it. 00:30:05.932 [2024-07-14 04:02:24.774548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.932 [2024-07-14 04:02:24.774690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.932 [2024-07-14 04:02:24.774715] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.932 qpair failed and we were unable to recover it. 00:30:05.932 [2024-07-14 04:02:24.774887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.932 [2024-07-14 04:02:24.775032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.932 [2024-07-14 04:02:24.775058] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.932 qpair failed and we were unable to recover it. 00:30:05.932 [2024-07-14 04:02:24.775199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.932 [2024-07-14 04:02:24.775364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.932 [2024-07-14 04:02:24.775389] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.932 qpair failed and we were unable to recover it. 00:30:05.932 [2024-07-14 04:02:24.775535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.932 [2024-07-14 04:02:24.775704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.932 [2024-07-14 04:02:24.775728] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.932 qpair failed and we were unable to recover it. 
00:30:05.932 [2024-07-14 04:02:24.775904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.932 [2024-07-14 04:02:24.776048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.932 [2024-07-14 04:02:24.776073] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.932 qpair failed and we were unable to recover it. 00:30:05.932 [2024-07-14 04:02:24.776223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.932 [2024-07-14 04:02:24.776373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.932 [2024-07-14 04:02:24.776399] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.932 qpair failed and we were unable to recover it. 00:30:05.932 [2024-07-14 04:02:24.776546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.932 [2024-07-14 04:02:24.776693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.932 [2024-07-14 04:02:24.776717] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.932 qpair failed and we were unable to recover it. 00:30:05.932 [2024-07-14 04:02:24.776899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.932 [2024-07-14 04:02:24.777049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.932 [2024-07-14 04:02:24.777073] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.932 qpair failed and we were unable to recover it. 00:30:05.932 [2024-07-14 04:02:24.777224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.932 [2024-07-14 04:02:24.777373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.932 [2024-07-14 04:02:24.777400] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.932 qpair failed and we were unable to recover it. 00:30:05.932 [2024-07-14 04:02:24.777570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.932 [2024-07-14 04:02:24.777713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.932 [2024-07-14 04:02:24.777737] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.932 qpair failed and we were unable to recover it. 00:30:05.932 [2024-07-14 04:02:24.777891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.932 [2024-07-14 04:02:24.778042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.932 [2024-07-14 04:02:24.778067] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.932 qpair failed and we were unable to recover it. 
00:30:05.932 [2024-07-14 04:02:24.778240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.932 [2024-07-14 04:02:24.778380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.932 [2024-07-14 04:02:24.778405] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.933 qpair failed and we were unable to recover it. 00:30:05.933 [2024-07-14 04:02:24.778554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.933 [2024-07-14 04:02:24.778722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.933 [2024-07-14 04:02:24.778747] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.933 qpair failed and we were unable to recover it. 00:30:05.933 [2024-07-14 04:02:24.778893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.933 [2024-07-14 04:02:24.779052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.933 [2024-07-14 04:02:24.779077] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.933 qpair failed and we were unable to recover it. 00:30:05.933 [2024-07-14 04:02:24.779234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.933 [2024-07-14 04:02:24.779386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.933 [2024-07-14 04:02:24.779412] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.933 qpair failed and we were unable to recover it. 00:30:05.933 [2024-07-14 04:02:24.779562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.933 [2024-07-14 04:02:24.779717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.933 [2024-07-14 04:02:24.779741] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.933 qpair failed and we were unable to recover it. 00:30:05.933 [2024-07-14 04:02:24.779913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.933 [2024-07-14 04:02:24.780055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.933 [2024-07-14 04:02:24.780079] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.933 qpair failed and we were unable to recover it. 00:30:05.933 [2024-07-14 04:02:24.780240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.933 [2024-07-14 04:02:24.780385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.933 [2024-07-14 04:02:24.780412] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.933 qpair failed and we were unable to recover it. 
00:30:05.933 [2024-07-14 04:02:24.780595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.933 [2024-07-14 04:02:24.780742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.933 [2024-07-14 04:02:24.780767] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.933 qpair failed and we were unable to recover it. 00:30:05.933 [2024-07-14 04:02:24.780913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.933 [2024-07-14 04:02:24.781055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.933 [2024-07-14 04:02:24.781080] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.933 qpair failed and we were unable to recover it. 00:30:05.933 [2024-07-14 04:02:24.781251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.933 [2024-07-14 04:02:24.781421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.933 [2024-07-14 04:02:24.781446] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.933 qpair failed and we were unable to recover it. 00:30:05.933 [2024-07-14 04:02:24.781605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.933 [2024-07-14 04:02:24.781760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.933 [2024-07-14 04:02:24.781786] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.933 qpair failed and we were unable to recover it. 00:30:05.933 [2024-07-14 04:02:24.781938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.933 [2024-07-14 04:02:24.782077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.933 [2024-07-14 04:02:24.782101] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.933 qpair failed and we were unable to recover it. 00:30:05.933 [2024-07-14 04:02:24.782268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.933 [2024-07-14 04:02:24.782595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.933 [2024-07-14 04:02:24.782620] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.933 qpair failed and we were unable to recover it. 00:30:05.933 [2024-07-14 04:02:24.782823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.933 [2024-07-14 04:02:24.782996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.933 [2024-07-14 04:02:24.783022] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.933 qpair failed and we were unable to recover it. 
00:30:05.933 [2024-07-14 04:02:24.783183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.933 [2024-07-14 04:02:24.783381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.933 [2024-07-14 04:02:24.783406] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420
00:30:05.933 qpair failed and we were unable to recover it.
00:30:05.933 [2024-07-14 04:02:24.783576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.933 04:02:24 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:30:05.933 [2024-07-14 04:02:24.783747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.933 [2024-07-14 04:02:24.783773] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420
00:30:05.933 qpair failed and we were unable to recover it.
00:30:05.933 04:02:24 -- common/autotest_common.sh@852 -- # return 0
00:30:05.933 [2024-07-14 04:02:24.783955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.933 04:02:24 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt
00:30:05.933 [2024-07-14 04:02:24.784105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.933 [2024-07-14 04:02:24.784130] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420
00:30:05.933 qpair failed and we were unable to recover it.
00:30:05.933 04:02:24 -- common/autotest_common.sh@718 -- # xtrace_disable
00:30:05.933 [2024-07-14 04:02:24.784305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.933 04:02:24 -- common/autotest_common.sh@10 -- # set +x
00:30:05.933 [2024-07-14 04:02:24.784452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.933 [2024-07-14 04:02:24.784477] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420
00:30:05.933 qpair failed and we were unable to recover it.
00:30:05.933 [2024-07-14 04:02:24.784633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.933 [2024-07-14 04:02:24.784783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.933 [2024-07-14 04:02:24.784808] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420
00:30:05.933 qpair failed and we were unable to recover it.
00:30:05.933 [2024-07-14 04:02:24.784975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.933 [2024-07-14 04:02:24.785128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.933 [2024-07-14 04:02:24.785152] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420
00:30:05.933 qpair failed and we were unable to recover it.
00:30:05.933 [2024-07-14 04:02:24.785315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.933 [2024-07-14 04:02:24.785460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.933 [2024-07-14 04:02:24.785483] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.933 qpair failed and we were unable to recover it. 00:30:05.933 [2024-07-14 04:02:24.785623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.933 [2024-07-14 04:02:24.785782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.933 [2024-07-14 04:02:24.785806] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.933 qpair failed and we were unable to recover it. 00:30:05.933 [2024-07-14 04:02:24.785961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.933 [2024-07-14 04:02:24.786112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.933 [2024-07-14 04:02:24.786136] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.933 qpair failed and we were unable to recover it. 00:30:05.933 [2024-07-14 04:02:24.786291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.933 [2024-07-14 04:02:24.786428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.933 [2024-07-14 04:02:24.786453] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.933 qpair failed and we were unable to recover it. 00:30:05.933 [2024-07-14 04:02:24.786598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.933 [2024-07-14 04:02:24.786747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.933 [2024-07-14 04:02:24.786774] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.933 qpair failed and we were unable to recover it. 00:30:05.933 [2024-07-14 04:02:24.786953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.933 [2024-07-14 04:02:24.787103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.933 [2024-07-14 04:02:24.787128] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.933 qpair failed and we were unable to recover it. 00:30:05.933 [2024-07-14 04:02:24.787303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.933 [2024-07-14 04:02:24.787443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.933 [2024-07-14 04:02:24.787467] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.933 qpair failed and we were unable to recover it. 
00:30:05.933 [2024-07-14 04:02:24.787631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.933 [2024-07-14 04:02:24.787791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.933 [2024-07-14 04:02:24.787815] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.933 qpair failed and we were unable to recover it. 00:30:05.933 [2024-07-14 04:02:24.787962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.933 [2024-07-14 04:02:24.788115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.933 [2024-07-14 04:02:24.788142] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.933 qpair failed and we were unable to recover it. 00:30:05.933 [2024-07-14 04:02:24.788289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.933 [2024-07-14 04:02:24.788460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.933 [2024-07-14 04:02:24.788485] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.933 qpair failed and we were unable to recover it. 00:30:05.934 [2024-07-14 04:02:24.788625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.934 [2024-07-14 04:02:24.788775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.934 [2024-07-14 04:02:24.788798] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.934 qpair failed and we were unable to recover it. 00:30:05.934 [2024-07-14 04:02:24.788942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.934 [2024-07-14 04:02:24.789118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.934 [2024-07-14 04:02:24.789142] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.934 qpair failed and we were unable to recover it. 00:30:05.934 [2024-07-14 04:02:24.789296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.934 [2024-07-14 04:02:24.789437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.934 [2024-07-14 04:02:24.789462] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.934 qpair failed and we were unable to recover it. 00:30:05.934 [2024-07-14 04:02:24.789665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.934 [2024-07-14 04:02:24.789837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.934 [2024-07-14 04:02:24.789861] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.934 qpair failed and we were unable to recover it. 
00:30:05.934 [2024-07-14 04:02:24.790026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.934 [2024-07-14 04:02:24.790189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.934 [2024-07-14 04:02:24.790215] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.934 qpair failed and we were unable to recover it. 00:30:05.934 [2024-07-14 04:02:24.790377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.934 [2024-07-14 04:02:24.790554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.934 [2024-07-14 04:02:24.790580] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.934 qpair failed and we were unable to recover it. 00:30:05.934 [2024-07-14 04:02:24.790765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.934 [2024-07-14 04:02:24.790934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.934 [2024-07-14 04:02:24.790960] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.934 qpair failed and we were unable to recover it. 00:30:05.934 [2024-07-14 04:02:24.791120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.934 [2024-07-14 04:02:24.791292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.934 [2024-07-14 04:02:24.791316] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.934 qpair failed and we were unable to recover it. 00:30:05.934 [2024-07-14 04:02:24.791472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.934 [2024-07-14 04:02:24.791625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.934 [2024-07-14 04:02:24.791650] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.934 qpair failed and we were unable to recover it. 00:30:05.934 [2024-07-14 04:02:24.791825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.934 [2024-07-14 04:02:24.791998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.934 [2024-07-14 04:02:24.792024] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.934 qpair failed and we were unable to recover it. 00:30:05.934 [2024-07-14 04:02:24.792230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.934 [2024-07-14 04:02:24.792423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.934 [2024-07-14 04:02:24.792447] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.934 qpair failed and we were unable to recover it. 
00:30:05.934 [2024-07-14 04:02:24.792620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.934 [2024-07-14 04:02:24.792775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.934 [2024-07-14 04:02:24.792800] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.934 qpair failed and we were unable to recover it. 00:30:05.934 [2024-07-14 04:02:24.792948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.934 [2024-07-14 04:02:24.793093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.934 [2024-07-14 04:02:24.793118] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.934 qpair failed and we were unable to recover it. 00:30:05.934 [2024-07-14 04:02:24.793298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.934 [2024-07-14 04:02:24.793475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.934 [2024-07-14 04:02:24.793499] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.934 qpair failed and we were unable to recover it. 00:30:05.934 [2024-07-14 04:02:24.793679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.934 [2024-07-14 04:02:24.793828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.934 [2024-07-14 04:02:24.793853] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.934 qpair failed and we were unable to recover it. 00:30:05.934 [2024-07-14 04:02:24.794052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.934 [2024-07-14 04:02:24.794205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.934 [2024-07-14 04:02:24.794229] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.934 qpair failed and we were unable to recover it. 00:30:05.934 [2024-07-14 04:02:24.794402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.934 [2024-07-14 04:02:24.794574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.934 [2024-07-14 04:02:24.794602] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.934 qpair failed and we were unable to recover it. 00:30:05.934 [2024-07-14 04:02:24.794748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.934 [2024-07-14 04:02:24.794929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.934 [2024-07-14 04:02:24.794954] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.934 qpair failed and we were unable to recover it. 
00:30:05.934 [2024-07-14 04:02:24.795131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.934 [2024-07-14 04:02:24.795309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.934 [2024-07-14 04:02:24.795333] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.934 qpair failed and we were unable to recover it. 00:30:05.934 [2024-07-14 04:02:24.795476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.934 [2024-07-14 04:02:24.795619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.934 [2024-07-14 04:02:24.795644] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.934 qpair failed and we were unable to recover it. 00:30:05.934 [2024-07-14 04:02:24.795783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.934 [2024-07-14 04:02:24.795960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.934 [2024-07-14 04:02:24.795986] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.934 qpair failed and we were unable to recover it. 00:30:05.934 [2024-07-14 04:02:24.796130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.934 [2024-07-14 04:02:24.796282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.934 [2024-07-14 04:02:24.796306] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.934 qpair failed and we were unable to recover it. 00:30:05.934 [2024-07-14 04:02:24.796499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.934 [2024-07-14 04:02:24.796648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.934 [2024-07-14 04:02:24.796673] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.934 qpair failed and we were unable to recover it. 00:30:05.934 [2024-07-14 04:02:24.796842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.934 [2024-07-14 04:02:24.797024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.934 [2024-07-14 04:02:24.797050] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.934 qpair failed and we were unable to recover it. 00:30:05.934 [2024-07-14 04:02:24.797192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.934 [2024-07-14 04:02:24.797357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.934 [2024-07-14 04:02:24.797382] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.934 qpair failed and we were unable to recover it. 
00:30:05.934 [2024-07-14 04:02:24.797534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.934 [2024-07-14 04:02:24.797681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.934 [2024-07-14 04:02:24.797711] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.934 qpair failed and we were unable to recover it. 00:30:05.934 [2024-07-14 04:02:24.797889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.934 [2024-07-14 04:02:24.798039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.934 [2024-07-14 04:02:24.798065] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.934 qpair failed and we were unable to recover it. 00:30:05.934 [2024-07-14 04:02:24.798218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.934 [2024-07-14 04:02:24.798392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.934 [2024-07-14 04:02:24.798418] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.934 qpair failed and we were unable to recover it. 00:30:05.934 [2024-07-14 04:02:24.798596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.934 [2024-07-14 04:02:24.798742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.934 [2024-07-14 04:02:24.798766] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.934 qpair failed and we were unable to recover it. 00:30:05.934 [2024-07-14 04:02:24.798944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.934 [2024-07-14 04:02:24.799119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.934 [2024-07-14 04:02:24.799143] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.934 qpair failed and we were unable to recover it. 00:30:05.935 [2024-07-14 04:02:24.799337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.935 [2024-07-14 04:02:24.799509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.935 [2024-07-14 04:02:24.799534] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.935 qpair failed and we were unable to recover it. 00:30:05.935 [2024-07-14 04:02:24.799713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.935 [2024-07-14 04:02:24.799877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.935 [2024-07-14 04:02:24.799902] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.935 qpair failed and we were unable to recover it. 
00:30:05.935 [2024-07-14 04:02:24.800046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.935 [2024-07-14 04:02:24.800195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.935 [2024-07-14 04:02:24.800220] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.935 qpair failed and we were unable to recover it. 00:30:05.935 [2024-07-14 04:02:24.800367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.935 [2024-07-14 04:02:24.800538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.935 [2024-07-14 04:02:24.800563] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.935 qpair failed and we were unable to recover it. 00:30:05.935 [2024-07-14 04:02:24.800708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.935 [2024-07-14 04:02:24.800848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.935 [2024-07-14 04:02:24.800880] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.935 qpair failed and we were unable to recover it. 00:30:05.935 [2024-07-14 04:02:24.801033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.935 [2024-07-14 04:02:24.801205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.935 [2024-07-14 04:02:24.801234] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.935 qpair failed and we were unable to recover it. 00:30:05.935 [2024-07-14 04:02:24.801425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.935 [2024-07-14 04:02:24.801573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.935 [2024-07-14 04:02:24.801599] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.935 qpair failed and we were unable to recover it. 00:30:05.935 [2024-07-14 04:02:24.801741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.935 [2024-07-14 04:02:24.801896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.935 [2024-07-14 04:02:24.801922] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.935 qpair failed and we were unable to recover it. 00:30:05.935 [2024-07-14 04:02:24.802075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.935 [2024-07-14 04:02:24.802219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.935 [2024-07-14 04:02:24.802243] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.935 qpair failed and we were unable to recover it. 
00:30:05.935 [2024-07-14 04:02:24.802387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.935 [2024-07-14 04:02:24.802580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.935 [2024-07-14 04:02:24.802604] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.935 qpair failed and we were unable to recover it. 00:30:05.935 [2024-07-14 04:02:24.802750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.935 [2024-07-14 04:02:24.802898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.935 [2024-07-14 04:02:24.802925] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.935 qpair failed and we were unable to recover it. 00:30:05.935 [2024-07-14 04:02:24.803082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.935 [2024-07-14 04:02:24.803232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.935 [2024-07-14 04:02:24.803257] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.935 qpair failed and we were unable to recover it. 00:30:05.935 [2024-07-14 04:02:24.803426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.935 [2024-07-14 04:02:24.803604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.935 [2024-07-14 04:02:24.803629] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.935 qpair failed and we were unable to recover it. 00:30:05.935 [2024-07-14 04:02:24.803770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.935 [2024-07-14 04:02:24.803916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.935 [2024-07-14 04:02:24.803941] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.935 qpair failed and we were unable to recover it. 00:30:05.935 [2024-07-14 04:02:24.804146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.935 [2024-07-14 04:02:24.804291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.935 [2024-07-14 04:02:24.804315] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.935 qpair failed and we were unable to recover it. 00:30:05.935 [2024-07-14 04:02:24.804468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.935 [2024-07-14 04:02:24.804614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.935 [2024-07-14 04:02:24.804649] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.935 qpair failed and we were unable to recover it. 
00:30:05.935 [2024-07-14 04:02:24.804800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.935 [2024-07-14 04:02:24.804969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.935 04:02:24 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:30:05.935 [2024-07-14 04:02:24.804995] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420
00:30:05.935 qpair failed and we were unable to recover it.
00:30:05.935 [2024-07-14 04:02:24.805152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.935 04:02:24 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:30:05.935 [2024-07-14 04:02:24.805287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.935 [2024-07-14 04:02:24.805311] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420
00:30:05.935 qpair failed and we were unable to recover it.
00:30:05.935 04:02:24 -- common/autotest_common.sh@551 -- # xtrace_disable
00:30:05.935 [2024-07-14 04:02:24.805477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.935 04:02:24 -- common/autotest_common.sh@10 -- # set +x
00:30:05.935 [2024-07-14 04:02:24.805623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.935 [2024-07-14 04:02:24.805647] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420
00:30:05.935 qpair failed and we were unable to recover it.
00:30:05.935 [2024-07-14 04:02:24.805815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.935 [2024-07-14 04:02:24.805997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.935 [2024-07-14 04:02:24.806023] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420
00:30:05.935 qpair failed and we were unable to recover it.
00:30:05.935 [2024-07-14 04:02:24.806169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.935 [2024-07-14 04:02:24.806332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.935 [2024-07-14 04:02:24.806357] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420
00:30:05.935 qpair failed and we were unable to recover it.
00:30:05.935 [2024-07-14 04:02:24.806501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.935 [2024-07-14 04:02:24.806679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.935 [2024-07-14 04:02:24.806703] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420
00:30:05.935 qpair failed and we were unable to recover it.
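The host/target_disconnect.sh@19 trace interleaved above shows the test proceeding to create the backing device while the reconnect loop is still logging failures. rpc_cmd is the autotest helper that forwards its arguments to SPDK's scripts/rpc.py, so the step corresponds roughly to the following standalone invocation (a sketch against a default rpc.sock, not captured from this job):

  $ ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0   # 64 MiB malloc bdev, 512-byte blocks, named Malloc0
  Malloc0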
00:30:05.935 [2024-07-14 04:02:24.806863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.935 [2024-07-14 04:02:24.807039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.935 [2024-07-14 04:02:24.807064] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.935 qpair failed and we were unable to recover it. 00:30:05.935 [2024-07-14 04:02:24.807241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.935 [2024-07-14 04:02:24.807402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.935 [2024-07-14 04:02:24.807427] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.935 qpair failed and we were unable to recover it. 00:30:05.935 [2024-07-14 04:02:24.807595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.935 [2024-07-14 04:02:24.807768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.935 [2024-07-14 04:02:24.807791] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.935 qpair failed and we were unable to recover it. 00:30:05.935 [2024-07-14 04:02:24.807970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.936 [2024-07-14 04:02:24.808136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.936 [2024-07-14 04:02:24.808161] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.936 qpair failed and we were unable to recover it. 00:30:05.936 [2024-07-14 04:02:24.808304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.936 [2024-07-14 04:02:24.808469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.936 [2024-07-14 04:02:24.808494] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.936 qpair failed and we were unable to recover it. 00:30:05.936 [2024-07-14 04:02:24.808673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.936 [2024-07-14 04:02:24.808848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.936 [2024-07-14 04:02:24.808878] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.936 qpair failed and we were unable to recover it. 00:30:05.936 [2024-07-14 04:02:24.809057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.936 [2024-07-14 04:02:24.809235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.936 [2024-07-14 04:02:24.809259] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.936 qpair failed and we were unable to recover it. 
00:30:05.936 [2024-07-14 04:02:24.809441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.936 [2024-07-14 04:02:24.809585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.936 [2024-07-14 04:02:24.809609] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.936 qpair failed and we were unable to recover it. 00:30:05.936 [2024-07-14 04:02:24.809761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.936 [2024-07-14 04:02:24.809917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.936 [2024-07-14 04:02:24.809943] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.936 qpair failed and we were unable to recover it. 00:30:05.936 [2024-07-14 04:02:24.810116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.936 [2024-07-14 04:02:24.810263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.936 [2024-07-14 04:02:24.810288] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.936 qpair failed and we were unable to recover it. 00:30:05.936 [2024-07-14 04:02:24.810467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.936 [2024-07-14 04:02:24.810766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.936 [2024-07-14 04:02:24.810790] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.936 qpair failed and we were unable to recover it. 00:30:05.936 [2024-07-14 04:02:24.810957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.936 [2024-07-14 04:02:24.811094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.936 [2024-07-14 04:02:24.811118] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.936 qpair failed and we were unable to recover it. 00:30:05.936 [2024-07-14 04:02:24.811275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.936 [2024-07-14 04:02:24.811451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.936 [2024-07-14 04:02:24.811474] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.936 qpair failed and we were unable to recover it. 00:30:05.936 [2024-07-14 04:02:24.811639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.936 [2024-07-14 04:02:24.811811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.936 [2024-07-14 04:02:24.811835] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.936 qpair failed and we were unable to recover it. 
00:30:05.936 [2024-07-14 04:02:24.811991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.936 [2024-07-14 04:02:24.812146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.936 [2024-07-14 04:02:24.812170] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.936 qpair failed and we were unable to recover it. 00:30:05.936 [2024-07-14 04:02:24.812347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.936 [2024-07-14 04:02:24.812491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.936 [2024-07-14 04:02:24.812514] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.936 qpair failed and we were unable to recover it. 00:30:05.936 [2024-07-14 04:02:24.812671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.936 [2024-07-14 04:02:24.812828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.936 [2024-07-14 04:02:24.812853] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.936 qpair failed and we were unable to recover it. 00:30:05.936 [2024-07-14 04:02:24.813021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.936 [2024-07-14 04:02:24.813168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.936 [2024-07-14 04:02:24.813192] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.936 qpair failed and we were unable to recover it. 00:30:05.936 [2024-07-14 04:02:24.813358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.936 [2024-07-14 04:02:24.813505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.936 [2024-07-14 04:02:24.813531] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.936 qpair failed and we were unable to recover it. 00:30:05.936 [2024-07-14 04:02:24.813676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.936 [2024-07-14 04:02:24.813848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.936 [2024-07-14 04:02:24.813881] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.936 qpair failed and we were unable to recover it. 00:30:05.936 [2024-07-14 04:02:24.814069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.936 [2024-07-14 04:02:24.814216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.936 [2024-07-14 04:02:24.814240] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.936 qpair failed and we were unable to recover it. 
00:30:05.936 [2024-07-14 04:02:24.814413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.936 [2024-07-14 04:02:24.814557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.936 [2024-07-14 04:02:24.814582] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.936 qpair failed and we were unable to recover it. 00:30:05.936 [2024-07-14 04:02:24.814739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.936 [2024-07-14 04:02:24.814899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.936 [2024-07-14 04:02:24.814925] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.936 qpair failed and we were unable to recover it. 00:30:05.936 [2024-07-14 04:02:24.815085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.936 [2024-07-14 04:02:24.815269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.936 [2024-07-14 04:02:24.815294] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.936 qpair failed and we were unable to recover it. 00:30:05.936 [2024-07-14 04:02:24.815443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.936 [2024-07-14 04:02:24.815618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.936 [2024-07-14 04:02:24.815642] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.936 qpair failed and we were unable to recover it. 00:30:05.936 [2024-07-14 04:02:24.815823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.936 [2024-07-14 04:02:24.815975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.936 [2024-07-14 04:02:24.816002] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.936 qpair failed and we were unable to recover it. 00:30:05.936 [2024-07-14 04:02:24.816182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.936 [2024-07-14 04:02:24.816336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.936 [2024-07-14 04:02:24.816362] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.936 qpair failed and we were unable to recover it. 00:30:05.936 [2024-07-14 04:02:24.816522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.936 [2024-07-14 04:02:24.816705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.936 [2024-07-14 04:02:24.816730] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.936 qpair failed and we were unable to recover it. 
00:30:05.936 [2024-07-14 04:02:24.816884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.936 [2024-07-14 04:02:24.817044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.936 [2024-07-14 04:02:24.817068] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.936 qpair failed and we were unable to recover it. 00:30:05.936 [2024-07-14 04:02:24.817262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.936 [2024-07-14 04:02:24.817413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.936 [2024-07-14 04:02:24.817439] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.936 qpair failed and we were unable to recover it. 00:30:05.936 [2024-07-14 04:02:24.817616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.936 [2024-07-14 04:02:24.817774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.936 [2024-07-14 04:02:24.817798] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.936 qpair failed and we were unable to recover it. 00:30:05.936 [2024-07-14 04:02:24.817986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.936 [2024-07-14 04:02:24.818166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.936 [2024-07-14 04:02:24.818190] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.936 qpair failed and we were unable to recover it. 00:30:05.936 [2024-07-14 04:02:24.818400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.936 [2024-07-14 04:02:24.818545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.936 [2024-07-14 04:02:24.818569] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.936 qpair failed and we were unable to recover it. 00:30:05.937 [2024-07-14 04:02:24.818721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.937 [2024-07-14 04:02:24.818897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.937 [2024-07-14 04:02:24.818922] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.937 qpair failed and we were unable to recover it. 00:30:05.937 [2024-07-14 04:02:24.819075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.937 [2024-07-14 04:02:24.819234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.937 [2024-07-14 04:02:24.819259] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.937 qpair failed and we were unable to recover it. 
00:30:05.937 [2024-07-14 04:02:24.819424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.937 [2024-07-14 04:02:24.819577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.937 [2024-07-14 04:02:24.819603] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.937 qpair failed and we were unable to recover it. 00:30:05.937 [2024-07-14 04:02:24.819779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.937 [2024-07-14 04:02:24.819927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.937 [2024-07-14 04:02:24.819952] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.937 qpair failed and we were unable to recover it. 00:30:05.937 [2024-07-14 04:02:24.820108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.937 [2024-07-14 04:02:24.820263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.937 [2024-07-14 04:02:24.820290] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.937 qpair failed and we were unable to recover it. 00:30:05.937 [2024-07-14 04:02:24.820443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.937 [2024-07-14 04:02:24.820627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.937 [2024-07-14 04:02:24.820651] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.937 qpair failed and we were unable to recover it. 00:30:05.937 [2024-07-14 04:02:24.820808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.937 [2024-07-14 04:02:24.820950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.937 [2024-07-14 04:02:24.820976] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.937 qpair failed and we were unable to recover it. 00:30:05.937 [2024-07-14 04:02:24.821138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.937 [2024-07-14 04:02:24.821279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.937 [2024-07-14 04:02:24.821303] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.937 qpair failed and we were unable to recover it. 00:30:05.937 [2024-07-14 04:02:24.821453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.937 [2024-07-14 04:02:24.821634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.937 [2024-07-14 04:02:24.821665] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.937 qpair failed and we were unable to recover it. 
00:30:05.937 [2024-07-14 04:02:24.821876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.937 [2024-07-14 04:02:24.822033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.937 [2024-07-14 04:02:24.822057] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.937 qpair failed and we were unable to recover it. 00:30:05.937 [2024-07-14 04:02:24.822203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.937 [2024-07-14 04:02:24.822359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.937 [2024-07-14 04:02:24.822386] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.937 qpair failed and we were unable to recover it. 00:30:05.937 [2024-07-14 04:02:24.822545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.937 [2024-07-14 04:02:24.822692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.937 [2024-07-14 04:02:24.822716] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.937 qpair failed and we were unable to recover it. 00:30:05.937 [2024-07-14 04:02:24.822862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.937 [2024-07-14 04:02:24.823056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.937 [2024-07-14 04:02:24.823083] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.937 qpair failed and we were unable to recover it. 00:30:05.937 [2024-07-14 04:02:24.823240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.937 [2024-07-14 04:02:24.823423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.937 [2024-07-14 04:02:24.823447] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.937 qpair failed and we were unable to recover it. 00:30:05.937 [2024-07-14 04:02:24.823587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.937 [2024-07-14 04:02:24.823729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.937 [2024-07-14 04:02:24.823754] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.937 qpair failed and we were unable to recover it. 00:30:05.937 [2024-07-14 04:02:24.823926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.937 [2024-07-14 04:02:24.824077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.937 [2024-07-14 04:02:24.824104] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.937 qpair failed and we were unable to recover it. 
00:30:05.937 [2024-07-14 04:02:24.824284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.937 [2024-07-14 04:02:24.824445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.937 [2024-07-14 04:02:24.824469] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.937 qpair failed and we were unable to recover it. 00:30:05.937 [2024-07-14 04:02:24.824657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.937 [2024-07-14 04:02:24.824798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.937 [2024-07-14 04:02:24.824829] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.937 qpair failed and we were unable to recover it. 00:30:05.937 [2024-07-14 04:02:24.825002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.937 [2024-07-14 04:02:24.825225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.937 [2024-07-14 04:02:24.825250] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.937 qpair failed and we were unable to recover it. 00:30:05.937 [2024-07-14 04:02:24.825434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.937 [2024-07-14 04:02:24.825612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.937 [2024-07-14 04:02:24.825637] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.937 qpair failed and we were unable to recover it. 00:30:05.937 [2024-07-14 04:02:24.825782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.937 [2024-07-14 04:02:24.825932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.937 [2024-07-14 04:02:24.825957] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.937 qpair failed and we were unable to recover it. 00:30:05.937 [2024-07-14 04:02:24.826132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.937 [2024-07-14 04:02:24.826280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.937 [2024-07-14 04:02:24.826304] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.937 qpair failed and we were unable to recover it. 00:30:05.937 [2024-07-14 04:02:24.826494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.937 [2024-07-14 04:02:24.826634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.937 [2024-07-14 04:02:24.826659] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.937 qpair failed and we were unable to recover it. 
00:30:05.937 [2024-07-14 04:02:24.826837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.937 [2024-07-14 04:02:24.827047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.937 [2024-07-14 04:02:24.827071] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.937 qpair failed and we were unable to recover it. 00:30:05.937 [2024-07-14 04:02:24.827219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.937 [2024-07-14 04:02:24.827364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.937 [2024-07-14 04:02:24.827388] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.937 qpair failed and we were unable to recover it. 00:30:05.937 [2024-07-14 04:02:24.827563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.937 [2024-07-14 04:02:24.827702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.937 [2024-07-14 04:02:24.827727] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.937 qpair failed and we were unable to recover it. 00:30:05.937 [2024-07-14 04:02:24.827902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.937 [2024-07-14 04:02:24.828125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.937 [2024-07-14 04:02:24.828154] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.937 qpair failed and we were unable to recover it. 00:30:05.937 [2024-07-14 04:02:24.828296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.937 [2024-07-14 04:02:24.828449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.937 [2024-07-14 04:02:24.828473] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.937 qpair failed and we were unable to recover it. 00:30:05.937 Malloc0 00:30:05.937 [2024-07-14 04:02:24.828649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.937 [2024-07-14 04:02:24.828805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.937 [2024-07-14 04:02:24.828828] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.937 qpair failed and we were unable to recover it. 00:30:05.937 04:02:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:05.937 [2024-07-14 04:02:24.828977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.937 04:02:24 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:30:05.937 [2024-07-14 04:02:24.829131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.938 [2024-07-14 04:02:24.829155] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.938 qpair failed and we were unable to recover it. 
00:30:05.938 04:02:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:05.938 04:02:24 -- common/autotest_common.sh@10 -- # set +x 00:30:05.938 [2024-07-14 04:02:24.829320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.938 [2024-07-14 04:02:24.829496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.938 [2024-07-14 04:02:24.829520] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.938 qpair failed and we were unable to recover it. 00:30:05.938 [2024-07-14 04:02:24.829669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.938 [2024-07-14 04:02:24.829844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.938 [2024-07-14 04:02:24.829874] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.938 qpair failed and we were unable to recover it. 00:30:05.938 [2024-07-14 04:02:24.830043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.938 [2024-07-14 04:02:24.830220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.938 [2024-07-14 04:02:24.830243] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.938 qpair failed and we were unable to recover it. 00:30:05.938 [2024-07-14 04:02:24.830392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.938 [2024-07-14 04:02:24.830548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.938 [2024-07-14 04:02:24.830573] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:05.938 qpair failed and we were unable to recover it. 00:30:06.201 [2024-07-14 04:02:24.830729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.201 [2024-07-14 04:02:24.830876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.201 [2024-07-14 04:02:24.830903] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:06.201 qpair failed and we were unable to recover it. 00:30:06.201 [2024-07-14 04:02:24.831056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.201 [2024-07-14 04:02:24.831262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.201 [2024-07-14 04:02:24.831286] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:06.201 qpair failed and we were unable to recover it. 00:30:06.201 [2024-07-14 04:02:24.831429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.201 [2024-07-14 04:02:24.831611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.201 [2024-07-14 04:02:24.831635] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:06.201 qpair failed and we were unable to recover it. 
00:30:06.201 [2024-07-14 04:02:24.831781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.201 [2024-07-14 04:02:24.831926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.201 [2024-07-14 04:02:24.831951] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:06.201 qpair failed and we were unable to recover it. 00:30:06.201 [2024-07-14 04:02:24.832095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.201 [2024-07-14 04:02:24.832181] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:06.201 [2024-07-14 04:02:24.832241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.201 [2024-07-14 04:02:24.832265] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:06.201 qpair failed and we were unable to recover it. 00:30:06.201 [2024-07-14 04:02:24.832464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.201 [2024-07-14 04:02:24.832640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.201 [2024-07-14 04:02:24.832663] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:06.201 qpair failed and we were unable to recover it. 00:30:06.201 [2024-07-14 04:02:24.832818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.201 [2024-07-14 04:02:24.832972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.201 [2024-07-14 04:02:24.832997] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:06.201 qpair failed and we were unable to recover it. 00:30:06.201 [2024-07-14 04:02:24.833148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.201 [2024-07-14 04:02:24.833319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.201 [2024-07-14 04:02:24.833344] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:06.201 qpair failed and we were unable to recover it. 00:30:06.201 [2024-07-14 04:02:24.833521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.201 [2024-07-14 04:02:24.833666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.201 [2024-07-14 04:02:24.833690] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:06.201 qpair failed and we were unable to recover it. 00:30:06.201 [2024-07-14 04:02:24.833855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.201 [2024-07-14 04:02:24.834014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.201 [2024-07-14 04:02:24.834037] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:06.201 qpair failed and we were unable to recover it. 
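The "*** TCP Transport Init ***" notice from tcp.c:659 is the target acknowledging the nvmf_create_transport call traced at target_disconnect.sh line 21: the NVMe-oF target now has a TCP transport registered and can accept subsystems and listeners. A minimal sketch of the same step issued by hand, under the same assumptions as above (the test also passes an extra -o option whose effect is not covered here):
  # register the TCP transport with the running nvmf target
  scripts/rpc.py nvmf_create_transport -t tcp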
00:30:06.201 [2024-07-14 04:02:24.834196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.201 [2024-07-14 04:02:24.834372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.201 [2024-07-14 04:02:24.834396] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:06.201 qpair failed and we were unable to recover it. 00:30:06.201 [2024-07-14 04:02:24.834572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.201 [2024-07-14 04:02:24.834720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.201 [2024-07-14 04:02:24.834745] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:06.201 qpair failed and we were unable to recover it. 00:30:06.201 [2024-07-14 04:02:24.834950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.201 [2024-07-14 04:02:24.835130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.201 [2024-07-14 04:02:24.835154] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:06.201 qpair failed and we were unable to recover it. 00:30:06.201 [2024-07-14 04:02:24.835329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.201 [2024-07-14 04:02:24.835496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.201 [2024-07-14 04:02:24.835520] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:06.201 qpair failed and we were unable to recover it. 00:30:06.201 [2024-07-14 04:02:24.835723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.201 [2024-07-14 04:02:24.835878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.201 [2024-07-14 04:02:24.835904] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:06.201 qpair failed and we were unable to recover it. 00:30:06.201 [2024-07-14 04:02:24.836052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.201 [2024-07-14 04:02:24.836200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.201 [2024-07-14 04:02:24.836229] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:06.201 qpair failed and we were unable to recover it. 00:30:06.201 [2024-07-14 04:02:24.836409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.201 [2024-07-14 04:02:24.836556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.201 [2024-07-14 04:02:24.836580] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:06.201 qpair failed and we were unable to recover it. 
00:30:06.202 [2024-07-14 04:02:24.836749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.202 [2024-07-14 04:02:24.836898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.202 [2024-07-14 04:02:24.836924] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:06.202 qpair failed and we were unable to recover it. 00:30:06.202 [2024-07-14 04:02:24.837076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.202 [2024-07-14 04:02:24.837236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.202 [2024-07-14 04:02:24.837260] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:06.202 qpair failed and we were unable to recover it. 00:30:06.202 [2024-07-14 04:02:24.837434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.202 [2024-07-14 04:02:24.837580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.202 [2024-07-14 04:02:24.837603] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:06.202 qpair failed and we were unable to recover it. 00:30:06.202 [2024-07-14 04:02:24.837750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.202 [2024-07-14 04:02:24.837903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.202 [2024-07-14 04:02:24.837928] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:06.202 qpair failed and we were unable to recover it. 00:30:06.202 [2024-07-14 04:02:24.838076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.202 [2024-07-14 04:02:24.838251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.202 [2024-07-14 04:02:24.838276] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:06.202 qpair failed and we were unable to recover it. 00:30:06.202 [2024-07-14 04:02:24.838449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.202 [2024-07-14 04:02:24.838616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.202 [2024-07-14 04:02:24.838640] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:06.202 qpair failed and we were unable to recover it. 00:30:06.202 [2024-07-14 04:02:24.838833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.202 [2024-07-14 04:02:24.838987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.202 [2024-07-14 04:02:24.839012] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:06.202 qpair failed and we were unable to recover it. 
00:30:06.202 [2024-07-14 04:02:24.839180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.202 [2024-07-14 04:02:24.839384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.202 [2024-07-14 04:02:24.839408] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:06.202 qpair failed and we were unable to recover it. 00:30:06.202 [2024-07-14 04:02:24.839557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.202 [2024-07-14 04:02:24.839735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.202 [2024-07-14 04:02:24.839763] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:06.202 qpair failed and we were unable to recover it. 00:30:06.202 [2024-07-14 04:02:24.839910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.202 [2024-07-14 04:02:24.840063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.202 [2024-07-14 04:02:24.840087] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:06.202 qpair failed and we were unable to recover it. 00:30:06.202 [2024-07-14 04:02:24.840240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.202 [2024-07-14 04:02:24.840406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.202 04:02:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:06.202 [2024-07-14 04:02:24.840430] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:06.202 qpair failed and we were unable to recover it. 00:30:06.202 04:02:24 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:06.202 [2024-07-14 04:02:24.840603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.202 04:02:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:06.202 [2024-07-14 04:02:24.840753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.202 [2024-07-14 04:02:24.840778] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:06.202 qpair failed and we were unable to recover it. 00:30:06.202 04:02:24 -- common/autotest_common.sh@10 -- # set +x 00:30:06.202 [2024-07-14 04:02:24.840924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.202 [2024-07-14 04:02:24.841071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.202 [2024-07-14 04:02:24.841095] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:06.202 qpair failed and we were unable to recover it. 
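Next, target_disconnect.sh line 22 creates the NVMe-oF subsystem nqn.2016-06.io.spdk:cnode1, with -a allowing any host NQN to connect and -s setting its serial number. Standalone sketch under the same assumptions:
  # create the subsystem, allow any host, set the serial number
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001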
00:30:06.202 [2024-07-14 04:02:24.841257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.202 [2024-07-14 04:02:24.841398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.202 [2024-07-14 04:02:24.841421] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:06.202 qpair failed and we were unable to recover it. 00:30:06.202 [2024-07-14 04:02:24.841584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.202 [2024-07-14 04:02:24.841727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.202 [2024-07-14 04:02:24.841751] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:06.202 qpair failed and we were unable to recover it. 00:30:06.202 [2024-07-14 04:02:24.841928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.202 [2024-07-14 04:02:24.842073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.202 [2024-07-14 04:02:24.842098] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:06.202 qpair failed and we were unable to recover it. 00:30:06.202 [2024-07-14 04:02:24.842273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.202 [2024-07-14 04:02:24.842427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.202 [2024-07-14 04:02:24.842453] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:06.202 qpair failed and we were unable to recover it. 00:30:06.202 [2024-07-14 04:02:24.842626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.202 [2024-07-14 04:02:24.842771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.202 [2024-07-14 04:02:24.842795] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:06.202 qpair failed and we were unable to recover it. 00:30:06.202 [2024-07-14 04:02:24.842958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.202 [2024-07-14 04:02:24.843105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.202 [2024-07-14 04:02:24.843130] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:06.202 qpair failed and we were unable to recover it. 00:30:06.202 [2024-07-14 04:02:24.843310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.202 [2024-07-14 04:02:24.843455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.202 [2024-07-14 04:02:24.843480] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:06.202 qpair failed and we were unable to recover it. 
00:30:06.203 [2024-07-14 04:02:24.843627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.203 [2024-07-14 04:02:24.843786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.203 [2024-07-14 04:02:24.843810] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:06.203 qpair failed and we were unable to recover it. 00:30:06.203 [2024-07-14 04:02:24.843981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.203 [2024-07-14 04:02:24.844164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.203 [2024-07-14 04:02:24.844188] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:06.203 qpair failed and we were unable to recover it. 00:30:06.203 [2024-07-14 04:02:24.844335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.203 [2024-07-14 04:02:24.844508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.203 [2024-07-14 04:02:24.844534] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:06.203 qpair failed and we were unable to recover it. 00:30:06.203 [2024-07-14 04:02:24.844678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.203 [2024-07-14 04:02:24.844845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.203 [2024-07-14 04:02:24.844875] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:06.203 qpair failed and we were unable to recover it. 00:30:06.203 [2024-07-14 04:02:24.845068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.203 [2024-07-14 04:02:24.845256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.203 [2024-07-14 04:02:24.845281] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:06.203 qpair failed and we were unable to recover it. 00:30:06.203 [2024-07-14 04:02:24.845426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.203 [2024-07-14 04:02:24.845594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.203 [2024-07-14 04:02:24.845618] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:06.203 qpair failed and we were unable to recover it. 00:30:06.203 [2024-07-14 04:02:24.845763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.203 [2024-07-14 04:02:24.845932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.203 [2024-07-14 04:02:24.845957] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:06.203 qpair failed and we were unable to recover it. 
00:30:06.203 [2024-07-14 04:02:24.846108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.203 [2024-07-14 04:02:24.846298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.203 [2024-07-14 04:02:24.846324] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:06.203 qpair failed and we were unable to recover it. 00:30:06.203 [2024-07-14 04:02:24.846468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.203 [2024-07-14 04:02:24.846641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.203 [2024-07-14 04:02:24.846665] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:06.203 qpair failed and we were unable to recover it. 00:30:06.203 [2024-07-14 04:02:24.846813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.203 [2024-07-14 04:02:24.846990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.203 [2024-07-14 04:02:24.847015] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:06.203 qpair failed and we were unable to recover it. 00:30:06.203 [2024-07-14 04:02:24.847159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.203 [2024-07-14 04:02:24.847334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.203 [2024-07-14 04:02:24.847360] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:06.203 qpair failed and we were unable to recover it. 00:30:06.203 [2024-07-14 04:02:24.847544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.203 [2024-07-14 04:02:24.847686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.203 [2024-07-14 04:02:24.847710] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:06.203 qpair failed and we were unable to recover it. 00:30:06.203 [2024-07-14 04:02:24.847881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.203 [2024-07-14 04:02:24.848026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.203 [2024-07-14 04:02:24.848050] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:06.203 qpair failed and we were unable to recover it. 00:30:06.203 [2024-07-14 04:02:24.848237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.203 [2024-07-14 04:02:24.848382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.203 [2024-07-14 04:02:24.848407] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:06.203 qpair failed and we were unable to recover it. 
00:30:06.203 04:02:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:06.203 [2024-07-14 04:02:24.848565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.203 04:02:24 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:06.203 [2024-07-14 04:02:24.848743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.203 [2024-07-14 04:02:24.848767] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:06.203 04:02:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:06.203 qpair failed and we were unable to recover it. 00:30:06.203 04:02:24 -- common/autotest_common.sh@10 -- # set +x 00:30:06.203 [2024-07-14 04:02:24.848933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.203 [2024-07-14 04:02:24.849084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.203 [2024-07-14 04:02:24.849108] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:06.203 qpair failed and we were unable to recover it. 00:30:06.203 [2024-07-14 04:02:24.849275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.203 [2024-07-14 04:02:24.849424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.203 [2024-07-14 04:02:24.849449] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:06.203 qpair failed and we were unable to recover it. 00:30:06.203 [2024-07-14 04:02:24.849624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.203 [2024-07-14 04:02:24.849765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.203 [2024-07-14 04:02:24.849790] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:06.203 qpair failed and we were unable to recover it. 00:30:06.203 [2024-07-14 04:02:24.849941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.203 [2024-07-14 04:02:24.850090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.203 [2024-07-14 04:02:24.850114] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:06.203 qpair failed and we were unable to recover it. 00:30:06.203 [2024-07-14 04:02:24.850312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.203 [2024-07-14 04:02:24.850476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.203 [2024-07-14 04:02:24.850501] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:06.203 qpair failed and we were unable to recover it. 00:30:06.203 [2024-07-14 04:02:24.850682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.203 [2024-07-14 04:02:24.850861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.203 [2024-07-14 04:02:24.850894] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:06.203 qpair failed and we were unable to recover it. 
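With the subsystem in place, target_disconnect.sh line 24 attaches the Malloc0 bdev created earlier as a namespace of cnode1, so a connected host will see it as an NVMe namespace. Equivalent standalone call (same assumptions as above):
  # expose Malloc0 as a namespace of the subsystem
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0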
00:30:06.203 [2024-07-14 04:02:24.851060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.203 [2024-07-14 04:02:24.851231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.204 [2024-07-14 04:02:24.851256] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:06.204 qpair failed and we were unable to recover it. 00:30:06.204 [2024-07-14 04:02:24.851435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.204 [2024-07-14 04:02:24.851585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.204 [2024-07-14 04:02:24.851610] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:06.204 qpair failed and we were unable to recover it. 00:30:06.204 [2024-07-14 04:02:24.851807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.204 [2024-07-14 04:02:24.851996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.204 [2024-07-14 04:02:24.852021] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:06.204 qpair failed and we were unable to recover it. 00:30:06.204 [2024-07-14 04:02:24.852165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.204 [2024-07-14 04:02:24.852312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.204 [2024-07-14 04:02:24.852336] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:06.204 qpair failed and we were unable to recover it. 00:30:06.204 [2024-07-14 04:02:24.852484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.204 [2024-07-14 04:02:24.852649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.204 [2024-07-14 04:02:24.852674] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:06.204 qpair failed and we were unable to recover it. 00:30:06.204 [2024-07-14 04:02:24.852842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.204 [2024-07-14 04:02:24.853038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.204 [2024-07-14 04:02:24.853063] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:06.204 qpair failed and we were unable to recover it. 00:30:06.204 [2024-07-14 04:02:24.853216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.204 [2024-07-14 04:02:24.853394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.204 [2024-07-14 04:02:24.853418] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:06.204 qpair failed and we were unable to recover it. 
00:30:06.204 [2024-07-14 04:02:24.853568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.204 [2024-07-14 04:02:24.853735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.204 [2024-07-14 04:02:24.853759] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:06.204 qpair failed and we were unable to recover it. 00:30:06.204 [2024-07-14 04:02:24.853912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.204 [2024-07-14 04:02:24.854085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.204 [2024-07-14 04:02:24.854109] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:06.204 qpair failed and we were unable to recover it. 00:30:06.204 [2024-07-14 04:02:24.854256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.204 [2024-07-14 04:02:24.854425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.204 [2024-07-14 04:02:24.854450] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:06.204 qpair failed and we were unable to recover it. 00:30:06.204 [2024-07-14 04:02:24.854598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.204 [2024-07-14 04:02:24.854751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.204 [2024-07-14 04:02:24.854775] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:06.204 qpair failed and we were unable to recover it. 00:30:06.204 [2024-07-14 04:02:24.854950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.204 [2024-07-14 04:02:24.855100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.204 [2024-07-14 04:02:24.855123] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:06.204 qpair failed and we were unable to recover it. 00:30:06.204 [2024-07-14 04:02:24.855284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.204 [2024-07-14 04:02:24.855458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.204 [2024-07-14 04:02:24.855482] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:06.204 qpair failed and we were unable to recover it. 00:30:06.204 [2024-07-14 04:02:24.855648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.204 [2024-07-14 04:02:24.855821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.204 [2024-07-14 04:02:24.855846] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:06.204 qpair failed and we were unable to recover it. 
00:30:06.204 [2024-07-14 04:02:24.856015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.204 [2024-07-14 04:02:24.856171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.204 [2024-07-14 04:02:24.856196] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:06.204 qpair failed and we were unable to recover it. 00:30:06.204 [2024-07-14 04:02:24.856372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.204 04:02:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:06.204 [2024-07-14 04:02:24.856513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.204 [2024-07-14 04:02:24.856538] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:06.204 qpair failed and we were unable to recover it. 00:30:06.204 04:02:24 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:06.204 [2024-07-14 04:02:24.856686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.204 04:02:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:06.204 04:02:24 -- common/autotest_common.sh@10 -- # set +x 00:30:06.204 [2024-07-14 04:02:24.856836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.204 [2024-07-14 04:02:24.856860] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:06.204 qpair failed and we were unable to recover it. 00:30:06.204 [2024-07-14 04:02:24.857029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.204 [2024-07-14 04:02:24.857200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.204 [2024-07-14 04:02:24.857225] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:06.204 qpair failed and we were unable to recover it. 00:30:06.204 [2024-07-14 04:02:24.857380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.204 [2024-07-14 04:02:24.857561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.204 [2024-07-14 04:02:24.857586] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:06.204 qpair failed and we were unable to recover it. 00:30:06.204 [2024-07-14 04:02:24.857731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.204 [2024-07-14 04:02:24.857906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.204 [2024-07-14 04:02:24.857936] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:06.204 qpair failed and we were unable to recover it. 
00:30:06.204 [2024-07-14 04:02:24.858093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.204 [2024-07-14 04:02:24.858267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.204 [2024-07-14 04:02:24.858291] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:06.204 qpair failed and we were unable to recover it. 00:30:06.205 [2024-07-14 04:02:24.858474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.205 [2024-07-14 04:02:24.858624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.205 [2024-07-14 04:02:24.858649] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:06.205 qpair failed and we were unable to recover it. 00:30:06.205 [2024-07-14 04:02:24.858814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.205 [2024-07-14 04:02:24.858993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.205 [2024-07-14 04:02:24.859019] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:06.205 qpair failed and we were unable to recover it. 00:30:06.205 [2024-07-14 04:02:24.859171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.205 [2024-07-14 04:02:24.859316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.205 [2024-07-14 04:02:24.859341] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:06.205 qpair failed and we were unable to recover it. 00:30:06.205 [2024-07-14 04:02:24.859486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.205 [2024-07-14 04:02:24.859666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.205 [2024-07-14 04:02:24.859689] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:06.205 qpair failed and we were unable to recover it. 00:30:06.205 [2024-07-14 04:02:24.859836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.205 [2024-07-14 04:02:24.860033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.205 [2024-07-14 04:02:24.860059] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:06.205 qpair failed and we were unable to recover it. 00:30:06.205 [2024-07-14 04:02:24.860241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.205 [2024-07-14 04:02:24.860393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.205 [2024-07-14 04:02:24.860417] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f954c000b90 with addr=10.0.0.2, port=4420 00:30:06.205 [2024-07-14 04:02:24.860420] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:06.205 qpair failed and we were unable to recover it. 
00:30:06.205 [2024-07-14 04:02:24.863004] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.205 [2024-07-14 04:02:24.863186] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.205 [2024-07-14 04:02:24.863215] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.205 [2024-07-14 04:02:24.863230] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.205 [2024-07-14 04:02:24.863244] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f954c000b90 00:30:06.205 [2024-07-14 04:02:24.863280] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:06.205 qpair failed and we were unable to recover it. 00:30:06.205 04:02:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:06.205 04:02:24 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:06.205 04:02:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:06.205 04:02:24 -- common/autotest_common.sh@10 -- # set +x 00:30:06.205 04:02:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:06.205 04:02:24 -- host/target_disconnect.sh@58 -- # wait 2511100 00:30:06.205 [2024-07-14 04:02:24.872827] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.205 [2024-07-14 04:02:24.872997] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.205 [2024-07-14 04:02:24.873025] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.205 [2024-07-14 04:02:24.873040] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.205 [2024-07-14 04:02:24.873053] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f954c000b90 00:30:06.205 [2024-07-14 04:02:24.873084] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:06.205 qpair failed and we were unable to recover it. 00:30:06.205 [2024-07-14 04:02:24.882828] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.205 [2024-07-14 04:02:24.882993] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.205 [2024-07-14 04:02:24.883022] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.205 [2024-07-14 04:02:24.883036] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.205 [2024-07-14 04:02:24.883049] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f954c000b90 00:30:06.205 [2024-07-14 04:02:24.883080] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:06.205 qpair failed and we were unable to recover it. 
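From this point the failure mode changes: the TCP connection succeeds, but the NVMe-oF Fabrics CONNECT command for the I/O queue is rejected. The target side logs _nvmf_ctrlr_add_io_qpair: Unknown controller ID 0x1, and the host sees the completion status sct 1, sc 130: status code type 1 is the command-specific type, and for the Fabrics CONNECT command status code 0x82 (decimal 130) is "Connect Invalid Parameters" in the NVMe-oF specification (my reading of the spec, not something the log states). The failed connect is then reported by spdk_nvme_qpair_process_completions as CQ transport error -6, i.e. -ENXIO, matching the "No such device or address" text. A small illustrative helper for reading these lines (hypothetical, not part of the SPDK tree; only the codes that appear in this run are mapped):

  decode_fabrics_connect_status() {
      # $1 = sct, $2 = sc as printed by nvme_fabric_qpair_connect_poll
      local sct=$1 sc=$2
      if [ "$sct" -eq 1 ] && [ "$sc" -eq 130 ]; then
          echo "Command Specific / Connect Invalid Parameters (0x82)"
      else
          echo "sct=$sct sc=$sc (not mapped here)"
      fi
  }
  decode_fabrics_connect_status 1 130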
00:30:06.205 [2024-07-14 04:02:24.892784] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.205 [2024-07-14 04:02:24.892945] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.205 [2024-07-14 04:02:24.892977] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.205 [2024-07-14 04:02:24.892993] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.205 [2024-07-14 04:02:24.893006] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f954c000b90 00:30:06.205 [2024-07-14 04:02:24.893035] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:06.205 qpair failed and we were unable to recover it. 00:30:06.205 [2024-07-14 04:02:24.902817] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.205 [2024-07-14 04:02:24.902983] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.205 [2024-07-14 04:02:24.903011] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.205 [2024-07-14 04:02:24.903025] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.205 [2024-07-14 04:02:24.903039] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f954c000b90 00:30:06.205 [2024-07-14 04:02:24.903068] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:06.205 qpair failed and we were unable to recover it. 00:30:06.205 [2024-07-14 04:02:24.912849] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.205 [2024-07-14 04:02:24.913033] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.205 [2024-07-14 04:02:24.913060] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.205 [2024-07-14 04:02:24.913075] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.205 [2024-07-14 04:02:24.913088] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f954c000b90 00:30:06.205 [2024-07-14 04:02:24.913117] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:06.205 qpair failed and we were unable to recover it. 
00:30:06.205 [2024-07-14 04:02:24.922856] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.205 [2024-07-14 04:02:24.923014] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.205 [2024-07-14 04:02:24.923040] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.205 [2024-07-14 04:02:24.923055] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.205 [2024-07-14 04:02:24.923068] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f954c000b90 00:30:06.205 [2024-07-14 04:02:24.923098] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:06.205 qpair failed and we were unable to recover it. 00:30:06.205 [2024-07-14 04:02:24.932872] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.205 [2024-07-14 04:02:24.933031] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.205 [2024-07-14 04:02:24.933058] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.206 [2024-07-14 04:02:24.933073] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.206 [2024-07-14 04:02:24.933086] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f954c000b90 00:30:06.206 [2024-07-14 04:02:24.933122] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:06.206 qpair failed and we were unable to recover it. 00:30:06.206 [2024-07-14 04:02:24.942913] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.206 [2024-07-14 04:02:24.943069] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.206 [2024-07-14 04:02:24.943096] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.206 [2024-07-14 04:02:24.943111] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.206 [2024-07-14 04:02:24.943123] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f954c000b90 00:30:06.206 [2024-07-14 04:02:24.943152] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:06.206 qpair failed and we were unable to recover it. 
00:30:06.206 [2024-07-14 04:02:24.952943] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.206 [2024-07-14 04:02:24.953095] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.206 [2024-07-14 04:02:24.953122] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.206 [2024-07-14 04:02:24.953137] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.206 [2024-07-14 04:02:24.953149] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f954c000b90 00:30:06.206 [2024-07-14 04:02:24.953180] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:06.206 qpair failed and we were unable to recover it. 00:30:06.206 [2024-07-14 04:02:24.962990] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.206 [2024-07-14 04:02:24.963138] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.206 [2024-07-14 04:02:24.963165] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.206 [2024-07-14 04:02:24.963180] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.206 [2024-07-14 04:02:24.963193] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f954c000b90 00:30:06.206 [2024-07-14 04:02:24.963224] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:06.206 qpair failed and we were unable to recover it. 00:30:06.206 [2024-07-14 04:02:24.972976] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.206 [2024-07-14 04:02:24.973130] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.206 [2024-07-14 04:02:24.973156] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.206 [2024-07-14 04:02:24.973171] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.206 [2024-07-14 04:02:24.973184] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f954c000b90 00:30:06.206 [2024-07-14 04:02:24.973213] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:06.206 qpair failed and we were unable to recover it. 
00:30:06.206 [2024-07-14 04:02:24.983021] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.206 [2024-07-14 04:02:24.983173] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.206 [2024-07-14 04:02:24.983205] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.206 [2024-07-14 04:02:24.983220] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.206 [2024-07-14 04:02:24.983233] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f954c000b90 00:30:06.206 [2024-07-14 04:02:24.983262] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:06.206 qpair failed and we were unable to recover it. 00:30:06.206 [2024-07-14 04:02:24.993072] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.206 [2024-07-14 04:02:24.993221] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.206 [2024-07-14 04:02:24.993248] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.206 [2024-07-14 04:02:24.993262] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.206 [2024-07-14 04:02:24.993275] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f954c000b90 00:30:06.206 [2024-07-14 04:02:24.993304] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:06.206 qpair failed and we were unable to recover it. 00:30:06.206 [2024-07-14 04:02:25.003119] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.206 [2024-07-14 04:02:25.003267] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.206 [2024-07-14 04:02:25.003294] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.206 [2024-07-14 04:02:25.003309] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.206 [2024-07-14 04:02:25.003322] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f954c000b90 00:30:06.206 [2024-07-14 04:02:25.003350] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:06.206 qpair failed and we were unable to recover it. 
00:30:06.206 [2024-07-14 04:02:25.013117] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.206 [2024-07-14 04:02:25.013335] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.206 [2024-07-14 04:02:25.013361] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.206 [2024-07-14 04:02:25.013375] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.206 [2024-07-14 04:02:25.013388] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f954c000b90 00:30:06.206 [2024-07-14 04:02:25.013417] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:06.206 qpair failed and we were unable to recover it. 00:30:06.206 [2024-07-14 04:02:25.023382] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.206 [2024-07-14 04:02:25.023565] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.206 [2024-07-14 04:02:25.023591] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.206 [2024-07-14 04:02:25.023606] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.206 [2024-07-14 04:02:25.023625] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f954c000b90 00:30:06.206 [2024-07-14 04:02:25.023654] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:06.206 qpair failed and we were unable to recover it. 00:30:06.206 [2024-07-14 04:02:25.033214] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.206 [2024-07-14 04:02:25.033372] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.206 [2024-07-14 04:02:25.033398] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.206 [2024-07-14 04:02:25.033412] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.206 [2024-07-14 04:02:25.033425] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f954c000b90 00:30:06.207 [2024-07-14 04:02:25.033454] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:06.207 qpair failed and we were unable to recover it. 
00:30:06.207 [2024-07-14 04:02:25.043242] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.207 [2024-07-14 04:02:25.043396] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.207 [2024-07-14 04:02:25.043422] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.207 [2024-07-14 04:02:25.043436] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.207 [2024-07-14 04:02:25.043449] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f954c000b90 00:30:06.207 [2024-07-14 04:02:25.043479] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:06.207 qpair failed and we were unable to recover it. 00:30:06.207 [2024-07-14 04:02:25.053286] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.207 [2024-07-14 04:02:25.053443] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.207 [2024-07-14 04:02:25.053468] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.207 [2024-07-14 04:02:25.053483] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.207 [2024-07-14 04:02:25.053496] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f954c000b90 00:30:06.207 [2024-07-14 04:02:25.053527] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:06.207 qpair failed and we were unable to recover it. 00:30:06.207 [2024-07-14 04:02:25.063255] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.207 [2024-07-14 04:02:25.063420] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.207 [2024-07-14 04:02:25.063445] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.207 [2024-07-14 04:02:25.063459] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.207 [2024-07-14 04:02:25.063472] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f954c000b90 00:30:06.207 [2024-07-14 04:02:25.063501] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:06.207 qpair failed and we were unable to recover it. 
00:30:06.207 [2024-07-14 04:02:25.073331] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.207 [2024-07-14 04:02:25.073489] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.207 [2024-07-14 04:02:25.073514] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.207 [2024-07-14 04:02:25.073529] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.207 [2024-07-14 04:02:25.073541] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f954c000b90 00:30:06.207 [2024-07-14 04:02:25.073571] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:06.207 qpair failed and we were unable to recover it. 00:30:06.207 [2024-07-14 04:02:25.083302] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.207 [2024-07-14 04:02:25.083454] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.207 [2024-07-14 04:02:25.083479] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.207 [2024-07-14 04:02:25.083494] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.207 [2024-07-14 04:02:25.083506] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f954c000b90 00:30:06.207 [2024-07-14 04:02:25.083537] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:06.207 qpair failed and we were unable to recover it. 00:30:06.207 [2024-07-14 04:02:25.093322] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.207 [2024-07-14 04:02:25.093475] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.207 [2024-07-14 04:02:25.093500] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.207 [2024-07-14 04:02:25.093515] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.207 [2024-07-14 04:02:25.093528] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f954c000b90 00:30:06.207 [2024-07-14 04:02:25.093557] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:06.207 qpair failed and we were unable to recover it. 
00:30:06.207 [2024-07-14 04:02:25.103389] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.207 [2024-07-14 04:02:25.103548] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.207 [2024-07-14 04:02:25.103576] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.207 [2024-07-14 04:02:25.103590] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.207 [2024-07-14 04:02:25.103603] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f954c000b90 00:30:06.207 [2024-07-14 04:02:25.103632] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:06.207 qpair failed and we were unable to recover it. 00:30:06.207 [2024-07-14 04:02:25.113409] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.207 [2024-07-14 04:02:25.113564] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.207 [2024-07-14 04:02:25.113591] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.207 [2024-07-14 04:02:25.113606] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.207 [2024-07-14 04:02:25.113624] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f954c000b90 00:30:06.207 [2024-07-14 04:02:25.113654] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:06.207 qpair failed and we were unable to recover it. 00:30:06.207 [2024-07-14 04:02:25.123430] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.207 [2024-07-14 04:02:25.123596] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.207 [2024-07-14 04:02:25.123622] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.207 [2024-07-14 04:02:25.123637] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.207 [2024-07-14 04:02:25.123649] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f954c000b90 00:30:06.207 [2024-07-14 04:02:25.123690] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:06.207 qpair failed and we were unable to recover it. 
00:30:06.207 [2024-07-14 04:02:25.133431] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.207 [2024-07-14 04:02:25.133599] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.207 [2024-07-14 04:02:25.133624] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.207 [2024-07-14 04:02:25.133639] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.207 [2024-07-14 04:02:25.133651] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f954c000b90 00:30:06.207 [2024-07-14 04:02:25.133682] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:06.207 qpair failed and we were unable to recover it. 00:30:06.466 [2024-07-14 04:02:25.143518] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.466 [2024-07-14 04:02:25.143674] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.466 [2024-07-14 04:02:25.143700] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.466 [2024-07-14 04:02:25.143715] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.466 [2024-07-14 04:02:25.143728] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f954c000b90 00:30:06.466 [2024-07-14 04:02:25.143757] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:06.466 qpair failed and we were unable to recover it. 00:30:06.466 [2024-07-14 04:02:25.153532] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.466 [2024-07-14 04:02:25.153680] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.466 [2024-07-14 04:02:25.153706] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.466 [2024-07-14 04:02:25.153721] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.466 [2024-07-14 04:02:25.153734] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f954c000b90 00:30:06.466 [2024-07-14 04:02:25.153763] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:06.466 qpair failed and we were unable to recover it. 
00:30:06.466 [2024-07-14 04:02:25.163579] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.466 [2024-07-14 04:02:25.163734] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.466 [2024-07-14 04:02:25.163761] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.466 [2024-07-14 04:02:25.163780] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.466 [2024-07-14 04:02:25.163794] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f954c000b90 00:30:06.466 [2024-07-14 04:02:25.163824] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:06.466 qpair failed and we were unable to recover it. 00:30:06.466 [2024-07-14 04:02:25.173594] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.466 [2024-07-14 04:02:25.173793] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.466 [2024-07-14 04:02:25.173819] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.466 [2024-07-14 04:02:25.173834] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.466 [2024-07-14 04:02:25.173848] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f954c000b90 00:30:06.466 [2024-07-14 04:02:25.173884] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:06.466 qpair failed and we were unable to recover it. 00:30:06.466 [2024-07-14 04:02:25.183585] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.466 [2024-07-14 04:02:25.183779] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.466 [2024-07-14 04:02:25.183805] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.466 [2024-07-14 04:02:25.183819] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.466 [2024-07-14 04:02:25.183832] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f954c000b90 00:30:06.466 [2024-07-14 04:02:25.183862] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:06.466 qpair failed and we were unable to recover it. 
00:30:06.466 [2024-07-14 04:02:25.193620] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.466 [2024-07-14 04:02:25.193811] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.466 [2024-07-14 04:02:25.193838] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.466 [2024-07-14 04:02:25.193858] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.466 [2024-07-14 04:02:25.193881] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f954c000b90 00:30:06.466 [2024-07-14 04:02:25.193914] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:06.466 qpair failed and we were unable to recover it. 00:30:06.466 [2024-07-14 04:02:25.203638] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.466 [2024-07-14 04:02:25.203786] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.466 [2024-07-14 04:02:25.203813] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.466 [2024-07-14 04:02:25.203837] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.466 [2024-07-14 04:02:25.203851] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f954c000b90 00:30:06.466 [2024-07-14 04:02:25.203888] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:06.466 qpair failed and we were unable to recover it. 00:30:06.466 [2024-07-14 04:02:25.213717] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.466 [2024-07-14 04:02:25.213896] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.466 [2024-07-14 04:02:25.213922] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.466 [2024-07-14 04:02:25.213937] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.466 [2024-07-14 04:02:25.213950] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f954c000b90 00:30:06.467 [2024-07-14 04:02:25.213980] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:06.467 qpair failed and we were unable to recover it. 
00:30:06.467 [2024-07-14 04:02:25.223669] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.467 [2024-07-14 04:02:25.223826] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.467 [2024-07-14 04:02:25.223852] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.467 [2024-07-14 04:02:25.223876] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.467 [2024-07-14 04:02:25.223892] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f954c000b90 00:30:06.467 [2024-07-14 04:02:25.223921] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:06.467 qpair failed and we were unable to recover it. 00:30:06.467 [2024-07-14 04:02:25.233743] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.467 [2024-07-14 04:02:25.233906] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.467 [2024-07-14 04:02:25.233932] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.467 [2024-07-14 04:02:25.233947] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.467 [2024-07-14 04:02:25.233960] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f954c000b90 00:30:06.467 [2024-07-14 04:02:25.233990] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:06.467 qpair failed and we were unable to recover it. 00:30:06.467 [2024-07-14 04:02:25.243734] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.467 [2024-07-14 04:02:25.243892] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.467 [2024-07-14 04:02:25.243919] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.467 [2024-07-14 04:02:25.243933] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.467 [2024-07-14 04:02:25.243946] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f954c000b90 00:30:06.467 [2024-07-14 04:02:25.243975] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:06.467 qpair failed and we were unable to recover it. 
00:30:06.467 [2024-07-14 04:02:25.253801] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.467 [2024-07-14 04:02:25.253996] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.467 [2024-07-14 04:02:25.254023] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.467 [2024-07-14 04:02:25.254038] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.467 [2024-07-14 04:02:25.254050] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f954c000b90 00:30:06.467 [2024-07-14 04:02:25.254081] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:06.467 qpair failed and we were unable to recover it. 00:30:06.467 [2024-07-14 04:02:25.263764] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.467 [2024-07-14 04:02:25.263921] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.467 [2024-07-14 04:02:25.263948] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.467 [2024-07-14 04:02:25.263962] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.467 [2024-07-14 04:02:25.263974] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f954c000b90 00:30:06.467 [2024-07-14 04:02:25.264015] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:06.467 qpair failed and we were unable to recover it. 00:30:06.467 [2024-07-14 04:02:25.273805] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.467 [2024-07-14 04:02:25.273960] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.467 [2024-07-14 04:02:25.273987] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.467 [2024-07-14 04:02:25.274001] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.467 [2024-07-14 04:02:25.274015] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f954c000b90 00:30:06.467 [2024-07-14 04:02:25.274045] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:06.467 qpair failed and we were unable to recover it. 
00:30:06.467 [2024-07-14 04:02:25.284001] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.467 [2024-07-14 04:02:25.284176] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.467 [2024-07-14 04:02:25.284204] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.467 [2024-07-14 04:02:25.284223] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.467 [2024-07-14 04:02:25.284237] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f954c000b90 00:30:06.467 [2024-07-14 04:02:25.284267] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:06.467 qpair failed and we were unable to recover it. 00:30:06.467 [2024-07-14 04:02:25.293933] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.467 [2024-07-14 04:02:25.294122] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.467 [2024-07-14 04:02:25.294162] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.467 [2024-07-14 04:02:25.294186] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.467 [2024-07-14 04:02:25.294201] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f954c000b90 00:30:06.467 [2024-07-14 04:02:25.294233] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:06.467 qpair failed and we were unable to recover it. 00:30:06.467 [2024-07-14 04:02:25.303939] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.467 [2024-07-14 04:02:25.304113] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.467 [2024-07-14 04:02:25.304139] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.467 [2024-07-14 04:02:25.304154] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.467 [2024-07-14 04:02:25.304167] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f954c000b90 00:30:06.467 [2024-07-14 04:02:25.304196] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:06.467 qpair failed and we were unable to recover it. 
00:30:06.467 [2024-07-14 04:02:25.313970] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.467 [2024-07-14 04:02:25.314162] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.467 [2024-07-14 04:02:25.314188] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.467 [2024-07-14 04:02:25.314202] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.467 [2024-07-14 04:02:25.314215] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f954c000b90 00:30:06.467 [2024-07-14 04:02:25.314246] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:06.467 qpair failed and we were unable to recover it. 00:30:06.467 [2024-07-14 04:02:25.323941] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.467 [2024-07-14 04:02:25.324095] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.467 [2024-07-14 04:02:25.324121] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.468 [2024-07-14 04:02:25.324135] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.468 [2024-07-14 04:02:25.324149] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f954c000b90 00:30:06.468 [2024-07-14 04:02:25.324179] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:06.468 qpair failed and we were unable to recover it. 00:30:06.468 [2024-07-14 04:02:25.333972] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.468 [2024-07-14 04:02:25.334145] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.468 [2024-07-14 04:02:25.334171] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.468 [2024-07-14 04:02:25.334186] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.468 [2024-07-14 04:02:25.334200] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f954c000b90 00:30:06.468 [2024-07-14 04:02:25.334235] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:06.468 qpair failed and we were unable to recover it. 
00:30:06.468 [2024-07-14 04:02:25.344089] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.468 [2024-07-14 04:02:25.344247] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.468 [2024-07-14 04:02:25.344272] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.468 [2024-07-14 04:02:25.344287] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.468 [2024-07-14 04:02:25.344301] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f954c000b90 00:30:06.468 [2024-07-14 04:02:25.344331] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:06.468 qpair failed and we were unable to recover it. 00:30:06.468 [2024-07-14 04:02:25.354056] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.468 [2024-07-14 04:02:25.354222] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.468 [2024-07-14 04:02:25.354248] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.468 [2024-07-14 04:02:25.354263] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.468 [2024-07-14 04:02:25.354276] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f954c000b90 00:30:06.468 [2024-07-14 04:02:25.354304] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:06.468 qpair failed and we were unable to recover it. 00:30:06.468 [2024-07-14 04:02:25.364075] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.468 [2024-07-14 04:02:25.364233] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.468 [2024-07-14 04:02:25.364258] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.468 [2024-07-14 04:02:25.364272] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.468 [2024-07-14 04:02:25.364285] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f954c000b90 00:30:06.468 [2024-07-14 04:02:25.364314] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:06.468 qpair failed and we were unable to recover it. 
00:30:06.468 [2024-07-14 04:02:25.374133] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.468 [2024-07-14 04:02:25.374336] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.468 [2024-07-14 04:02:25.374361] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.468 [2024-07-14 04:02:25.374376] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.468 [2024-07-14 04:02:25.374390] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f954c000b90 00:30:06.468 [2024-07-14 04:02:25.374418] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:06.468 qpair failed and we were unable to recover it. 00:30:06.468 [2024-07-14 04:02:25.384130] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.468 [2024-07-14 04:02:25.384283] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.468 [2024-07-14 04:02:25.384314] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.468 [2024-07-14 04:02:25.384329] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.468 [2024-07-14 04:02:25.384342] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f954c000b90 00:30:06.468 [2024-07-14 04:02:25.384371] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:06.468 qpair failed and we were unable to recover it. 00:30:06.468 [2024-07-14 04:02:25.394254] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.468 [2024-07-14 04:02:25.394402] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.468 [2024-07-14 04:02:25.394428] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.468 [2024-07-14 04:02:25.394442] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.468 [2024-07-14 04:02:25.394456] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f954c000b90 00:30:06.468 [2024-07-14 04:02:25.394486] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:06.468 qpair failed and we were unable to recover it. 
00:30:06.468 [2024-07-14 04:02:25.404206] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.468 [2024-07-14 04:02:25.404396] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.468 [2024-07-14 04:02:25.404421] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.468 [2024-07-14 04:02:25.404436] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.468 [2024-07-14 04:02:25.404449] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f954c000b90 00:30:06.468 [2024-07-14 04:02:25.404478] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:06.468 qpair failed and we were unable to recover it. 00:30:06.727 [2024-07-14 04:02:25.414227] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.727 [2024-07-14 04:02:25.414381] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.727 [2024-07-14 04:02:25.414407] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.727 [2024-07-14 04:02:25.414421] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.727 [2024-07-14 04:02:25.414434] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f954c000b90 00:30:06.727 [2024-07-14 04:02:25.414463] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:06.727 qpair failed and we were unable to recover it. 00:30:06.727 [2024-07-14 04:02:25.424229] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.727 [2024-07-14 04:02:25.424385] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.727 [2024-07-14 04:02:25.424410] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.727 [2024-07-14 04:02:25.424425] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.727 [2024-07-14 04:02:25.424438] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f954c000b90 00:30:06.727 [2024-07-14 04:02:25.424472] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:06.727 qpair failed and we were unable to recover it. 
00:30:06.727 [2024-07-14 04:02:25.434323] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.727 [2024-07-14 04:02:25.434476] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.727 [2024-07-14 04:02:25.434501] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.727 [2024-07-14 04:02:25.434516] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.727 [2024-07-14 04:02:25.434529] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f954c000b90 00:30:06.727 [2024-07-14 04:02:25.434557] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:06.727 qpair failed and we were unable to recover it. 00:30:06.727 [2024-07-14 04:02:25.444270] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.728 [2024-07-14 04:02:25.444422] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.728 [2024-07-14 04:02:25.444447] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.728 [2024-07-14 04:02:25.444461] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.728 [2024-07-14 04:02:25.444473] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f954c000b90 00:30:06.728 [2024-07-14 04:02:25.444503] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:06.728 qpair failed and we were unable to recover it. 00:30:06.728 [2024-07-14 04:02:25.454321] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.728 [2024-07-14 04:02:25.454525] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.728 [2024-07-14 04:02:25.454550] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.728 [2024-07-14 04:02:25.454564] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.728 [2024-07-14 04:02:25.454577] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f954c000b90 00:30:06.728 [2024-07-14 04:02:25.454607] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:06.728 qpair failed and we were unable to recover it. 
00:30:06.728 [2024-07-14 04:02:25.464330] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.728 [2024-07-14 04:02:25.464481] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.728 [2024-07-14 04:02:25.464507] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.728 [2024-07-14 04:02:25.464521] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.728 [2024-07-14 04:02:25.464534] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f954c000b90 00:30:06.728 [2024-07-14 04:02:25.464563] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:06.728 qpair failed and we were unable to recover it. 00:30:06.728 [2024-07-14 04:02:25.474405] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.728 [2024-07-14 04:02:25.474557] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.728 [2024-07-14 04:02:25.474588] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.728 [2024-07-14 04:02:25.474604] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.728 [2024-07-14 04:02:25.474617] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f954c000b90 00:30:06.728 [2024-07-14 04:02:25.474648] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:06.728 qpair failed and we were unable to recover it. 00:30:06.728 [2024-07-14 04:02:25.484404] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.728 [2024-07-14 04:02:25.484552] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.728 [2024-07-14 04:02:25.484578] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.728 [2024-07-14 04:02:25.484592] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.728 [2024-07-14 04:02:25.484605] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f954c000b90 00:30:06.728 [2024-07-14 04:02:25.484634] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:06.728 qpair failed and we were unable to recover it. 
00:30:06.728 [2024-07-14 04:02:25.494455] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.728 [2024-07-14 04:02:25.494611] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.728 [2024-07-14 04:02:25.494636] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.728 [2024-07-14 04:02:25.494652] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.728 [2024-07-14 04:02:25.494665] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f954c000b90 00:30:06.728 [2024-07-14 04:02:25.494695] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:06.728 qpair failed and we were unable to recover it. 00:30:06.728 [2024-07-14 04:02:25.504473] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.728 [2024-07-14 04:02:25.504631] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.728 [2024-07-14 04:02:25.504656] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.728 [2024-07-14 04:02:25.504670] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.728 [2024-07-14 04:02:25.504683] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f954c000b90 00:30:06.728 [2024-07-14 04:02:25.504712] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:06.728 qpair failed and we were unable to recover it. 00:30:06.728 [2024-07-14 04:02:25.514517] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.728 [2024-07-14 04:02:25.514669] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.728 [2024-07-14 04:02:25.514694] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.728 [2024-07-14 04:02:25.514709] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.728 [2024-07-14 04:02:25.514727] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f954c000b90 00:30:06.728 [2024-07-14 04:02:25.514757] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:06.728 qpair failed and we were unable to recover it. 
00:30:06.728 [2024-07-14 04:02:25.524546] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.728 [2024-07-14 04:02:25.524708] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.728 [2024-07-14 04:02:25.524734] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.728 [2024-07-14 04:02:25.524749] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.728 [2024-07-14 04:02:25.524761] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f954c000b90 00:30:06.728 [2024-07-14 04:02:25.524803] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:06.728 qpair failed and we were unable to recover it. 00:30:06.728 [2024-07-14 04:02:25.534670] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.728 [2024-07-14 04:02:25.534839] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.728 [2024-07-14 04:02:25.534872] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.728 [2024-07-14 04:02:25.534892] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.728 [2024-07-14 04:02:25.534905] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f954c000b90 00:30:06.728 [2024-07-14 04:02:25.534935] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:06.728 qpair failed and we were unable to recover it. 00:30:06.728 [2024-07-14 04:02:25.544598] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.728 [2024-07-14 04:02:25.544760] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.728 [2024-07-14 04:02:25.544786] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.728 [2024-07-14 04:02:25.544801] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.728 [2024-07-14 04:02:25.544814] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f954c000b90 00:30:06.728 [2024-07-14 04:02:25.544843] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:06.728 qpair failed and we were unable to recover it. 
00:30:06.729 [2024-07-14 04:02:25.554659] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.729 [2024-07-14 04:02:25.554861] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.729 [2024-07-14 04:02:25.554904] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.729 [2024-07-14 04:02:25.554921] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.729 [2024-07-14 04:02:25.554934] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9544000b90 00:30:06.729 [2024-07-14 04:02:25.554968] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:06.729 qpair failed and we were unable to recover it. 00:30:06.729 [2024-07-14 04:02:25.564670] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.729 [2024-07-14 04:02:25.564852] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.729 [2024-07-14 04:02:25.564887] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.729 [2024-07-14 04:02:25.564903] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.729 [2024-07-14 04:02:25.564916] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9544000b90 00:30:06.729 [2024-07-14 04:02:25.564946] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:06.729 qpair failed and we were unable to recover it. 00:30:06.729 [2024-07-14 04:02:25.574715] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.729 [2024-07-14 04:02:25.574873] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.729 [2024-07-14 04:02:25.574900] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.729 [2024-07-14 04:02:25.574914] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.729 [2024-07-14 04:02:25.574927] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9544000b90 00:30:06.729 [2024-07-14 04:02:25.574959] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:06.729 qpair failed and we were unable to recover it. 
00:30:06.729 [2024-07-14 04:02:25.584759] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.729 [2024-07-14 04:02:25.584934] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.729 [2024-07-14 04:02:25.584961] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.729 [2024-07-14 04:02:25.584976] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.729 [2024-07-14 04:02:25.584989] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9544000b90 00:30:06.729 [2024-07-14 04:02:25.585019] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:06.729 qpair failed and we were unable to recover it. 00:30:06.729 [2024-07-14 04:02:25.594757] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.729 [2024-07-14 04:02:25.594916] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.729 [2024-07-14 04:02:25.594943] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.729 [2024-07-14 04:02:25.594957] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.729 [2024-07-14 04:02:25.594971] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9544000b90 00:30:06.729 [2024-07-14 04:02:25.595000] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:06.729 qpair failed and we were unable to recover it. 00:30:06.729 [2024-07-14 04:02:25.604779] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.729 [2024-07-14 04:02:25.604940] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.729 [2024-07-14 04:02:25.604967] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.729 [2024-07-14 04:02:25.604982] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.729 [2024-07-14 04:02:25.605000] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9544000b90 00:30:06.729 [2024-07-14 04:02:25.605032] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:06.729 qpair failed and we were unable to recover it. 
00:30:06.729 [2024-07-14 04:02:25.614805] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.729 [2024-07-14 04:02:25.614976] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.729 [2024-07-14 04:02:25.615003] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.729 [2024-07-14 04:02:25.615017] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.729 [2024-07-14 04:02:25.615030] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9544000b90 00:30:06.729 [2024-07-14 04:02:25.615060] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:06.729 qpair failed and we were unable to recover it. 00:30:06.729 [2024-07-14 04:02:25.624843] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.729 [2024-07-14 04:02:25.625006] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.729 [2024-07-14 04:02:25.625033] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.729 [2024-07-14 04:02:25.625047] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.729 [2024-07-14 04:02:25.625060] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9544000b90 00:30:06.729 [2024-07-14 04:02:25.625089] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:06.729 qpair failed and we were unable to recover it. 00:30:06.729 [2024-07-14 04:02:25.634911] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.729 [2024-07-14 04:02:25.635088] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.729 [2024-07-14 04:02:25.635115] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.729 [2024-07-14 04:02:25.635129] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.729 [2024-07-14 04:02:25.635142] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9544000b90 00:30:06.729 [2024-07-14 04:02:25.635184] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:06.729 qpair failed and we were unable to recover it. 
00:30:06.729 [2024-07-14 04:02:25.644929] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.729 [2024-07-14 04:02:25.645094] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.729 [2024-07-14 04:02:25.645121] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.729 [2024-07-14 04:02:25.645135] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.729 [2024-07-14 04:02:25.645149] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9544000b90 00:30:06.729 [2024-07-14 04:02:25.645178] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:06.729 qpair failed and we were unable to recover it. 00:30:06.729 [2024-07-14 04:02:25.654934] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.729 [2024-07-14 04:02:25.655089] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.729 [2024-07-14 04:02:25.655115] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.729 [2024-07-14 04:02:25.655129] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.729 [2024-07-14 04:02:25.655143] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9544000b90 00:30:06.729 [2024-07-14 04:02:25.655172] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:06.729 qpair failed and we were unable to recover it. 00:30:06.729 [2024-07-14 04:02:25.664964] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.729 [2024-07-14 04:02:25.665123] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.729 [2024-07-14 04:02:25.665149] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.729 [2024-07-14 04:02:25.665163] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.729 [2024-07-14 04:02:25.665176] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9544000b90 00:30:06.730 [2024-07-14 04:02:25.665206] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:06.730 qpair failed and we were unable to recover it. 
00:30:06.989 [2024-07-14 04:02:25.674981] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.989 [2024-07-14 04:02:25.675137] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.989 [2024-07-14 04:02:25.675163] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.989 [2024-07-14 04:02:25.675177] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.989 [2024-07-14 04:02:25.675191] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9544000b90 00:30:06.989 [2024-07-14 04:02:25.675220] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:06.989 qpair failed and we were unable to recover it. 00:30:06.989 [2024-07-14 04:02:25.685095] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.989 [2024-07-14 04:02:25.685242] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.989 [2024-07-14 04:02:25.685268] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.989 [2024-07-14 04:02:25.685282] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.989 [2024-07-14 04:02:25.685295] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9544000b90 00:30:06.989 [2024-07-14 04:02:25.685325] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:06.989 qpair failed and we were unable to recover it. 00:30:06.989 [2024-07-14 04:02:25.695064] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.989 [2024-07-14 04:02:25.695228] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.989 [2024-07-14 04:02:25.695255] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.989 [2024-07-14 04:02:25.695275] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.989 [2024-07-14 04:02:25.695289] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9544000b90 00:30:06.989 [2024-07-14 04:02:25.695319] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:06.989 qpair failed and we were unable to recover it. 
00:30:06.989 [2024-07-14 04:02:25.705134] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.989 [2024-07-14 04:02:25.705337] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.989 [2024-07-14 04:02:25.705363] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.989 [2024-07-14 04:02:25.705377] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.989 [2024-07-14 04:02:25.705390] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9544000b90 00:30:06.989 [2024-07-14 04:02:25.705431] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:06.989 qpair failed and we were unable to recover it. 00:30:06.989 [2024-07-14 04:02:25.715139] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.989 [2024-07-14 04:02:25.715294] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.989 [2024-07-14 04:02:25.715320] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.989 [2024-07-14 04:02:25.715334] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.989 [2024-07-14 04:02:25.715347] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9544000b90 00:30:06.989 [2024-07-14 04:02:25.715377] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:06.989 qpair failed and we were unable to recover it. 00:30:06.989 [2024-07-14 04:02:25.725130] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.989 [2024-07-14 04:02:25.725279] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.989 [2024-07-14 04:02:25.725305] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.989 [2024-07-14 04:02:25.725319] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.989 [2024-07-14 04:02:25.725333] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9544000b90 00:30:06.989 [2024-07-14 04:02:25.725362] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:06.989 qpair failed and we were unable to recover it. 
00:30:06.989 [2024-07-14 04:02:25.735188] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.989 [2024-07-14 04:02:25.735348] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.989 [2024-07-14 04:02:25.735373] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.989 [2024-07-14 04:02:25.735387] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.989 [2024-07-14 04:02:25.735400] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9544000b90 00:30:06.989 [2024-07-14 04:02:25.735429] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:06.989 qpair failed and we were unable to recover it. 00:30:06.989 [2024-07-14 04:02:25.745229] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.989 [2024-07-14 04:02:25.745381] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.989 [2024-07-14 04:02:25.745407] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.989 [2024-07-14 04:02:25.745421] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.989 [2024-07-14 04:02:25.745435] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9544000b90 00:30:06.989 [2024-07-14 04:02:25.745466] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:06.989 qpair failed and we were unable to recover it. 00:30:06.989 [2024-07-14 04:02:25.755272] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.989 [2024-07-14 04:02:25.755429] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.989 [2024-07-14 04:02:25.755455] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.989 [2024-07-14 04:02:25.755469] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.989 [2024-07-14 04:02:25.755482] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9544000b90 00:30:06.989 [2024-07-14 04:02:25.755512] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:06.989 qpair failed and we were unable to recover it. 
00:30:06.989 [2024-07-14 04:02:25.765268] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.989 [2024-07-14 04:02:25.765417] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.989 [2024-07-14 04:02:25.765443] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.989 [2024-07-14 04:02:25.765457] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.989 [2024-07-14 04:02:25.765470] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9544000b90 00:30:06.989 [2024-07-14 04:02:25.765500] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:06.989 qpair failed and we were unable to recover it. 00:30:06.989 [2024-07-14 04:02:25.775392] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.989 [2024-07-14 04:02:25.775544] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.989 [2024-07-14 04:02:25.775570] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.989 [2024-07-14 04:02:25.775583] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.989 [2024-07-14 04:02:25.775597] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9544000b90 00:30:06.990 [2024-07-14 04:02:25.775626] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:06.990 qpair failed and we were unable to recover it. 00:30:06.990 [2024-07-14 04:02:25.785399] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.990 [2024-07-14 04:02:25.785555] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.990 [2024-07-14 04:02:25.785581] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.990 [2024-07-14 04:02:25.785601] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.990 [2024-07-14 04:02:25.785613] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9544000b90 00:30:06.990 [2024-07-14 04:02:25.785648] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:06.990 qpair failed and we were unable to recover it. 
00:30:06.990 [2024-07-14 04:02:25.795382] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.990 [2024-07-14 04:02:25.795536] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.990 [2024-07-14 04:02:25.795562] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.990 [2024-07-14 04:02:25.795576] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.990 [2024-07-14 04:02:25.795589] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9544000b90 00:30:06.990 [2024-07-14 04:02:25.795618] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:06.990 qpair failed and we were unable to recover it. 00:30:06.990 [2024-07-14 04:02:25.805388] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.990 [2024-07-14 04:02:25.805539] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.990 [2024-07-14 04:02:25.805564] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.990 [2024-07-14 04:02:25.805579] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.990 [2024-07-14 04:02:25.805592] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9544000b90 00:30:06.990 [2024-07-14 04:02:25.805621] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:06.990 qpair failed and we were unable to recover it. 00:30:06.990 [2024-07-14 04:02:25.815441] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.990 [2024-07-14 04:02:25.815636] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.990 [2024-07-14 04:02:25.815662] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.990 [2024-07-14 04:02:25.815675] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.990 [2024-07-14 04:02:25.815688] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9544000b90 00:30:06.990 [2024-07-14 04:02:25.815719] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:06.990 qpair failed and we were unable to recover it. 
00:30:06.990 [2024-07-14 04:02:25.825520] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.990 [2024-07-14 04:02:25.825678] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.990 [2024-07-14 04:02:25.825704] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.990 [2024-07-14 04:02:25.825718] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.990 [2024-07-14 04:02:25.825731] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9544000b90 00:30:06.990 [2024-07-14 04:02:25.825762] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:06.990 qpair failed and we were unable to recover it. 00:30:06.990 [2024-07-14 04:02:25.835459] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.990 [2024-07-14 04:02:25.835616] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.990 [2024-07-14 04:02:25.835641] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.990 [2024-07-14 04:02:25.835654] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.990 [2024-07-14 04:02:25.835666] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9544000b90 00:30:06.990 [2024-07-14 04:02:25.835695] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:06.990 qpair failed and we were unable to recover it. 00:30:06.990 [2024-07-14 04:02:25.845496] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.990 [2024-07-14 04:02:25.845663] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.990 [2024-07-14 04:02:25.845689] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.990 [2024-07-14 04:02:25.845704] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.990 [2024-07-14 04:02:25.845717] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9544000b90 00:30:06.990 [2024-07-14 04:02:25.845747] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:06.990 qpair failed and we were unable to recover it. 
00:30:06.990 [2024-07-14 04:02:25.855634] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.990 [2024-07-14 04:02:25.855832] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.990 [2024-07-14 04:02:25.855857] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.990 [2024-07-14 04:02:25.855881] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.990 [2024-07-14 04:02:25.855896] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9544000b90 00:30:06.990 [2024-07-14 04:02:25.855926] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:06.990 qpair failed and we were unable to recover it. 00:30:06.990 [2024-07-14 04:02:25.865584] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.990 [2024-07-14 04:02:25.865777] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.990 [2024-07-14 04:02:25.865803] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.990 [2024-07-14 04:02:25.865818] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.990 [2024-07-14 04:02:25.865841] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9544000b90 00:30:06.990 [2024-07-14 04:02:25.865877] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:06.990 qpair failed and we were unable to recover it. 00:30:06.990 [2024-07-14 04:02:25.875569] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.990 [2024-07-14 04:02:25.875720] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.990 [2024-07-14 04:02:25.875763] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.990 [2024-07-14 04:02:25.875778] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.990 [2024-07-14 04:02:25.875791] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9544000b90 00:30:06.990 [2024-07-14 04:02:25.875821] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:06.990 qpair failed and we were unable to recover it. 
00:30:06.990 [2024-07-14 04:02:25.885755] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.990 [2024-07-14 04:02:25.885930] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.990 [2024-07-14 04:02:25.885958] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.990 [2024-07-14 04:02:25.885975] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.990 [2024-07-14 04:02:25.885990] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9544000b90 00:30:06.990 [2024-07-14 04:02:25.886021] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:06.990 qpair failed and we were unable to recover it. 00:30:06.990 [2024-07-14 04:02:25.895659] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.990 [2024-07-14 04:02:25.895814] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.990 [2024-07-14 04:02:25.895841] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.990 [2024-07-14 04:02:25.895860] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.990 [2024-07-14 04:02:25.895882] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9544000b90 00:30:06.990 [2024-07-14 04:02:25.895914] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:06.990 qpair failed and we were unable to recover it. 00:30:06.990 [2024-07-14 04:02:25.905699] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.990 [2024-07-14 04:02:25.905861] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.990 [2024-07-14 04:02:25.905905] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.990 [2024-07-14 04:02:25.905919] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.990 [2024-07-14 04:02:25.905932] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9544000b90 00:30:06.990 [2024-07-14 04:02:25.905962] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:06.990 qpair failed and we were unable to recover it. 
00:30:06.990 [2024-07-14 04:02:25.915717] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.990 [2024-07-14 04:02:25.915891] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.990 [2024-07-14 04:02:25.915918] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.990 [2024-07-14 04:02:25.915932] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.990 [2024-07-14 04:02:25.915945] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9544000b90 00:30:06.990 [2024-07-14 04:02:25.915996] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:06.991 qpair failed and we were unable to recover it. 00:30:06.991 [2024-07-14 04:02:25.925731] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.991 [2024-07-14 04:02:25.925892] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.991 [2024-07-14 04:02:25.925919] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.991 [2024-07-14 04:02:25.925933] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.991 [2024-07-14 04:02:25.925946] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9544000b90 00:30:06.991 [2024-07-14 04:02:25.925977] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:06.991 qpair failed and we were unable to recover it. 00:30:07.251 [2024-07-14 04:02:25.935780] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.251 [2024-07-14 04:02:25.935946] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.251 [2024-07-14 04:02:25.935972] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.251 [2024-07-14 04:02:25.935987] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.251 [2024-07-14 04:02:25.936000] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9544000b90 00:30:07.251 [2024-07-14 04:02:25.936031] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:07.251 qpair failed and we were unable to recover it. 
00:30:07.251 [2024-07-14 04:02:25.945784] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.251 [2024-07-14 04:02:25.945949] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.251 [2024-07-14 04:02:25.945975] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.251 [2024-07-14 04:02:25.945989] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.251 [2024-07-14 04:02:25.946003] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9544000b90 00:30:07.251 [2024-07-14 04:02:25.946033] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:07.251 qpair failed and we were unable to recover it. 00:30:07.251 [2024-07-14 04:02:25.955898] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.251 [2024-07-14 04:02:25.956057] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.251 [2024-07-14 04:02:25.956083] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.251 [2024-07-14 04:02:25.956097] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.251 [2024-07-14 04:02:25.956110] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9544000b90 00:30:07.251 [2024-07-14 04:02:25.956139] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:07.251 qpair failed and we were unable to recover it. 00:30:07.251 [2024-07-14 04:02:25.965829] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.251 [2024-07-14 04:02:25.965993] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.251 [2024-07-14 04:02:25.966025] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.251 [2024-07-14 04:02:25.966040] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.251 [2024-07-14 04:02:25.966054] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9544000b90 00:30:07.251 [2024-07-14 04:02:25.966096] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:07.251 qpair failed and we were unable to recover it. 
00:30:07.251 [2024-07-14 04:02:25.975964] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.251 [2024-07-14 04:02:25.976119] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.251 [2024-07-14 04:02:25.976146] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.251 [2024-07-14 04:02:25.976160] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.251 [2024-07-14 04:02:25.976173] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9544000b90 00:30:07.251 [2024-07-14 04:02:25.976204] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:07.251 qpair failed and we were unable to recover it. 00:30:07.251 [2024-07-14 04:02:25.985931] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.251 [2024-07-14 04:02:25.986105] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.251 [2024-07-14 04:02:25.986131] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.251 [2024-07-14 04:02:25.986145] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.251 [2024-07-14 04:02:25.986158] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9544000b90 00:30:07.251 [2024-07-14 04:02:25.986187] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:07.251 qpair failed and we were unable to recover it. 00:30:07.251 [2024-07-14 04:02:25.995911] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.251 [2024-07-14 04:02:25.996067] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.251 [2024-07-14 04:02:25.996093] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.251 [2024-07-14 04:02:25.996107] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.252 [2024-07-14 04:02:25.996120] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9544000b90 00:30:07.252 [2024-07-14 04:02:25.996149] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:07.252 qpair failed and we were unable to recover it. 
00:30:07.252 [2024-07-14 04:02:26.005947] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.252 [2024-07-14 04:02:26.006095] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.252 [2024-07-14 04:02:26.006121] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.252 [2024-07-14 04:02:26.006135] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.252 [2024-07-14 04:02:26.006153] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9544000b90 00:30:07.252 [2024-07-14 04:02:26.006186] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:07.252 qpair failed and we were unable to recover it. 00:30:07.252 [2024-07-14 04:02:26.015993] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.252 [2024-07-14 04:02:26.016193] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.252 [2024-07-14 04:02:26.016219] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.252 [2024-07-14 04:02:26.016233] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.252 [2024-07-14 04:02:26.016246] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9544000b90 00:30:07.252 [2024-07-14 04:02:26.016275] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:07.252 qpair failed and we were unable to recover it. 00:30:07.252 [2024-07-14 04:02:26.026036] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.252 [2024-07-14 04:02:26.026189] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.252 [2024-07-14 04:02:26.026214] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.252 [2024-07-14 04:02:26.026228] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.252 [2024-07-14 04:02:26.026242] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9544000b90 00:30:07.252 [2024-07-14 04:02:26.026271] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:07.252 qpair failed and we were unable to recover it. 
00:30:07.252 [2024-07-14 04:02:26.036055] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.252 [2024-07-14 04:02:26.036258] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.252 [2024-07-14 04:02:26.036284] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.252 [2024-07-14 04:02:26.036298] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.252 [2024-07-14 04:02:26.036311] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9544000b90 00:30:07.252 [2024-07-14 04:02:26.036341] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:07.252 qpair failed and we were unable to recover it. 00:30:07.252 [2024-07-14 04:02:26.046063] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.252 [2024-07-14 04:02:26.046226] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.252 [2024-07-14 04:02:26.046252] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.252 [2024-07-14 04:02:26.046266] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.252 [2024-07-14 04:02:26.046279] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9544000b90 00:30:07.252 [2024-07-14 04:02:26.046308] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:07.252 qpair failed and we were unable to recover it. 00:30:07.252 [2024-07-14 04:02:26.056126] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.252 [2024-07-14 04:02:26.056348] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.252 [2024-07-14 04:02:26.056374] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.252 [2024-07-14 04:02:26.056388] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.252 [2024-07-14 04:02:26.056400] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9544000b90 00:30:07.252 [2024-07-14 04:02:26.056431] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:07.252 qpair failed and we were unable to recover it. 
00:30:07.252 [2024-07-14 04:02:26.066166] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.252 [2024-07-14 04:02:26.066326] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.252 [2024-07-14 04:02:26.066352] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.252 [2024-07-14 04:02:26.066366] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.252 [2024-07-14 04:02:26.066379] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9544000b90 00:30:07.252 [2024-07-14 04:02:26.066409] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:07.252 qpair failed and we were unable to recover it. 00:30:07.252 [2024-07-14 04:02:26.076152] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.252 [2024-07-14 04:02:26.076301] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.252 [2024-07-14 04:02:26.076326] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.252 [2024-07-14 04:02:26.076340] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.252 [2024-07-14 04:02:26.076353] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9544000b90 00:30:07.252 [2024-07-14 04:02:26.076382] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:07.252 qpair failed and we were unable to recover it. 00:30:07.252 [2024-07-14 04:02:26.086232] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.252 [2024-07-14 04:02:26.086383] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.252 [2024-07-14 04:02:26.086409] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.252 [2024-07-14 04:02:26.086423] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.252 [2024-07-14 04:02:26.086435] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9544000b90 00:30:07.252 [2024-07-14 04:02:26.086479] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:07.252 qpair failed and we were unable to recover it. 
00:30:07.252 [2024-07-14 04:02:26.096235] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.252 [2024-07-14 04:02:26.096397] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.252 [2024-07-14 04:02:26.096422] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.252 [2024-07-14 04:02:26.096437] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.252 [2024-07-14 04:02:26.096456] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9544000b90 00:30:07.252 [2024-07-14 04:02:26.096486] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:07.252 qpair failed and we were unable to recover it. 00:30:07.252 [2024-07-14 04:02:26.106259] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.252 [2024-07-14 04:02:26.106450] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.252 [2024-07-14 04:02:26.106476] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.252 [2024-07-14 04:02:26.106490] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.252 [2024-07-14 04:02:26.106503] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9544000b90 00:30:07.252 [2024-07-14 04:02:26.106546] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:07.252 qpair failed and we were unable to recover it. 00:30:07.252 [2024-07-14 04:02:26.116286] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.252 [2024-07-14 04:02:26.116440] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.252 [2024-07-14 04:02:26.116466] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.252 [2024-07-14 04:02:26.116480] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.252 [2024-07-14 04:02:26.116493] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9544000b90 00:30:07.252 [2024-07-14 04:02:26.116523] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:07.252 qpair failed and we were unable to recover it. 
00:30:07.252 [2024-07-14 04:02:26.126297] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.252 [2024-07-14 04:02:26.126449] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.252 [2024-07-14 04:02:26.126475] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.252 [2024-07-14 04:02:26.126490] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.252 [2024-07-14 04:02:26.126502] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9544000b90 00:30:07.252 [2024-07-14 04:02:26.126546] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:07.252 qpair failed and we were unable to recover it. 00:30:07.252 [2024-07-14 04:02:26.136454] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.252 [2024-07-14 04:02:26.136626] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.252 [2024-07-14 04:02:26.136652] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.252 [2024-07-14 04:02:26.136665] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.252 [2024-07-14 04:02:26.136679] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9544000b90 00:30:07.252 [2024-07-14 04:02:26.136708] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:07.253 qpair failed and we were unable to recover it. 00:30:07.253 [2024-07-14 04:02:26.146355] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.253 [2024-07-14 04:02:26.146522] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.253 [2024-07-14 04:02:26.146549] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.253 [2024-07-14 04:02:26.146563] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.253 [2024-07-14 04:02:26.146576] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9544000b90 00:30:07.253 [2024-07-14 04:02:26.146607] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:07.253 qpair failed and we were unable to recover it. 
00:30:07.253 [2024-07-14 04:02:26.156449] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.253 [2024-07-14 04:02:26.156630] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.253 [2024-07-14 04:02:26.156656] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.253 [2024-07-14 04:02:26.156671] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.253 [2024-07-14 04:02:26.156683] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9544000b90 00:30:07.253 [2024-07-14 04:02:26.156712] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:07.253 qpair failed and we were unable to recover it. 00:30:07.253 [2024-07-14 04:02:26.166433] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.253 [2024-07-14 04:02:26.166590] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.253 [2024-07-14 04:02:26.166616] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.253 [2024-07-14 04:02:26.166630] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.253 [2024-07-14 04:02:26.166643] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9544000b90 00:30:07.253 [2024-07-14 04:02:26.166674] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:07.253 qpair failed and we were unable to recover it. 00:30:07.253 [2024-07-14 04:02:26.176498] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.253 [2024-07-14 04:02:26.176690] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.253 [2024-07-14 04:02:26.176715] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.253 [2024-07-14 04:02:26.176729] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.253 [2024-07-14 04:02:26.176742] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9544000b90 00:30:07.253 [2024-07-14 04:02:26.176772] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:07.253 qpair failed and we were unable to recover it. 
00:30:07.253 [2024-07-14 04:02:26.186549] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.253 [2024-07-14 04:02:26.186751] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.253 [2024-07-14 04:02:26.186777] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.253 [2024-07-14 04:02:26.186798] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.253 [2024-07-14 04:02:26.186811] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9544000b90 00:30:07.253 [2024-07-14 04:02:26.186842] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:07.253 qpair failed and we were unable to recover it. 00:30:07.513 [2024-07-14 04:02:26.196587] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.513 [2024-07-14 04:02:26.196748] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.513 [2024-07-14 04:02:26.196775] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.513 [2024-07-14 04:02:26.196789] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.513 [2024-07-14 04:02:26.196802] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9544000b90 00:30:07.513 [2024-07-14 04:02:26.196831] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:07.513 qpair failed and we were unable to recover it. 00:30:07.513 [2024-07-14 04:02:26.206646] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.513 [2024-07-14 04:02:26.206833] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.513 [2024-07-14 04:02:26.206859] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.513 [2024-07-14 04:02:26.206881] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.513 [2024-07-14 04:02:26.206894] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9544000b90 00:30:07.513 [2024-07-14 04:02:26.206925] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:07.513 qpair failed and we were unable to recover it. 
00:30:07.513 [2024-07-14 04:02:26.216582] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.513 [2024-07-14 04:02:26.216771] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.513 [2024-07-14 04:02:26.216796] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.513 [2024-07-14 04:02:26.216810] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.513 [2024-07-14 04:02:26.216823] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9544000b90 00:30:07.513 [2024-07-14 04:02:26.216853] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:07.513 qpair failed and we were unable to recover it. 00:30:07.513 [2024-07-14 04:02:26.226586] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.513 [2024-07-14 04:02:26.226735] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.513 [2024-07-14 04:02:26.226761] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.513 [2024-07-14 04:02:26.226774] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.513 [2024-07-14 04:02:26.226787] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9544000b90 00:30:07.513 [2024-07-14 04:02:26.226815] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:07.513 qpair failed and we were unable to recover it. 00:30:07.513 [2024-07-14 04:02:26.236661] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.513 [2024-07-14 04:02:26.236863] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.513 [2024-07-14 04:02:26.236898] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.513 [2024-07-14 04:02:26.236913] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.513 [2024-07-14 04:02:26.236926] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9544000b90 00:30:07.513 [2024-07-14 04:02:26.236955] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:07.513 qpair failed and we were unable to recover it. 
00:30:07.513 [2024-07-14 04:02:26.246635] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.513 [2024-07-14 04:02:26.246787] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.513 [2024-07-14 04:02:26.246813] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.513 [2024-07-14 04:02:26.246827] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.513 [2024-07-14 04:02:26.246840] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9544000b90 00:30:07.513 [2024-07-14 04:02:26.246878] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:07.513 qpair failed and we were unable to recover it. 00:30:07.513 [2024-07-14 04:02:26.256728] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.513 [2024-07-14 04:02:26.256914] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.513 [2024-07-14 04:02:26.256939] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.513 [2024-07-14 04:02:26.256952] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.513 [2024-07-14 04:02:26.256965] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9544000b90 00:30:07.513 [2024-07-14 04:02:26.256996] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:07.513 qpair failed and we were unable to recover it. 00:30:07.513 [2024-07-14 04:02:26.266750] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.513 [2024-07-14 04:02:26.266972] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.513 [2024-07-14 04:02:26.267000] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.513 [2024-07-14 04:02:26.267014] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.514 [2024-07-14 04:02:26.267030] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9544000b90 00:30:07.514 [2024-07-14 04:02:26.267062] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:07.514 qpair failed and we were unable to recover it. 
00:30:07.514 [2024-07-14 04:02:26.276742] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.514 [2024-07-14 04:02:26.276893] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.514 [2024-07-14 04:02:26.276920] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.514 [2024-07-14 04:02:26.276941] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.514 [2024-07-14 04:02:26.276955] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9544000b90 00:30:07.514 [2024-07-14 04:02:26.276986] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:07.514 qpair failed and we were unable to recover it. 00:30:07.514 [2024-07-14 04:02:26.286760] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.514 [2024-07-14 04:02:26.286911] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.514 [2024-07-14 04:02:26.286938] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.514 [2024-07-14 04:02:26.286953] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.514 [2024-07-14 04:02:26.286965] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9544000b90 00:30:07.514 [2024-07-14 04:02:26.286995] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:07.514 qpair failed and we were unable to recover it. 00:30:07.514 [2024-07-14 04:02:26.296829] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.514 [2024-07-14 04:02:26.297035] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.514 [2024-07-14 04:02:26.297060] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.514 [2024-07-14 04:02:26.297074] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.514 [2024-07-14 04:02:26.297087] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9544000b90 00:30:07.514 [2024-07-14 04:02:26.297117] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:07.514 qpair failed and we were unable to recover it. 
00:30:07.514 [2024-07-14 04:02:26.306820] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.514 [2024-07-14 04:02:26.306994] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.514 [2024-07-14 04:02:26.307021] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.514 [2024-07-14 04:02:26.307035] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.514 [2024-07-14 04:02:26.307048] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9544000b90 00:30:07.514 [2024-07-14 04:02:26.307078] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:07.514 qpair failed and we were unable to recover it. 00:30:07.514 [2024-07-14 04:02:26.316881] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.514 [2024-07-14 04:02:26.317035] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.514 [2024-07-14 04:02:26.317060] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.514 [2024-07-14 04:02:26.317075] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.514 [2024-07-14 04:02:26.317088] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9544000b90 00:30:07.514 [2024-07-14 04:02:26.317117] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:07.514 qpair failed and we were unable to recover it. 00:30:07.514 [2024-07-14 04:02:26.326906] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.514 [2024-07-14 04:02:26.327061] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.514 [2024-07-14 04:02:26.327087] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.514 [2024-07-14 04:02:26.327101] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.514 [2024-07-14 04:02:26.327114] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9544000b90 00:30:07.514 [2024-07-14 04:02:26.327144] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:07.514 qpair failed and we were unable to recover it. 
00:30:07.514 [2024-07-14 04:02:26.336951] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.514 [2024-07-14 04:02:26.337115] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.514 [2024-07-14 04:02:26.337140] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.514 [2024-07-14 04:02:26.337154] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.514 [2024-07-14 04:02:26.337167] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9544000b90 00:30:07.514 [2024-07-14 04:02:26.337196] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:07.514 qpair failed and we were unable to recover it. 00:30:07.514 [2024-07-14 04:02:26.347003] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.514 [2024-07-14 04:02:26.347208] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.514 [2024-07-14 04:02:26.347233] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.514 [2024-07-14 04:02:26.347247] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.514 [2024-07-14 04:02:26.347260] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9544000b90 00:30:07.514 [2024-07-14 04:02:26.347290] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:07.514 qpair failed and we were unable to recover it. 00:30:07.514 [2024-07-14 04:02:26.356972] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.514 [2024-07-14 04:02:26.357127] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.514 [2024-07-14 04:02:26.357153] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.514 [2024-07-14 04:02:26.357167] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.514 [2024-07-14 04:02:26.357180] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9544000b90 00:30:07.514 [2024-07-14 04:02:26.357210] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:07.514 qpair failed and we were unable to recover it. 
00:30:07.514 [2024-07-14 04:02:26.367021] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.514 [2024-07-14 04:02:26.367225] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.514 [2024-07-14 04:02:26.367256] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.514 [2024-07-14 04:02:26.367272] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.514 [2024-07-14 04:02:26.367285] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9544000b90 00:30:07.514 [2024-07-14 04:02:26.367314] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:07.514 qpair failed and we were unable to recover it. 00:30:07.514 [2024-07-14 04:02:26.377049] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.514 [2024-07-14 04:02:26.377202] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.514 [2024-07-14 04:02:26.377228] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.514 [2024-07-14 04:02:26.377242] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.514 [2024-07-14 04:02:26.377255] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9544000b90 00:30:07.514 [2024-07-14 04:02:26.377285] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:07.514 qpair failed and we were unable to recover it. 00:30:07.514 [2024-07-14 04:02:26.387058] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.514 [2024-07-14 04:02:26.387213] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.514 [2024-07-14 04:02:26.387239] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.514 [2024-07-14 04:02:26.387253] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.514 [2024-07-14 04:02:26.387266] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9544000b90 00:30:07.514 [2024-07-14 04:02:26.387296] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:07.514 qpair failed and we were unable to recover it. 
00:30:07.514 [2024-07-14 04:02:26.397176] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.514 [2024-07-14 04:02:26.397331] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.514 [2024-07-14 04:02:26.397357] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.514 [2024-07-14 04:02:26.397371] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.514 [2024-07-14 04:02:26.397384] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9544000b90 00:30:07.514 [2024-07-14 04:02:26.397413] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:07.514 qpair failed and we were unable to recover it. 00:30:07.514 [2024-07-14 04:02:26.407101] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.514 [2024-07-14 04:02:26.407259] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.514 [2024-07-14 04:02:26.407285] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.514 [2024-07-14 04:02:26.407299] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.514 [2024-07-14 04:02:26.407311] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9544000b90 00:30:07.515 [2024-07-14 04:02:26.407347] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:07.515 qpair failed and we were unable to recover it. 00:30:07.515 [2024-07-14 04:02:26.417184] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.515 [2024-07-14 04:02:26.417340] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.515 [2024-07-14 04:02:26.417365] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.515 [2024-07-14 04:02:26.417379] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.515 [2024-07-14 04:02:26.417392] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9544000b90 00:30:07.515 [2024-07-14 04:02:26.417423] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:07.515 qpair failed and we were unable to recover it. 
00:30:07.515 [2024-07-14 04:02:26.427195] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.515 [2024-07-14 04:02:26.427371] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.515 [2024-07-14 04:02:26.427398] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.515 [2024-07-14 04:02:26.427412] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.515 [2024-07-14 04:02:26.427425] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9544000b90 00:30:07.515 [2024-07-14 04:02:26.427454] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:07.515 qpair failed and we were unable to recover it. 00:30:07.515 [2024-07-14 04:02:26.437231] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.515 [2024-07-14 04:02:26.437380] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.515 [2024-07-14 04:02:26.437406] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.515 [2024-07-14 04:02:26.437420] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.515 [2024-07-14 04:02:26.437433] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9544000b90 00:30:07.515 [2024-07-14 04:02:26.437462] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:07.515 qpair failed and we were unable to recover it. 00:30:07.515 [2024-07-14 04:02:26.447281] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.515 [2024-07-14 04:02:26.447434] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.515 [2024-07-14 04:02:26.447462] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.515 [2024-07-14 04:02:26.447476] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.515 [2024-07-14 04:02:26.447493] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9544000b90 00:30:07.515 [2024-07-14 04:02:26.447525] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:07.515 qpair failed and we were unable to recover it. 
00:30:07.776 [2024-07-14 04:02:26.457264] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.776 [2024-07-14 04:02:26.457439] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.776 [2024-07-14 04:02:26.457471] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.776 [2024-07-14 04:02:26.457486] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.776 [2024-07-14 04:02:26.457499] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9544000b90 00:30:07.776 [2024-07-14 04:02:26.457529] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:07.776 qpair failed and we were unable to recover it. 00:30:07.776 [2024-07-14 04:02:26.467362] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.776 [2024-07-14 04:02:26.467535] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.776 [2024-07-14 04:02:26.467561] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.776 [2024-07-14 04:02:26.467576] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.776 [2024-07-14 04:02:26.467589] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9544000b90 00:30:07.776 [2024-07-14 04:02:26.467618] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:07.776 qpair failed and we were unable to recover it. 00:30:07.776 [2024-07-14 04:02:26.477408] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.776 [2024-07-14 04:02:26.477559] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.777 [2024-07-14 04:02:26.477585] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.777 [2024-07-14 04:02:26.477600] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.777 [2024-07-14 04:02:26.477613] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9544000b90 00:30:07.777 [2024-07-14 04:02:26.477642] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:07.777 qpair failed and we were unable to recover it. 
00:30:07.777 [2024-07-14 04:02:26.487394] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.777 [2024-07-14 04:02:26.487582] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.777 [2024-07-14 04:02:26.487608] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.777 [2024-07-14 04:02:26.487623] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.777 [2024-07-14 04:02:26.487636] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9544000b90 00:30:07.777 [2024-07-14 04:02:26.487666] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:07.777 qpair failed and we were unable to recover it. 00:30:07.777 [2024-07-14 04:02:26.497450] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.777 [2024-07-14 04:02:26.497667] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.777 [2024-07-14 04:02:26.497693] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.777 [2024-07-14 04:02:26.497707] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.777 [2024-07-14 04:02:26.497720] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9544000b90 00:30:07.777 [2024-07-14 04:02:26.497756] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:07.777 qpair failed and we were unable to recover it. 00:30:07.777 [2024-07-14 04:02:26.507445] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.777 [2024-07-14 04:02:26.507619] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.777 [2024-07-14 04:02:26.507645] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.777 [2024-07-14 04:02:26.507659] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.777 [2024-07-14 04:02:26.507672] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9544000b90 00:30:07.777 [2024-07-14 04:02:26.507703] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:07.777 qpair failed and we were unable to recover it. 
00:30:07.777 [2024-07-14 04:02:26.517440] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.777 [2024-07-14 04:02:26.517599] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.777 [2024-07-14 04:02:26.517625] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.777 [2024-07-14 04:02:26.517639] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.777 [2024-07-14 04:02:26.517652] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9544000b90 00:30:07.777 [2024-07-14 04:02:26.517681] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:07.777 qpair failed and we were unable to recover it. 00:30:07.777 [2024-07-14 04:02:26.527538] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.777 [2024-07-14 04:02:26.527689] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.777 [2024-07-14 04:02:26.527718] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.777 [2024-07-14 04:02:26.527733] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.777 [2024-07-14 04:02:26.527746] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9544000b90 00:30:07.777 [2024-07-14 04:02:26.527777] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:07.777 qpair failed and we were unable to recover it. 00:30:07.777 [2024-07-14 04:02:26.537523] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.777 [2024-07-14 04:02:26.537678] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.777 [2024-07-14 04:02:26.537705] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.777 [2024-07-14 04:02:26.537719] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.777 [2024-07-14 04:02:26.537732] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9544000b90 00:30:07.777 [2024-07-14 04:02:26.537762] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:07.777 qpair failed and we were unable to recover it. 
00:30:07.777 [2024-07-14 04:02:26.547546] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.777 [2024-07-14 04:02:26.547702] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.777 [2024-07-14 04:02:26.547734] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.777 [2024-07-14 04:02:26.547750] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.777 [2024-07-14 04:02:26.547763] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9544000b90 00:30:07.777 [2024-07-14 04:02:26.547792] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:07.777 qpair failed and we were unable to recover it. 00:30:07.777 [2024-07-14 04:02:26.557546] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.777 [2024-07-14 04:02:26.557701] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.777 [2024-07-14 04:02:26.557726] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.777 [2024-07-14 04:02:26.557740] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.777 [2024-07-14 04:02:26.557753] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9544000b90 00:30:07.777 [2024-07-14 04:02:26.557783] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:07.777 qpair failed and we were unable to recover it. 00:30:07.777 [2024-07-14 04:02:26.567664] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.777 [2024-07-14 04:02:26.567816] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.777 [2024-07-14 04:02:26.567842] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.777 [2024-07-14 04:02:26.567856] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.777 [2024-07-14 04:02:26.567879] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9544000b90 00:30:07.777 [2024-07-14 04:02:26.567912] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:07.777 qpair failed and we were unable to recover it. 
00:30:07.777 [2024-07-14 04:02:26.577669] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.777 [2024-07-14 04:02:26.577833] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.777 [2024-07-14 04:02:26.577859] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.777 [2024-07-14 04:02:26.577883] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.777 [2024-07-14 04:02:26.577897] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9544000b90 00:30:07.777 [2024-07-14 04:02:26.577927] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:07.777 qpair failed and we were unable to recover it. 00:30:07.777 [2024-07-14 04:02:26.587665] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.777 [2024-07-14 04:02:26.587873] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.777 [2024-07-14 04:02:26.587898] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.777 [2024-07-14 04:02:26.587913] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.777 [2024-07-14 04:02:26.587932] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9544000b90 00:30:07.777 [2024-07-14 04:02:26.587962] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:07.777 qpair failed and we were unable to recover it. 00:30:07.777 [2024-07-14 04:02:26.597646] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.777 [2024-07-14 04:02:26.597793] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.777 [2024-07-14 04:02:26.597819] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.777 [2024-07-14 04:02:26.597833] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.777 [2024-07-14 04:02:26.597846] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9544000b90 00:30:07.777 [2024-07-14 04:02:26.597882] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:07.777 qpair failed and we were unable to recover it. 
00:30:07.777 [2024-07-14 04:02:26.607728] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.777 [2024-07-14 04:02:26.607880] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.777 [2024-07-14 04:02:26.607906] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.777 [2024-07-14 04:02:26.607919] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.777 [2024-07-14 04:02:26.607932] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9544000b90 00:30:07.777 [2024-07-14 04:02:26.607963] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:07.777 qpair failed and we were unable to recover it. 00:30:07.777 [2024-07-14 04:02:26.617720] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.777 [2024-07-14 04:02:26.617928] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.777 [2024-07-14 04:02:26.617953] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.777 [2024-07-14 04:02:26.617967] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.778 [2024-07-14 04:02:26.617980] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9544000b90 00:30:07.778 [2024-07-14 04:02:26.618010] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:07.778 qpair failed and we were unable to recover it. 00:30:07.778 [2024-07-14 04:02:26.627849] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.778 [2024-07-14 04:02:26.628034] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.778 [2024-07-14 04:02:26.628060] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.778 [2024-07-14 04:02:26.628075] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.778 [2024-07-14 04:02:26.628088] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9544000b90 00:30:07.778 [2024-07-14 04:02:26.628117] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:07.778 qpair failed and we were unable to recover it. 
00:30:07.778 [2024-07-14 04:02:26.637758] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.778 [2024-07-14 04:02:26.637924] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.778 [2024-07-14 04:02:26.637950] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.778 [2024-07-14 04:02:26.637964] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.778 [2024-07-14 04:02:26.637977] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9544000b90 00:30:07.778 [2024-07-14 04:02:26.638007] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:07.778 qpair failed and we were unable to recover it. 00:30:07.778 [2024-07-14 04:02:26.647803] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.778 [2024-07-14 04:02:26.648007] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.778 [2024-07-14 04:02:26.648035] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.778 [2024-07-14 04:02:26.648054] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.778 [2024-07-14 04:02:26.648068] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9544000b90 00:30:07.778 [2024-07-14 04:02:26.648098] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:07.778 qpair failed and we were unable to recover it. 00:30:07.778 [2024-07-14 04:02:26.657843] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.778 [2024-07-14 04:02:26.658003] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.778 [2024-07-14 04:02:26.658030] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.778 [2024-07-14 04:02:26.658044] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.778 [2024-07-14 04:02:26.658057] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9544000b90 00:30:07.778 [2024-07-14 04:02:26.658087] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:07.778 qpair failed and we were unable to recover it. 
00:30:07.778 [2024-07-14 04:02:26.667864] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.778 [2024-07-14 04:02:26.668038] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.778 [2024-07-14 04:02:26.668064] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.778 [2024-07-14 04:02:26.668078] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.778 [2024-07-14 04:02:26.668092] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9544000b90 00:30:07.778 [2024-07-14 04:02:26.668121] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:07.778 qpair failed and we were unable to recover it. 00:30:07.778 [2024-07-14 04:02:26.677891] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.778 [2024-07-14 04:02:26.678052] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.778 [2024-07-14 04:02:26.678078] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.778 [2024-07-14 04:02:26.678099] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.778 [2024-07-14 04:02:26.678113] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9544000b90 00:30:07.778 [2024-07-14 04:02:26.678153] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:07.778 qpair failed and we were unable to recover it. 00:30:07.778 [2024-07-14 04:02:26.687934] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.778 [2024-07-14 04:02:26.688088] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.778 [2024-07-14 04:02:26.688114] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.778 [2024-07-14 04:02:26.688129] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.778 [2024-07-14 04:02:26.688142] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9544000b90 00:30:07.778 [2024-07-14 04:02:26.688183] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:07.778 qpair failed and we were unable to recover it. 
00:30:07.778 [2024-07-14 04:02:26.697955] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.778 [2024-07-14 04:02:26.698129] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.778 [2024-07-14 04:02:26.698155] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.778 [2024-07-14 04:02:26.698169] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.778 [2024-07-14 04:02:26.698182] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9544000b90 00:30:07.778 [2024-07-14 04:02:26.698212] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:07.778 qpair failed and we were unable to recover it. 00:30:07.778 [2024-07-14 04:02:26.708061] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.778 [2024-07-14 04:02:26.708249] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.778 [2024-07-14 04:02:26.708277] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.778 [2024-07-14 04:02:26.708291] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.778 [2024-07-14 04:02:26.708304] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9544000b90 00:30:07.778 [2024-07-14 04:02:26.708335] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:07.778 qpair failed and we were unable to recover it. 00:30:08.038 [2024-07-14 04:02:26.718125] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.038 [2024-07-14 04:02:26.718328] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.038 [2024-07-14 04:02:26.718356] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.038 [2024-07-14 04:02:26.718371] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.038 [2024-07-14 04:02:26.718384] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9544000b90 00:30:08.038 [2024-07-14 04:02:26.718415] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:08.038 qpair failed and we were unable to recover it. 
00:30:08.038 [2024-07-14 04:02:26.728055] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.038 [2024-07-14 04:02:26.728252] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.038 [2024-07-14 04:02:26.728278] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.038 [2024-07-14 04:02:26.728292] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.038 [2024-07-14 04:02:26.728305] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9544000b90 00:30:08.038 [2024-07-14 04:02:26.728334] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:08.038 qpair failed and we were unable to recover it. 00:30:08.038 [2024-07-14 04:02:26.738063] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.038 [2024-07-14 04:02:26.738222] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.038 [2024-07-14 04:02:26.738248] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.038 [2024-07-14 04:02:26.738262] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.038 [2024-07-14 04:02:26.738275] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9544000b90 00:30:08.038 [2024-07-14 04:02:26.738304] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:08.038 qpair failed and we were unable to recover it. 00:30:08.038 [2024-07-14 04:02:26.748081] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.038 [2024-07-14 04:02:26.748240] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.038 [2024-07-14 04:02:26.748267] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.038 [2024-07-14 04:02:26.748281] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.038 [2024-07-14 04:02:26.748294] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9544000b90 00:30:08.038 [2024-07-14 04:02:26.748323] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:08.038 qpair failed and we were unable to recover it. 
00:30:08.038 [2024-07-14 04:02:26.758216] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.038 [2024-07-14 04:02:26.758387] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.038 [2024-07-14 04:02:26.758413] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.038 [2024-07-14 04:02:26.758427] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.038 [2024-07-14 04:02:26.758440] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9544000b90 00:30:08.038 [2024-07-14 04:02:26.758470] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:08.038 qpair failed and we were unable to recover it. 00:30:08.038 [2024-07-14 04:02:26.768171] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.038 [2024-07-14 04:02:26.768334] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.038 [2024-07-14 04:02:26.768360] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.038 [2024-07-14 04:02:26.768379] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.038 [2024-07-14 04:02:26.768393] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9544000b90 00:30:08.038 [2024-07-14 04:02:26.768435] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:08.038 qpair failed and we were unable to recover it. 00:30:08.038 [2024-07-14 04:02:26.778224] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.038 [2024-07-14 04:02:26.778427] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.038 [2024-07-14 04:02:26.778456] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.038 [2024-07-14 04:02:26.778471] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.038 [2024-07-14 04:02:26.778484] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9544000b90 00:30:08.038 [2024-07-14 04:02:26.778514] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:08.038 qpair failed and we were unable to recover it. 
00:30:08.038 [2024-07-14 04:02:26.788237] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.038 [2024-07-14 04:02:26.788405] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.038 [2024-07-14 04:02:26.788431] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.038 [2024-07-14 04:02:26.788445] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.038 [2024-07-14 04:02:26.788457] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9544000b90 00:30:08.038 [2024-07-14 04:02:26.788486] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:08.038 qpair failed and we were unable to recover it. 00:30:08.038 [2024-07-14 04:02:26.798237] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.038 [2024-07-14 04:02:26.798389] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.038 [2024-07-14 04:02:26.798414] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.038 [2024-07-14 04:02:26.798428] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.038 [2024-07-14 04:02:26.798441] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9544000b90 00:30:08.038 [2024-07-14 04:02:26.798472] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:08.038 qpair failed and we were unable to recover it. 00:30:08.038 [2024-07-14 04:02:26.808305] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.038 [2024-07-14 04:02:26.808455] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.038 [2024-07-14 04:02:26.808480] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.038 [2024-07-14 04:02:26.808495] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.038 [2024-07-14 04:02:26.808507] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9544000b90 00:30:08.038 [2024-07-14 04:02:26.808550] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:08.038 qpair failed and we were unable to recover it. 
00:30:08.038 [2024-07-14 04:02:26.818337] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.038 [2024-07-14 04:02:26.818496] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.038 [2024-07-14 04:02:26.818522] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.038 [2024-07-14 04:02:26.818536] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.038 [2024-07-14 04:02:26.818549] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9544000b90 00:30:08.038 [2024-07-14 04:02:26.818579] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:08.038 qpair failed and we were unable to recover it. 00:30:08.038 [2024-07-14 04:02:26.828325] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.038 [2024-07-14 04:02:26.828472] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.038 [2024-07-14 04:02:26.828497] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.038 [2024-07-14 04:02:26.828511] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.038 [2024-07-14 04:02:26.828524] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9544000b90 00:30:08.038 [2024-07-14 04:02:26.828554] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:08.038 qpair failed and we were unable to recover it. 00:30:08.038 [2024-07-14 04:02:26.838339] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.038 [2024-07-14 04:02:26.838502] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.038 [2024-07-14 04:02:26.838526] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.038 [2024-07-14 04:02:26.838539] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.038 [2024-07-14 04:02:26.838551] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9544000b90 00:30:08.038 [2024-07-14 04:02:26.838580] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:08.038 qpair failed and we were unable to recover it. 
00:30:08.038 [2024-07-14 04:02:26.848434] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.038 [2024-07-14 04:02:26.848590] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.038 [2024-07-14 04:02:26.848615] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.039 [2024-07-14 04:02:26.848630] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.039 [2024-07-14 04:02:26.848642] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9544000b90 00:30:08.039 [2024-07-14 04:02:26.848673] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:08.039 qpair failed and we were unable to recover it. 00:30:08.039 [2024-07-14 04:02:26.858444] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.039 [2024-07-14 04:02:26.858596] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.039 [2024-07-14 04:02:26.858626] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.039 [2024-07-14 04:02:26.858641] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.039 [2024-07-14 04:02:26.858654] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9544000b90 00:30:08.039 [2024-07-14 04:02:26.858684] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:08.039 qpair failed and we were unable to recover it. 00:30:08.039 [2024-07-14 04:02:26.868440] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.039 [2024-07-14 04:02:26.868609] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.039 [2024-07-14 04:02:26.868636] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.039 [2024-07-14 04:02:26.868651] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.039 [2024-07-14 04:02:26.868664] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9544000b90 00:30:08.039 [2024-07-14 04:02:26.868695] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:08.039 qpair failed and we were unable to recover it. 
00:30:08.039 [2024-07-14 04:02:26.878496] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.039 [2024-07-14 04:02:26.878697] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.039 [2024-07-14 04:02:26.878723] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.039 [2024-07-14 04:02:26.878737] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.039 [2024-07-14 04:02:26.878750] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9544000b90 00:30:08.039 [2024-07-14 04:02:26.878780] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:08.039 qpair failed and we were unable to recover it. 00:30:08.039 [2024-07-14 04:02:26.888525] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.039 [2024-07-14 04:02:26.888679] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.039 [2024-07-14 04:02:26.888705] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.039 [2024-07-14 04:02:26.888719] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.039 [2024-07-14 04:02:26.888732] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9544000b90 00:30:08.039 [2024-07-14 04:02:26.888762] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:08.039 qpair failed and we were unable to recover it. 00:30:08.039 [2024-07-14 04:02:26.898584] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.039 [2024-07-14 04:02:26.898775] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.039 [2024-07-14 04:02:26.898801] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.039 [2024-07-14 04:02:26.898815] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.039 [2024-07-14 04:02:26.898828] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9544000b90 00:30:08.039 [2024-07-14 04:02:26.898882] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:08.039 qpair failed and we were unable to recover it. 
00:30:08.039 [2024-07-14 04:02:26.908568] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.039 [2024-07-14 04:02:26.908723] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.039 [2024-07-14 04:02:26.908749] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.039 [2024-07-14 04:02:26.908763] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.039 [2024-07-14 04:02:26.908776] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9544000b90 00:30:08.039 [2024-07-14 04:02:26.908807] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:08.039 qpair failed and we were unable to recover it. 00:30:08.039 [2024-07-14 04:02:26.918575] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.039 [2024-07-14 04:02:26.918740] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.039 [2024-07-14 04:02:26.918766] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.039 [2024-07-14 04:02:26.918780] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.039 [2024-07-14 04:02:26.918793] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9544000b90 00:30:08.039 [2024-07-14 04:02:26.918834] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:08.039 qpair failed and we were unable to recover it. 00:30:08.039 [2024-07-14 04:02:26.928669] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.039 [2024-07-14 04:02:26.928862] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.039 [2024-07-14 04:02:26.928896] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.039 [2024-07-14 04:02:26.928911] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.039 [2024-07-14 04:02:26.928924] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9544000b90 00:30:08.039 [2024-07-14 04:02:26.928956] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:08.039 qpair failed and we were unable to recover it. 
00:30:08.039 [2024-07-14 04:02:26.938636] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.039 [2024-07-14 04:02:26.938786] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.039 [2024-07-14 04:02:26.938811] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.039 [2024-07-14 04:02:26.938825] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.039 [2024-07-14 04:02:26.938838] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9544000b90 00:30:08.039 [2024-07-14 04:02:26.938874] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:08.039 qpair failed and we were unable to recover it. 00:30:08.039 [2024-07-14 04:02:26.948667] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.039 [2024-07-14 04:02:26.948824] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.039 [2024-07-14 04:02:26.948855] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.039 [2024-07-14 04:02:26.948879] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.039 [2024-07-14 04:02:26.948894] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9544000b90 00:30:08.039 [2024-07-14 04:02:26.948924] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:08.039 qpair failed and we were unable to recover it. 00:30:08.039 [2024-07-14 04:02:26.958701] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.039 [2024-07-14 04:02:26.958893] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.039 [2024-07-14 04:02:26.958919] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.039 [2024-07-14 04:02:26.958934] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.039 [2024-07-14 04:02:26.958949] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9544000b90 00:30:08.039 [2024-07-14 04:02:26.958978] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:08.039 qpair failed and we were unable to recover it. 
00:30:08.039 [2024-07-14 04:02:26.968794] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.039 [2024-07-14 04:02:26.968951] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.039 [2024-07-14 04:02:26.968977] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.039 [2024-07-14 04:02:26.968991] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.039 [2024-07-14 04:02:26.969005] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9544000b90 00:30:08.039 [2024-07-14 04:02:26.969035] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:08.039 qpair failed and we were unable to recover it. 00:30:08.298 [2024-07-14 04:02:26.978799] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.298 [2024-07-14 04:02:26.978986] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.298 [2024-07-14 04:02:26.979013] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.298 [2024-07-14 04:02:26.979027] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.298 [2024-07-14 04:02:26.979040] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9544000b90 00:30:08.298 [2024-07-14 04:02:26.979071] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:08.298 qpair failed and we were unable to recover it. 00:30:08.298 [2024-07-14 04:02:26.988792] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.298 [2024-07-14 04:02:26.988997] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.298 [2024-07-14 04:02:26.989025] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.298 [2024-07-14 04:02:26.989039] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.298 [2024-07-14 04:02:26.989053] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9544000b90 00:30:08.298 [2024-07-14 04:02:26.989089] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:08.298 qpair failed and we were unable to recover it. 
00:30:08.298 [2024-07-14 04:02:26.998812] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.298 [2024-07-14 04:02:26.998978] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.298 [2024-07-14 04:02:26.999005] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.298 [2024-07-14 04:02:26.999019] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.298 [2024-07-14 04:02:26.999032] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9544000b90 00:30:08.298 [2024-07-14 04:02:26.999074] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:08.298 qpair failed and we were unable to recover it. 00:30:08.298 [2024-07-14 04:02:27.008858] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.299 [2024-07-14 04:02:27.009033] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.299 [2024-07-14 04:02:27.009060] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.299 [2024-07-14 04:02:27.009074] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.299 [2024-07-14 04:02:27.009087] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9544000b90 00:30:08.299 [2024-07-14 04:02:27.009117] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:08.299 qpair failed and we were unable to recover it. 00:30:08.299 [2024-07-14 04:02:27.018933] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.299 [2024-07-14 04:02:27.019109] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.299 [2024-07-14 04:02:27.019135] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.299 [2024-07-14 04:02:27.019150] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.299 [2024-07-14 04:02:27.019163] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9544000b90 00:30:08.299 [2024-07-14 04:02:27.019193] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:08.299 qpair failed and we were unable to recover it. 
00:30:08.299 [2024-07-14 04:02:27.028878] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.299 [2024-07-14 04:02:27.029035] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.299 [2024-07-14 04:02:27.029061] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.299 [2024-07-14 04:02:27.029076] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.299 [2024-07-14 04:02:27.029090] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9544000b90 00:30:08.299 [2024-07-14 04:02:27.029121] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:08.299 qpair failed and we were unable to recover it. 00:30:08.299 [2024-07-14 04:02:27.039008] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.299 [2024-07-14 04:02:27.039184] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.299 [2024-07-14 04:02:27.039215] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.299 [2024-07-14 04:02:27.039231] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.299 [2024-07-14 04:02:27.039245] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9544000b90 00:30:08.299 [2024-07-14 04:02:27.039275] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:08.299 qpair failed and we were unable to recover it. 00:30:08.299 [2024-07-14 04:02:27.048933] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.299 [2024-07-14 04:02:27.049091] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.299 [2024-07-14 04:02:27.049116] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.299 [2024-07-14 04:02:27.049130] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.299 [2024-07-14 04:02:27.049143] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9544000b90 00:30:08.299 [2024-07-14 04:02:27.049172] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:08.299 qpair failed and we were unable to recover it. 
00:30:08.299 [2024-07-14 04:02:27.058982] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.299 [2024-07-14 04:02:27.059184] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.299 [2024-07-14 04:02:27.059209] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.299 [2024-07-14 04:02:27.059223] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.299 [2024-07-14 04:02:27.059236] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9544000b90 00:30:08.299 [2024-07-14 04:02:27.059267] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:08.299 qpair failed and we were unable to recover it. 00:30:08.299 [2024-07-14 04:02:27.068988] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.299 [2024-07-14 04:02:27.069154] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.299 [2024-07-14 04:02:27.069179] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.299 [2024-07-14 04:02:27.069193] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.299 [2024-07-14 04:02:27.069206] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9544000b90 00:30:08.299 [2024-07-14 04:02:27.069236] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:08.299 qpair failed and we were unable to recover it. 00:30:08.299 [2024-07-14 04:02:27.079102] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.299 [2024-07-14 04:02:27.079283] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.299 [2024-07-14 04:02:27.079309] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.299 [2024-07-14 04:02:27.079322] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.299 [2024-07-14 04:02:27.079341] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9544000b90 00:30:08.299 [2024-07-14 04:02:27.079372] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:08.299 qpair failed and we were unable to recover it. 
00:30:08.299 [2024-07-14 04:02:27.089053] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.299 [2024-07-14 04:02:27.089214] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.299 [2024-07-14 04:02:27.089239] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.299 [2024-07-14 04:02:27.089253] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.299 [2024-07-14 04:02:27.089265] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9544000b90 00:30:08.299 [2024-07-14 04:02:27.089295] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:08.299 qpair failed and we were unable to recover it. 00:30:08.299 [2024-07-14 04:02:27.099246] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.299 [2024-07-14 04:02:27.099404] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.299 [2024-07-14 04:02:27.099430] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.299 [2024-07-14 04:02:27.099444] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.299 [2024-07-14 04:02:27.099457] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9544000b90 00:30:08.299 [2024-07-14 04:02:27.099488] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:08.299 qpair failed and we were unable to recover it. 00:30:08.299 [2024-07-14 04:02:27.109193] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.299 [2024-07-14 04:02:27.109365] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.299 [2024-07-14 04:02:27.109391] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.299 [2024-07-14 04:02:27.109405] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.299 [2024-07-14 04:02:27.109418] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9544000b90 00:30:08.299 [2024-07-14 04:02:27.109447] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:08.299 qpair failed and we were unable to recover it. 
00:30:08.299 [2024-07-14 04:02:27.119178] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.299 [2024-07-14 04:02:27.119329] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.299 [2024-07-14 04:02:27.119355] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.299 [2024-07-14 04:02:27.119369] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.299 [2024-07-14 04:02:27.119382] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9544000b90 00:30:08.299 [2024-07-14 04:02:27.119411] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:08.299 qpair failed and we were unable to recover it. 00:30:08.299 [2024-07-14 04:02:27.129228] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.299 [2024-07-14 04:02:27.129423] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.299 [2024-07-14 04:02:27.129449] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.299 [2024-07-14 04:02:27.129463] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.299 [2024-07-14 04:02:27.129476] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9544000b90 00:30:08.299 [2024-07-14 04:02:27.129518] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:08.299 qpair failed and we were unable to recover it. 00:30:08.299 [2024-07-14 04:02:27.139242] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.299 [2024-07-14 04:02:27.139399] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.299 [2024-07-14 04:02:27.139426] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.299 [2024-07-14 04:02:27.139440] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.299 [2024-07-14 04:02:27.139453] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9544000b90 00:30:08.299 [2024-07-14 04:02:27.139482] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:08.299 qpair failed and we were unable to recover it. 
00:30:08.299 [2024-07-14 04:02:27.149244] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.299 [2024-07-14 04:02:27.149409] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.299 [2024-07-14 04:02:27.149436] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.299 [2024-07-14 04:02:27.149450] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.300 [2024-07-14 04:02:27.149463] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9544000b90 00:30:08.300 [2024-07-14 04:02:27.149492] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:08.300 qpair failed and we were unable to recover it. 00:30:08.300 [2024-07-14 04:02:27.159254] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.300 [2024-07-14 04:02:27.159414] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.300 [2024-07-14 04:02:27.159440] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.300 [2024-07-14 04:02:27.159453] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.300 [2024-07-14 04:02:27.159466] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9544000b90 00:30:08.300 [2024-07-14 04:02:27.159496] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:08.300 qpair failed and we were unable to recover it. 00:30:08.300 [2024-07-14 04:02:27.169337] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.300 [2024-07-14 04:02:27.169529] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.300 [2024-07-14 04:02:27.169555] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.300 [2024-07-14 04:02:27.169569] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.300 [2024-07-14 04:02:27.169588] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9544000b90 00:30:08.300 [2024-07-14 04:02:27.169618] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:08.300 qpair failed and we were unable to recover it. 
00:30:08.300 [2024-07-14 04:02:27.179379] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.300 [2024-07-14 04:02:27.179537] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.300 [2024-07-14 04:02:27.179563] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.300 [2024-07-14 04:02:27.179577] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.300 [2024-07-14 04:02:27.179590] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9544000b90 00:30:08.300 [2024-07-14 04:02:27.179620] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:08.300 qpair failed and we were unable to recover it. 00:30:08.300 [2024-07-14 04:02:27.189436] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.300 [2024-07-14 04:02:27.189587] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.300 [2024-07-14 04:02:27.189613] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.300 [2024-07-14 04:02:27.189627] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.300 [2024-07-14 04:02:27.189641] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9544000b90 00:30:08.300 [2024-07-14 04:02:27.189670] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:08.300 qpair failed and we were unable to recover it. 00:30:08.300 [2024-07-14 04:02:27.199372] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.300 [2024-07-14 04:02:27.199521] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.300 [2024-07-14 04:02:27.199547] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.300 [2024-07-14 04:02:27.199561] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.300 [2024-07-14 04:02:27.199574] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9544000b90 00:30:08.300 [2024-07-14 04:02:27.199603] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:08.300 qpair failed and we were unable to recover it. 
00:30:08.300 [2024-07-14 04:02:27.209423] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.300 [2024-07-14 04:02:27.209577] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.300 [2024-07-14 04:02:27.209602] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.300 [2024-07-14 04:02:27.209616] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.300 [2024-07-14 04:02:27.209628] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9544000b90 00:30:08.300 [2024-07-14 04:02:27.209658] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:08.300 qpair failed and we were unable to recover it. 00:30:08.300 [2024-07-14 04:02:27.219431] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.300 [2024-07-14 04:02:27.219599] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.300 [2024-07-14 04:02:27.219624] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.300 [2024-07-14 04:02:27.219639] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.300 [2024-07-14 04:02:27.219651] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9544000b90 00:30:08.300 [2024-07-14 04:02:27.219683] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:08.300 qpair failed and we were unable to recover it. 00:30:08.300 [2024-07-14 04:02:27.229476] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.300 [2024-07-14 04:02:27.229633] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.300 [2024-07-14 04:02:27.229659] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.300 [2024-07-14 04:02:27.229673] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.300 [2024-07-14 04:02:27.229686] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9544000b90 00:30:08.300 [2024-07-14 04:02:27.229716] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:08.300 qpair failed and we were unable to recover it. 
00:30:08.559 [2024-07-14 04:02:27.239474] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.559 [2024-07-14 04:02:27.239627] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.559 [2024-07-14 04:02:27.239653] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.559 [2024-07-14 04:02:27.239668] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.559 [2024-07-14 04:02:27.239681] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9544000b90 00:30:08.559 [2024-07-14 04:02:27.239711] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:08.559 qpair failed and we were unable to recover it. 00:30:08.559 [2024-07-14 04:02:27.249507] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.559 [2024-07-14 04:02:27.249661] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.559 [2024-07-14 04:02:27.249688] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.559 [2024-07-14 04:02:27.249702] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.559 [2024-07-14 04:02:27.249715] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9544000b90 00:30:08.559 [2024-07-14 04:02:27.249746] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:08.559 qpair failed and we were unable to recover it. 00:30:08.559 [2024-07-14 04:02:27.259566] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.559 [2024-07-14 04:02:27.259722] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.559 [2024-07-14 04:02:27.259754] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.559 [2024-07-14 04:02:27.259776] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.560 [2024-07-14 04:02:27.259791] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:08.560 [2024-07-14 04:02:27.259823] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:08.560 qpair failed and we were unable to recover it. 
00:30:08.560 [2024-07-14 04:02:27.269579] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.560 [2024-07-14 04:02:27.269738] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.560 [2024-07-14 04:02:27.269766] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.560 [2024-07-14 04:02:27.269781] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.560 [2024-07-14 04:02:27.269794] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:08.560 [2024-07-14 04:02:27.269824] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:08.560 qpair failed and we were unable to recover it. 00:30:08.560 [2024-07-14 04:02:27.279632] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.560 [2024-07-14 04:02:27.279787] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.560 [2024-07-14 04:02:27.279814] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.560 [2024-07-14 04:02:27.279829] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.560 [2024-07-14 04:02:27.279843] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:08.560 [2024-07-14 04:02:27.279881] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:08.560 qpair failed and we were unable to recover it. 00:30:08.560 [2024-07-14 04:02:27.289677] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.560 [2024-07-14 04:02:27.289854] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.560 [2024-07-14 04:02:27.289892] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.560 [2024-07-14 04:02:27.289908] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.560 [2024-07-14 04:02:27.289921] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:08.560 [2024-07-14 04:02:27.289952] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:08.560 qpair failed and we were unable to recover it. 
00:30:08.560 [2024-07-14 04:02:27.299683] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.560 [2024-07-14 04:02:27.299836] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.560 [2024-07-14 04:02:27.299862] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.560 [2024-07-14 04:02:27.299887] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.560 [2024-07-14 04:02:27.299901] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:08.560 [2024-07-14 04:02:27.299933] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:08.560 qpair failed and we were unable to recover it. 00:30:08.560 [2024-07-14 04:02:27.309824] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.560 [2024-07-14 04:02:27.309979] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.560 [2024-07-14 04:02:27.310007] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.560 [2024-07-14 04:02:27.310022] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.560 [2024-07-14 04:02:27.310035] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:08.560 [2024-07-14 04:02:27.310078] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:08.560 qpair failed and we were unable to recover it. 00:30:08.560 [2024-07-14 04:02:27.319710] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.560 [2024-07-14 04:02:27.319873] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.560 [2024-07-14 04:02:27.319900] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.560 [2024-07-14 04:02:27.319915] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.560 [2024-07-14 04:02:27.319928] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:08.560 [2024-07-14 04:02:27.319959] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:08.560 qpair failed and we were unable to recover it. 
00:30:08.560 [2024-07-14 04:02:27.329796] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.560 [2024-07-14 04:02:27.329983] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.560 [2024-07-14 04:02:27.330010] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.560 [2024-07-14 04:02:27.330025] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.560 [2024-07-14 04:02:27.330037] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:08.560 [2024-07-14 04:02:27.330068] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:08.560 qpair failed and we were unable to recover it. 00:30:08.560 [2024-07-14 04:02:27.339800] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.560 [2024-07-14 04:02:27.339975] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.560 [2024-07-14 04:02:27.340002] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.560 [2024-07-14 04:02:27.340017] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.560 [2024-07-14 04:02:27.340029] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:08.560 [2024-07-14 04:02:27.340061] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:08.560 qpair failed and we were unable to recover it. 00:30:08.560 [2024-07-14 04:02:27.349793] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.560 [2024-07-14 04:02:27.349956] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.560 [2024-07-14 04:02:27.349983] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.560 [2024-07-14 04:02:27.350004] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.560 [2024-07-14 04:02:27.350018] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:08.560 [2024-07-14 04:02:27.350061] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:08.560 qpair failed and we were unable to recover it. 
00:30:08.560 [2024-07-14 04:02:27.359843] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.560 [2024-07-14 04:02:27.360019] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.560 [2024-07-14 04:02:27.360046] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.560 [2024-07-14 04:02:27.360061] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.560 [2024-07-14 04:02:27.360074] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:08.560 [2024-07-14 04:02:27.360104] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:08.560 qpair failed and we were unable to recover it. 00:30:08.560 [2024-07-14 04:02:27.369940] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.560 [2024-07-14 04:02:27.370093] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.560 [2024-07-14 04:02:27.370119] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.560 [2024-07-14 04:02:27.370133] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.560 [2024-07-14 04:02:27.370145] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:08.560 [2024-07-14 04:02:27.370176] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:08.560 qpair failed and we were unable to recover it. 00:30:08.560 [2024-07-14 04:02:27.379909] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.560 [2024-07-14 04:02:27.380067] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.561 [2024-07-14 04:02:27.380094] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.561 [2024-07-14 04:02:27.380108] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.561 [2024-07-14 04:02:27.380121] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:08.561 [2024-07-14 04:02:27.380151] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:08.561 qpair failed and we were unable to recover it. 
00:30:08.561 [2024-07-14 04:02:27.389947] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.561 [2024-07-14 04:02:27.390102] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.561 [2024-07-14 04:02:27.390129] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.561 [2024-07-14 04:02:27.390144] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.561 [2024-07-14 04:02:27.390157] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:08.561 [2024-07-14 04:02:27.390188] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:08.561 qpair failed and we were unable to recover it. 00:30:08.561 [2024-07-14 04:02:27.399962] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.561 [2024-07-14 04:02:27.400112] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.561 [2024-07-14 04:02:27.400138] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.561 [2024-07-14 04:02:27.400153] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.561 [2024-07-14 04:02:27.400166] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:08.561 [2024-07-14 04:02:27.400198] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:08.561 qpair failed and we were unable to recover it. 00:30:08.561 [2024-07-14 04:02:27.409991] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.561 [2024-07-14 04:02:27.410150] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.561 [2024-07-14 04:02:27.410178] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.561 [2024-07-14 04:02:27.410192] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.561 [2024-07-14 04:02:27.410205] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:08.561 [2024-07-14 04:02:27.410235] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:08.561 qpair failed and we were unable to recover it. 
00:30:08.561 [2024-07-14 04:02:27.420018] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.561 [2024-07-14 04:02:27.420174] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.561 [2024-07-14 04:02:27.420199] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.561 [2024-07-14 04:02:27.420214] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.561 [2024-07-14 04:02:27.420228] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:08.561 [2024-07-14 04:02:27.420259] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:08.561 qpair failed and we were unable to recover it. 00:30:08.561 [2024-07-14 04:02:27.430042] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.561 [2024-07-14 04:02:27.430193] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.561 [2024-07-14 04:02:27.430219] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.561 [2024-07-14 04:02:27.430234] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.561 [2024-07-14 04:02:27.430247] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:08.561 [2024-07-14 04:02:27.430276] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:08.561 qpair failed and we were unable to recover it. 00:30:08.561 [2024-07-14 04:02:27.440103] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.561 [2024-07-14 04:02:27.440253] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.561 [2024-07-14 04:02:27.440284] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.561 [2024-07-14 04:02:27.440299] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.561 [2024-07-14 04:02:27.440312] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:08.561 [2024-07-14 04:02:27.440342] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:08.561 qpair failed and we were unable to recover it. 
00:30:08.561 [2024-07-14 04:02:27.450178] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.561 [2024-07-14 04:02:27.450330] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.561 [2024-07-14 04:02:27.450356] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.561 [2024-07-14 04:02:27.450371] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.561 [2024-07-14 04:02:27.450384] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:08.561 [2024-07-14 04:02:27.450413] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:08.561 qpair failed and we were unable to recover it. 00:30:08.561 [2024-07-14 04:02:27.460133] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.561 [2024-07-14 04:02:27.460302] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.561 [2024-07-14 04:02:27.460328] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.561 [2024-07-14 04:02:27.460343] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.561 [2024-07-14 04:02:27.460355] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:08.561 [2024-07-14 04:02:27.460385] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:08.561 qpair failed and we were unable to recover it. 00:30:08.561 [2024-07-14 04:02:27.470258] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.561 [2024-07-14 04:02:27.470414] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.561 [2024-07-14 04:02:27.470440] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.561 [2024-07-14 04:02:27.470454] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.561 [2024-07-14 04:02:27.470468] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:08.561 [2024-07-14 04:02:27.470497] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:08.561 qpair failed and we were unable to recover it. 
00:30:08.561 [2024-07-14 04:02:27.480185] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.561 [2024-07-14 04:02:27.480349] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.561 [2024-07-14 04:02:27.480375] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.561 [2024-07-14 04:02:27.480390] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.561 [2024-07-14 04:02:27.480403] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:08.561 [2024-07-14 04:02:27.480443] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:08.561 qpair failed and we were unable to recover it. 00:30:08.561 [2024-07-14 04:02:27.490224] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.561 [2024-07-14 04:02:27.490380] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.561 [2024-07-14 04:02:27.490405] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.561 [2024-07-14 04:02:27.490420] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.561 [2024-07-14 04:02:27.490432] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:08.561 [2024-07-14 04:02:27.490461] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:08.561 qpair failed and we were unable to recover it. 00:30:08.822 [2024-07-14 04:02:27.500260] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.822 [2024-07-14 04:02:27.500455] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.822 [2024-07-14 04:02:27.500480] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.822 [2024-07-14 04:02:27.500495] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.822 [2024-07-14 04:02:27.500509] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:08.822 [2024-07-14 04:02:27.500538] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:08.822 qpair failed and we were unable to recover it. 
00:30:08.822 [2024-07-14 04:02:27.510300] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.822 [2024-07-14 04:02:27.510452] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.822 [2024-07-14 04:02:27.510479] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.823 [2024-07-14 04:02:27.510495] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.823 [2024-07-14 04:02:27.510510] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:08.823 [2024-07-14 04:02:27.510541] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:08.823 qpair failed and we were unable to recover it. 00:30:08.823 [2024-07-14 04:02:27.520311] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.823 [2024-07-14 04:02:27.520513] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.823 [2024-07-14 04:02:27.520540] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.823 [2024-07-14 04:02:27.520555] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.823 [2024-07-14 04:02:27.520568] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:08.823 [2024-07-14 04:02:27.520598] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:08.823 qpair failed and we were unable to recover it. 00:30:08.823 [2024-07-14 04:02:27.530310] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.823 [2024-07-14 04:02:27.530459] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.823 [2024-07-14 04:02:27.530491] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.823 [2024-07-14 04:02:27.530506] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.823 [2024-07-14 04:02:27.530519] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:08.823 [2024-07-14 04:02:27.530549] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:08.823 qpair failed and we were unable to recover it. 
00:30:08.823 [2024-07-14 04:02:27.540366] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.823 [2024-07-14 04:02:27.540525] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.823 [2024-07-14 04:02:27.540551] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.823 [2024-07-14 04:02:27.540566] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.823 [2024-07-14 04:02:27.540580] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:08.823 [2024-07-14 04:02:27.540611] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:08.823 qpair failed and we were unable to recover it. 00:30:08.823 [2024-07-14 04:02:27.550400] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.823 [2024-07-14 04:02:27.550559] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.823 [2024-07-14 04:02:27.550585] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.823 [2024-07-14 04:02:27.550600] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.823 [2024-07-14 04:02:27.550613] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:08.823 [2024-07-14 04:02:27.550645] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:08.823 qpair failed and we were unable to recover it. 00:30:08.823 [2024-07-14 04:02:27.560382] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.823 [2024-07-14 04:02:27.560542] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.823 [2024-07-14 04:02:27.560568] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.823 [2024-07-14 04:02:27.560583] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.823 [2024-07-14 04:02:27.560596] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:08.823 [2024-07-14 04:02:27.560627] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:08.823 qpair failed and we were unable to recover it. 
00:30:08.823 [2024-07-14 04:02:27.570542] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.823 [2024-07-14 04:02:27.570701] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.823 [2024-07-14 04:02:27.570727] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.823 [2024-07-14 04:02:27.570742] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.823 [2024-07-14 04:02:27.570761] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:08.823 [2024-07-14 04:02:27.570804] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:08.823 qpair failed and we were unable to recover it. 00:30:08.823 [2024-07-14 04:02:27.580504] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.823 [2024-07-14 04:02:27.580672] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.823 [2024-07-14 04:02:27.580701] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.823 [2024-07-14 04:02:27.580716] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.823 [2024-07-14 04:02:27.580729] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:08.823 [2024-07-14 04:02:27.580759] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:08.823 qpair failed and we were unable to recover it. 00:30:08.823 [2024-07-14 04:02:27.590526] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.823 [2024-07-14 04:02:27.590679] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.823 [2024-07-14 04:02:27.590706] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.823 [2024-07-14 04:02:27.590721] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.823 [2024-07-14 04:02:27.590733] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:08.823 [2024-07-14 04:02:27.590764] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:08.823 qpair failed and we were unable to recover it. 
00:30:08.823 [2024-07-14 04:02:27.600529] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.823 [2024-07-14 04:02:27.600684] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.823 [2024-07-14 04:02:27.600712] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.823 [2024-07-14 04:02:27.600727] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.823 [2024-07-14 04:02:27.600740] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:08.823 [2024-07-14 04:02:27.600781] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:08.823 qpair failed and we were unable to recover it. 00:30:08.823 [2024-07-14 04:02:27.610551] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.823 [2024-07-14 04:02:27.610698] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.823 [2024-07-14 04:02:27.610725] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.823 [2024-07-14 04:02:27.610739] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.823 [2024-07-14 04:02:27.610752] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:08.823 [2024-07-14 04:02:27.610783] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:08.823 qpair failed and we were unable to recover it. 00:30:08.823 [2024-07-14 04:02:27.620581] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.823 [2024-07-14 04:02:27.620746] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.824 [2024-07-14 04:02:27.620772] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.824 [2024-07-14 04:02:27.620787] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.824 [2024-07-14 04:02:27.620800] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:08.824 [2024-07-14 04:02:27.620830] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:08.824 qpair failed and we were unable to recover it. 
00:30:08.824 [2024-07-14 04:02:27.630595] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.824 [2024-07-14 04:02:27.630747] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.824 [2024-07-14 04:02:27.630773] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.824 [2024-07-14 04:02:27.630788] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.824 [2024-07-14 04:02:27.630801] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:08.824 [2024-07-14 04:02:27.630842] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:08.824 qpair failed and we were unable to recover it. 00:30:08.824 [2024-07-14 04:02:27.640631] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.824 [2024-07-14 04:02:27.640827] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.824 [2024-07-14 04:02:27.640853] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.824 [2024-07-14 04:02:27.640875] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.824 [2024-07-14 04:02:27.640890] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:08.824 [2024-07-14 04:02:27.640920] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:08.824 qpair failed and we were unable to recover it. 00:30:08.824 [2024-07-14 04:02:27.650686] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.824 [2024-07-14 04:02:27.650836] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.824 [2024-07-14 04:02:27.650862] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.824 [2024-07-14 04:02:27.650888] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.824 [2024-07-14 04:02:27.650902] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:08.824 [2024-07-14 04:02:27.650946] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:08.824 qpair failed and we were unable to recover it. 
00:30:08.824 [2024-07-14 04:02:27.660716] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.824 [2024-07-14 04:02:27.660908] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.824 [2024-07-14 04:02:27.660933] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.824 [2024-07-14 04:02:27.660947] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.824 [2024-07-14 04:02:27.660965] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:08.824 [2024-07-14 04:02:27.660997] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:08.824 qpair failed and we were unable to recover it. 00:30:08.824 [2024-07-14 04:02:27.670816] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.824 [2024-07-14 04:02:27.670971] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.824 [2024-07-14 04:02:27.670997] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.824 [2024-07-14 04:02:27.671011] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.824 [2024-07-14 04:02:27.671024] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:08.824 [2024-07-14 04:02:27.671054] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:08.824 qpair failed and we were unable to recover it. 00:30:08.824 [2024-07-14 04:02:27.680749] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.824 [2024-07-14 04:02:27.680909] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.824 [2024-07-14 04:02:27.680935] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.824 [2024-07-14 04:02:27.680950] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.824 [2024-07-14 04:02:27.680963] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:08.824 [2024-07-14 04:02:27.680992] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:08.824 qpair failed and we were unable to recover it. 
00:30:08.824 [2024-07-14 04:02:27.690809] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.824 [2024-07-14 04:02:27.690988] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.824 [2024-07-14 04:02:27.691014] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.824 [2024-07-14 04:02:27.691028] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.824 [2024-07-14 04:02:27.691041] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:08.824 [2024-07-14 04:02:27.691071] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:08.824 qpair failed and we were unable to recover it. 00:30:08.824 [2024-07-14 04:02:27.700826] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.824 [2024-07-14 04:02:27.701017] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.824 [2024-07-14 04:02:27.701043] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.824 [2024-07-14 04:02:27.701057] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.824 [2024-07-14 04:02:27.701070] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:08.824 [2024-07-14 04:02:27.701099] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:08.824 qpair failed and we were unable to recover it. 00:30:08.824 [2024-07-14 04:02:27.710835] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.824 [2024-07-14 04:02:27.711000] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.824 [2024-07-14 04:02:27.711027] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.824 [2024-07-14 04:02:27.711041] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.824 [2024-07-14 04:02:27.711055] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:08.824 [2024-07-14 04:02:27.711085] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:08.824 qpair failed and we were unable to recover it. 
00:30:08.824 [2024-07-14 04:02:27.720902] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.824 [2024-07-14 04:02:27.721052] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.824 [2024-07-14 04:02:27.721078] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.824 [2024-07-14 04:02:27.721092] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.824 [2024-07-14 04:02:27.721105] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:08.824 [2024-07-14 04:02:27.721136] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:08.824 qpair failed and we were unable to recover it. 00:30:08.824 [2024-07-14 04:02:27.730917] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.824 [2024-07-14 04:02:27.731065] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.824 [2024-07-14 04:02:27.731091] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.824 [2024-07-14 04:02:27.731105] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.824 [2024-07-14 04:02:27.731118] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:08.825 [2024-07-14 04:02:27.731148] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:08.825 qpair failed and we were unable to recover it. 00:30:08.825 [2024-07-14 04:02:27.740940] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.825 [2024-07-14 04:02:27.741096] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.825 [2024-07-14 04:02:27.741122] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.825 [2024-07-14 04:02:27.741136] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.825 [2024-07-14 04:02:27.741149] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:08.825 [2024-07-14 04:02:27.741179] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:08.825 qpair failed and we were unable to recover it. 
00:30:08.825 [2024-07-14 04:02:27.751011] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.825 [2024-07-14 04:02:27.751209] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.825 [2024-07-14 04:02:27.751236] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.825 [2024-07-14 04:02:27.751257] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.825 [2024-07-14 04:02:27.751270] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:08.825 [2024-07-14 04:02:27.751300] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:08.825 qpair failed and we were unable to recover it. 00:30:08.825 [2024-07-14 04:02:27.761009] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.086 [2024-07-14 04:02:27.761167] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.086 [2024-07-14 04:02:27.761194] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.086 [2024-07-14 04:02:27.761211] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.086 [2024-07-14 04:02:27.761224] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:09.086 [2024-07-14 04:02:27.761255] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:09.086 qpair failed and we were unable to recover it. 00:30:09.086 [2024-07-14 04:02:27.771034] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.086 [2024-07-14 04:02:27.771187] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.086 [2024-07-14 04:02:27.771214] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.086 [2024-07-14 04:02:27.771228] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.086 [2024-07-14 04:02:27.771240] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:09.086 [2024-07-14 04:02:27.771269] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:09.086 qpair failed and we were unable to recover it. 
00:30:09.086 [2024-07-14 04:02:27.781092] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.086 [2024-07-14 04:02:27.781252] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.086 [2024-07-14 04:02:27.781278] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.086 [2024-07-14 04:02:27.781293] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.086 [2024-07-14 04:02:27.781306] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:09.086 [2024-07-14 04:02:27.781337] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:09.086 qpair failed and we were unable to recover it. 00:30:09.086 [2024-07-14 04:02:27.791079] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.086 [2024-07-14 04:02:27.791239] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.086 [2024-07-14 04:02:27.791266] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.086 [2024-07-14 04:02:27.791280] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.086 [2024-07-14 04:02:27.791292] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:09.086 [2024-07-14 04:02:27.791321] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:09.086 qpair failed and we were unable to recover it. 00:30:09.086 [2024-07-14 04:02:27.801094] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.086 [2024-07-14 04:02:27.801243] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.086 [2024-07-14 04:02:27.801269] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.086 [2024-07-14 04:02:27.801284] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.086 [2024-07-14 04:02:27.801297] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:09.086 [2024-07-14 04:02:27.801326] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:09.086 qpair failed and we were unable to recover it. 
00:30:09.087 [2024-07-14 04:02:27.811157] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.087 [2024-07-14 04:02:27.811352] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.087 [2024-07-14 04:02:27.811379] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.087 [2024-07-14 04:02:27.811393] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.087 [2024-07-14 04:02:27.811406] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:09.087 [2024-07-14 04:02:27.811435] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:09.087 qpair failed and we were unable to recover it. 00:30:09.087 [2024-07-14 04:02:27.821229] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.087 [2024-07-14 04:02:27.821423] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.087 [2024-07-14 04:02:27.821449] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.087 [2024-07-14 04:02:27.821463] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.087 [2024-07-14 04:02:27.821475] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:09.087 [2024-07-14 04:02:27.821505] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:09.087 qpair failed and we were unable to recover it. 00:30:09.087 [2024-07-14 04:02:27.831219] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.087 [2024-07-14 04:02:27.831370] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.087 [2024-07-14 04:02:27.831396] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.087 [2024-07-14 04:02:27.831411] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.087 [2024-07-14 04:02:27.831424] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:09.087 [2024-07-14 04:02:27.831453] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:09.087 qpair failed and we were unable to recover it. 
00:30:09.087 [2024-07-14 04:02:27.841274] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.087 [2024-07-14 04:02:27.841430] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.087 [2024-07-14 04:02:27.841455] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.087 [2024-07-14 04:02:27.841475] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.087 [2024-07-14 04:02:27.841488] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:09.087 [2024-07-14 04:02:27.841518] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:09.087 qpair failed and we were unable to recover it. 00:30:09.087 [2024-07-14 04:02:27.851254] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.087 [2024-07-14 04:02:27.851423] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.087 [2024-07-14 04:02:27.851448] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.087 [2024-07-14 04:02:27.851463] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.087 [2024-07-14 04:02:27.851475] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:09.087 [2024-07-14 04:02:27.851505] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:09.087 qpair failed and we were unable to recover it. 00:30:09.087 [2024-07-14 04:02:27.861346] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.087 [2024-07-14 04:02:27.861509] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.087 [2024-07-14 04:02:27.861535] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.087 [2024-07-14 04:02:27.861549] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.087 [2024-07-14 04:02:27.861562] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:09.087 [2024-07-14 04:02:27.861592] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:09.087 qpair failed and we were unable to recover it. 
00:30:09.087 [2024-07-14 04:02:27.871348] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.087 [2024-07-14 04:02:27.871504] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.087 [2024-07-14 04:02:27.871530] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.087 [2024-07-14 04:02:27.871545] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.087 [2024-07-14 04:02:27.871557] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:09.087 [2024-07-14 04:02:27.871587] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:09.087 qpair failed and we were unable to recover it. 00:30:09.087 [2024-07-14 04:02:27.881331] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.087 [2024-07-14 04:02:27.881481] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.087 [2024-07-14 04:02:27.881506] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.087 [2024-07-14 04:02:27.881522] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.087 [2024-07-14 04:02:27.881537] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:09.087 [2024-07-14 04:02:27.881568] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:09.087 qpair failed and we were unable to recover it. 00:30:09.087 [2024-07-14 04:02:27.891418] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.087 [2024-07-14 04:02:27.891585] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.087 [2024-07-14 04:02:27.891610] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.087 [2024-07-14 04:02:27.891625] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.087 [2024-07-14 04:02:27.891638] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:09.087 [2024-07-14 04:02:27.891668] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:09.087 qpair failed and we were unable to recover it. 
00:30:09.087 [2024-07-14 04:02:27.901427] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.087 [2024-07-14 04:02:27.901585] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.087 [2024-07-14 04:02:27.901610] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.087 [2024-07-14 04:02:27.901625] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.087 [2024-07-14 04:02:27.901637] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:09.087 [2024-07-14 04:02:27.901667] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:09.087 qpair failed and we were unable to recover it. 00:30:09.087 [2024-07-14 04:02:27.911461] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.087 [2024-07-14 04:02:27.911617] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.087 [2024-07-14 04:02:27.911643] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.087 [2024-07-14 04:02:27.911658] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.087 [2024-07-14 04:02:27.911671] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:09.088 [2024-07-14 04:02:27.911700] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:09.088 qpair failed and we were unable to recover it. 00:30:09.088 [2024-07-14 04:02:27.921498] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.088 [2024-07-14 04:02:27.921692] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.088 [2024-07-14 04:02:27.921718] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.088 [2024-07-14 04:02:27.921733] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.088 [2024-07-14 04:02:27.921745] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:09.088 [2024-07-14 04:02:27.921776] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:09.088 qpair failed and we were unable to recover it. 
00:30:09.088 [2024-07-14 04:02:27.931496] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.088 [2024-07-14 04:02:27.931680] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.088 [2024-07-14 04:02:27.931712] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.088 [2024-07-14 04:02:27.931728] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.088 [2024-07-14 04:02:27.931740] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:09.088 [2024-07-14 04:02:27.931770] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:09.088 qpair failed and we were unable to recover it. 00:30:09.088 [2024-07-14 04:02:27.941548] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.088 [2024-07-14 04:02:27.941721] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.088 [2024-07-14 04:02:27.941747] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.088 [2024-07-14 04:02:27.941762] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.088 [2024-07-14 04:02:27.941774] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:09.088 [2024-07-14 04:02:27.941803] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:09.088 qpair failed and we were unable to recover it. 00:30:09.088 [2024-07-14 04:02:27.951591] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.088 [2024-07-14 04:02:27.951763] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.088 [2024-07-14 04:02:27.951788] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.088 [2024-07-14 04:02:27.951803] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.088 [2024-07-14 04:02:27.951816] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:09.088 [2024-07-14 04:02:27.951845] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:09.088 qpair failed and we were unable to recover it. 
00:30:09.088 [2024-07-14 04:02:27.961615] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.088 [2024-07-14 04:02:27.961770] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.088 [2024-07-14 04:02:27.961795] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.088 [2024-07-14 04:02:27.961810] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.088 [2024-07-14 04:02:27.961823] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:09.088 [2024-07-14 04:02:27.961851] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:09.088 qpair failed and we were unable to recover it. 00:30:09.088 [2024-07-14 04:02:27.971612] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.088 [2024-07-14 04:02:27.971763] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.088 [2024-07-14 04:02:27.971789] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.088 [2024-07-14 04:02:27.971804] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.088 [2024-07-14 04:02:27.971817] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:09.088 [2024-07-14 04:02:27.971853] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:09.088 qpair failed and we were unable to recover it. 00:30:09.088 [2024-07-14 04:02:27.981638] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.088 [2024-07-14 04:02:27.981790] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.088 [2024-07-14 04:02:27.981816] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.088 [2024-07-14 04:02:27.981830] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.088 [2024-07-14 04:02:27.981843] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:09.088 [2024-07-14 04:02:27.981878] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:09.088 qpair failed and we were unable to recover it. 
00:30:09.088 [2024-07-14 04:02:27.991702] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.088 [2024-07-14 04:02:27.991858] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.088 [2024-07-14 04:02:27.991891] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.088 [2024-07-14 04:02:27.991906] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.088 [2024-07-14 04:02:27.991919] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:09.088 [2024-07-14 04:02:27.991949] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:09.088 qpair failed and we were unable to recover it. 00:30:09.088 [2024-07-14 04:02:28.001699] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.088 [2024-07-14 04:02:28.001897] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.088 [2024-07-14 04:02:28.001924] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.088 [2024-07-14 04:02:28.001938] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.088 [2024-07-14 04:02:28.001951] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:09.088 [2024-07-14 04:02:28.001981] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:09.088 qpair failed and we were unable to recover it. 00:30:09.088 [2024-07-14 04:02:28.011752] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.088 [2024-07-14 04:02:28.011936] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.088 [2024-07-14 04:02:28.011962] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.088 [2024-07-14 04:02:28.011976] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.088 [2024-07-14 04:02:28.011989] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:09.088 [2024-07-14 04:02:28.012020] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:09.088 qpair failed and we were unable to recover it. 
00:30:09.088 [2024-07-14 04:02:28.021747] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.088 [2024-07-14 04:02:28.021916] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.088 [2024-07-14 04:02:28.021948] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.089 [2024-07-14 04:02:28.021968] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.089 [2024-07-14 04:02:28.021981] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:09.089 [2024-07-14 04:02:28.022011] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:09.089 qpair failed and we were unable to recover it. 00:30:09.350 [2024-07-14 04:02:28.031842] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.350 [2024-07-14 04:02:28.032018] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.350 [2024-07-14 04:02:28.032045] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.350 [2024-07-14 04:02:28.032060] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.350 [2024-07-14 04:02:28.032072] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:09.350 [2024-07-14 04:02:28.032101] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:09.350 qpair failed and we were unable to recover it. 00:30:09.350 [2024-07-14 04:02:28.041800] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.350 [2024-07-14 04:02:28.041956] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.350 [2024-07-14 04:02:28.041982] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.350 [2024-07-14 04:02:28.041996] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.350 [2024-07-14 04:02:28.042009] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:09.350 [2024-07-14 04:02:28.042040] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:09.350 qpair failed and we were unable to recover it. 
00:30:09.350 [2024-07-14 04:02:28.051831] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.350 [2024-07-14 04:02:28.051988] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.350 [2024-07-14 04:02:28.052014] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.350 [2024-07-14 04:02:28.052029] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.350 [2024-07-14 04:02:28.052042] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:09.350 [2024-07-14 04:02:28.052073] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:09.350 qpair failed and we were unable to recover it. 00:30:09.350 [2024-07-14 04:02:28.061862] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.350 [2024-07-14 04:02:28.062034] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.350 [2024-07-14 04:02:28.062060] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.350 [2024-07-14 04:02:28.062075] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.350 [2024-07-14 04:02:28.062088] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:09.350 [2024-07-14 04:02:28.062123] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:09.350 qpair failed and we were unable to recover it. 00:30:09.350 [2024-07-14 04:02:28.071913] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.350 [2024-07-14 04:02:28.072067] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.350 [2024-07-14 04:02:28.072094] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.350 [2024-07-14 04:02:28.072108] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.350 [2024-07-14 04:02:28.072121] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:09.350 [2024-07-14 04:02:28.072152] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:09.350 qpair failed and we were unable to recover it. 
00:30:09.350 [2024-07-14 04:02:28.081933] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.350 [2024-07-14 04:02:28.082126] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.350 [2024-07-14 04:02:28.082153] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.350 [2024-07-14 04:02:28.082167] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.350 [2024-07-14 04:02:28.082180] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:09.350 [2024-07-14 04:02:28.082211] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:09.350 qpair failed and we were unable to recover it. 00:30:09.350 [2024-07-14 04:02:28.091959] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.350 [2024-07-14 04:02:28.092146] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.350 [2024-07-14 04:02:28.092172] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.350 [2024-07-14 04:02:28.092188] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.350 [2024-07-14 04:02:28.092201] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:09.350 [2024-07-14 04:02:28.092230] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:09.350 qpair failed and we were unable to recover it. 00:30:09.350 [2024-07-14 04:02:28.102098] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.350 [2024-07-14 04:02:28.102254] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.350 [2024-07-14 04:02:28.102280] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.350 [2024-07-14 04:02:28.102294] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.350 [2024-07-14 04:02:28.102307] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:09.350 [2024-07-14 04:02:28.102338] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:09.350 qpair failed and we were unable to recover it. 
00:30:09.350 [2024-07-14 04:02:28.112003] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.350 [2024-07-14 04:02:28.112208] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.350 [2024-07-14 04:02:28.112243] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.350 [2024-07-14 04:02:28.112259] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.350 [2024-07-14 04:02:28.112272] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:09.350 [2024-07-14 04:02:28.112302] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:09.350 qpair failed and we were unable to recover it. 00:30:09.350 [2024-07-14 04:02:28.122046] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.350 [2024-07-14 04:02:28.122272] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.350 [2024-07-14 04:02:28.122299] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.350 [2024-07-14 04:02:28.122315] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.350 [2024-07-14 04:02:28.122332] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:09.350 [2024-07-14 04:02:28.122366] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:09.350 qpair failed and we were unable to recover it. 00:30:09.350 [2024-07-14 04:02:28.132087] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.350 [2024-07-14 04:02:28.132240] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.350 [2024-07-14 04:02:28.132266] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.350 [2024-07-14 04:02:28.132281] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.350 [2024-07-14 04:02:28.132295] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:09.351 [2024-07-14 04:02:28.132324] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:09.351 qpair failed and we were unable to recover it. 
00:30:09.351 [2024-07-14 04:02:28.142121] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.351 [2024-07-14 04:02:28.142276] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.351 [2024-07-14 04:02:28.142303] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.351 [2024-07-14 04:02:28.142317] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.351 [2024-07-14 04:02:28.142331] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:09.351 [2024-07-14 04:02:28.142360] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:09.351 qpair failed and we were unable to recover it. 00:30:09.351 [2024-07-14 04:02:28.152164] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.351 [2024-07-14 04:02:28.152355] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.351 [2024-07-14 04:02:28.152383] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.351 [2024-07-14 04:02:28.152398] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.351 [2024-07-14 04:02:28.152421] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:09.351 [2024-07-14 04:02:28.152455] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:09.351 qpair failed and we were unable to recover it. 00:30:09.351 [2024-07-14 04:02:28.162249] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.351 [2024-07-14 04:02:28.162404] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.351 [2024-07-14 04:02:28.162437] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.351 [2024-07-14 04:02:28.162452] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.351 [2024-07-14 04:02:28.162465] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:09.351 [2024-07-14 04:02:28.162495] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:09.351 qpair failed and we were unable to recover it. 
00:30:09.351 [2024-07-14 04:02:28.172249] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.351 [2024-07-14 04:02:28.172442] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.351 [2024-07-14 04:02:28.172468] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.351 [2024-07-14 04:02:28.172482] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.351 [2024-07-14 04:02:28.172494] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:09.351 [2024-07-14 04:02:28.172525] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:09.351 qpair failed and we were unable to recover it. 00:30:09.351 [2024-07-14 04:02:28.182250] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.351 [2024-07-14 04:02:28.182448] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.351 [2024-07-14 04:02:28.182475] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.351 [2024-07-14 04:02:28.182489] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.351 [2024-07-14 04:02:28.182502] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:09.351 [2024-07-14 04:02:28.182531] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:09.351 qpair failed and we were unable to recover it. 00:30:09.351 [2024-07-14 04:02:28.192229] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.351 [2024-07-14 04:02:28.192380] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.351 [2024-07-14 04:02:28.192406] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.351 [2024-07-14 04:02:28.192420] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.351 [2024-07-14 04:02:28.192432] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:09.351 [2024-07-14 04:02:28.192462] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:09.351 qpair failed and we were unable to recover it. 
00:30:09.351 [2024-07-14 04:02:28.202298] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.351 [2024-07-14 04:02:28.202454] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.351 [2024-07-14 04:02:28.202480] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.351 [2024-07-14 04:02:28.202494] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.351 [2024-07-14 04:02:28.202507] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:09.351 [2024-07-14 04:02:28.202536] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:09.351 qpair failed and we were unable to recover it. 00:30:09.351 [2024-07-14 04:02:28.212327] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.351 [2024-07-14 04:02:28.212496] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.351 [2024-07-14 04:02:28.212522] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.351 [2024-07-14 04:02:28.212536] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.351 [2024-07-14 04:02:28.212549] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:09.351 [2024-07-14 04:02:28.212579] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:09.351 qpair failed and we were unable to recover it. 00:30:09.351 [2024-07-14 04:02:28.222339] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.351 [2024-07-14 04:02:28.222540] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.351 [2024-07-14 04:02:28.222565] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.351 [2024-07-14 04:02:28.222579] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.351 [2024-07-14 04:02:28.222592] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:09.351 [2024-07-14 04:02:28.222621] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:09.351 qpair failed and we were unable to recover it. 
00:30:09.351 [2024-07-14 04:02:28.232383] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.351 [2024-07-14 04:02:28.232530] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.351 [2024-07-14 04:02:28.232556] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.351 [2024-07-14 04:02:28.232570] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.351 [2024-07-14 04:02:28.232583] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:09.351 [2024-07-14 04:02:28.232612] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:09.351 qpair failed and we were unable to recover it. 00:30:09.351 [2024-07-14 04:02:28.242378] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.351 [2024-07-14 04:02:28.242527] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.351 [2024-07-14 04:02:28.242553] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.351 [2024-07-14 04:02:28.242573] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.351 [2024-07-14 04:02:28.242588] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:09.351 [2024-07-14 04:02:28.242618] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:09.351 qpair failed and we were unable to recover it. 00:30:09.351 [2024-07-14 04:02:28.252418] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.351 [2024-07-14 04:02:28.252562] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.351 [2024-07-14 04:02:28.252588] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.351 [2024-07-14 04:02:28.252603] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.351 [2024-07-14 04:02:28.252616] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:09.351 [2024-07-14 04:02:28.252657] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:09.351 qpair failed and we were unable to recover it. 
00:30:09.351 [2024-07-14 04:02:28.262464] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.351 [2024-07-14 04:02:28.262631] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.351 [2024-07-14 04:02:28.262657] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.351 [2024-07-14 04:02:28.262671] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.351 [2024-07-14 04:02:28.262684] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:09.351 [2024-07-14 04:02:28.262714] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:09.351 qpair failed and we were unable to recover it. 00:30:09.351 [2024-07-14 04:02:28.272518] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.351 [2024-07-14 04:02:28.272696] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.351 [2024-07-14 04:02:28.272724] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.351 [2024-07-14 04:02:28.272740] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.351 [2024-07-14 04:02:28.272753] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:09.351 [2024-07-14 04:02:28.272785] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:09.351 qpair failed and we were unable to recover it. 00:30:09.351 [2024-07-14 04:02:28.282525] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.352 [2024-07-14 04:02:28.282708] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.352 [2024-07-14 04:02:28.282736] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.352 [2024-07-14 04:02:28.282751] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.352 [2024-07-14 04:02:28.282768] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:09.352 [2024-07-14 04:02:28.282800] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:09.352 qpair failed and we were unable to recover it. 
00:30:09.613 [2024-07-14 04:02:28.292645] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.613 [2024-07-14 04:02:28.292804] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.613 [2024-07-14 04:02:28.292831] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.613 [2024-07-14 04:02:28.292845] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.613 [2024-07-14 04:02:28.292858] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:09.613 [2024-07-14 04:02:28.292911] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:09.613 qpair failed and we were unable to recover it. 00:30:09.613 [2024-07-14 04:02:28.302582] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.613 [2024-07-14 04:02:28.302745] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.613 [2024-07-14 04:02:28.302772] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.613 [2024-07-14 04:02:28.302791] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.613 [2024-07-14 04:02:28.302804] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:09.613 [2024-07-14 04:02:28.302836] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:09.613 qpair failed and we were unable to recover it. 00:30:09.613 [2024-07-14 04:02:28.312654] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.613 [2024-07-14 04:02:28.312852] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.613 [2024-07-14 04:02:28.312887] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.613 [2024-07-14 04:02:28.312903] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.613 [2024-07-14 04:02:28.312916] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:09.613 [2024-07-14 04:02:28.312946] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:09.613 qpair failed and we were unable to recover it. 
00:30:09.613 [2024-07-14 04:02:28.322624] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.613 [2024-07-14 04:02:28.322777] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.613 [2024-07-14 04:02:28.322803] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.613 [2024-07-14 04:02:28.322817] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.613 [2024-07-14 04:02:28.322829] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:09.613 [2024-07-14 04:02:28.322860] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:09.613 qpair failed and we were unable to recover it. 00:30:09.613 [2024-07-14 04:02:28.332691] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.613 [2024-07-14 04:02:28.332842] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.613 [2024-07-14 04:02:28.332876] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.613 [2024-07-14 04:02:28.332899] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.613 [2024-07-14 04:02:28.332914] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:09.613 [2024-07-14 04:02:28.332944] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:09.613 qpair failed and we were unable to recover it. 00:30:09.613 [2024-07-14 04:02:28.342765] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.613 [2024-07-14 04:02:28.342982] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.613 [2024-07-14 04:02:28.343008] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.613 [2024-07-14 04:02:28.343023] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.613 [2024-07-14 04:02:28.343036] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:09.613 [2024-07-14 04:02:28.343066] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:09.613 qpair failed and we were unable to recover it. 
00:30:09.613 [2024-07-14 04:02:28.352742] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.613 [2024-07-14 04:02:28.352946] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.613 [2024-07-14 04:02:28.352973] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.613 [2024-07-14 04:02:28.352988] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.613 [2024-07-14 04:02:28.353001] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:09.613 [2024-07-14 04:02:28.353032] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:09.613 qpair failed and we were unable to recover it. 00:30:09.613 [2024-07-14 04:02:28.362758] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.613 [2024-07-14 04:02:28.362930] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.613 [2024-07-14 04:02:28.362959] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.613 [2024-07-14 04:02:28.362974] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.613 [2024-07-14 04:02:28.362987] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:09.613 [2024-07-14 04:02:28.363018] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:09.613 qpair failed and we were unable to recover it. 00:30:09.613 [2024-07-14 04:02:28.372800] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.613 [2024-07-14 04:02:28.372952] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.613 [2024-07-14 04:02:28.372978] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.613 [2024-07-14 04:02:28.372993] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.613 [2024-07-14 04:02:28.373006] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:09.614 [2024-07-14 04:02:28.373036] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:09.614 qpair failed and we were unable to recover it. 
00:30:09.614 [2024-07-14 04:02:28.382828] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.614 [2024-07-14 04:02:28.383014] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.614 [2024-07-14 04:02:28.383042] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.614 [2024-07-14 04:02:28.383056] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.614 [2024-07-14 04:02:28.383069] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:09.614 [2024-07-14 04:02:28.383099] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:09.614 qpair failed and we were unable to recover it. 00:30:09.614 [2024-07-14 04:02:28.392848] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.614 [2024-07-14 04:02:28.393017] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.614 [2024-07-14 04:02:28.393043] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.614 [2024-07-14 04:02:28.393057] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.614 [2024-07-14 04:02:28.393070] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:09.614 [2024-07-14 04:02:28.393102] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:09.614 qpair failed and we were unable to recover it. 00:30:09.614 [2024-07-14 04:02:28.402877] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.614 [2024-07-14 04:02:28.403062] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.614 [2024-07-14 04:02:28.403090] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.614 [2024-07-14 04:02:28.403105] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.614 [2024-07-14 04:02:28.403123] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:09.614 [2024-07-14 04:02:28.403154] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:09.614 qpair failed and we were unable to recover it. 
00:30:09.614 [2024-07-14 04:02:28.412920] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.614 [2024-07-14 04:02:28.413076] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.614 [2024-07-14 04:02:28.413102] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.614 [2024-07-14 04:02:28.413117] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.614 [2024-07-14 04:02:28.413130] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:09.614 [2024-07-14 04:02:28.413161] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:09.614 qpair failed and we were unable to recover it. 00:30:09.614 [2024-07-14 04:02:28.422930] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.614 [2024-07-14 04:02:28.423084] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.614 [2024-07-14 04:02:28.423114] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.614 [2024-07-14 04:02:28.423130] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.614 [2024-07-14 04:02:28.423143] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:09.614 [2024-07-14 04:02:28.423173] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:09.614 qpair failed and we were unable to recover it. 00:30:09.614 [2024-07-14 04:02:28.433009] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.614 [2024-07-14 04:02:28.433220] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.614 [2024-07-14 04:02:28.433247] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.614 [2024-07-14 04:02:28.433261] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.614 [2024-07-14 04:02:28.433274] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:09.614 [2024-07-14 04:02:28.433303] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:09.614 qpair failed and we were unable to recover it. 
00:30:09.614 [2024-07-14 04:02:28.442969] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.614 [2024-07-14 04:02:28.443125] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.614 [2024-07-14 04:02:28.443151] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.614 [2024-07-14 04:02:28.443165] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.614 [2024-07-14 04:02:28.443178] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:09.614 [2024-07-14 04:02:28.443208] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:09.614 qpair failed and we were unable to recover it. 00:30:09.614 [2024-07-14 04:02:28.452997] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.614 [2024-07-14 04:02:28.453150] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.614 [2024-07-14 04:02:28.453176] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.614 [2024-07-14 04:02:28.453191] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.614 [2024-07-14 04:02:28.453203] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:09.614 [2024-07-14 04:02:28.453233] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:09.614 qpair failed and we were unable to recover it. 00:30:09.614 [2024-07-14 04:02:28.463120] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.614 [2024-07-14 04:02:28.463271] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.614 [2024-07-14 04:02:28.463297] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.614 [2024-07-14 04:02:28.463311] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.614 [2024-07-14 04:02:28.463325] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:09.614 [2024-07-14 04:02:28.463372] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:09.614 qpair failed and we were unable to recover it. 
00:30:09.614 [2024-07-14 04:02:28.473110] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.614 [2024-07-14 04:02:28.473281] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.614 [2024-07-14 04:02:28.473307] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.614 [2024-07-14 04:02:28.473322] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.614 [2024-07-14 04:02:28.473334] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:09.614 [2024-07-14 04:02:28.473364] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:09.614 qpair failed and we were unable to recover it. 00:30:09.614 [2024-07-14 04:02:28.483072] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.614 [2024-07-14 04:02:28.483224] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.614 [2024-07-14 04:02:28.483250] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.614 [2024-07-14 04:02:28.483264] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.614 [2024-07-14 04:02:28.483277] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:09.614 [2024-07-14 04:02:28.483307] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:09.614 qpair failed and we were unable to recover it. 00:30:09.614 [2024-07-14 04:02:28.493133] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.614 [2024-07-14 04:02:28.493285] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.614 [2024-07-14 04:02:28.493311] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.614 [2024-07-14 04:02:28.493326] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.614 [2024-07-14 04:02:28.493339] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:09.614 [2024-07-14 04:02:28.493368] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:09.614 qpair failed and we were unable to recover it. 
00:30:09.614 [2024-07-14 04:02:28.503148] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.614 [2024-07-14 04:02:28.503302] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.614 [2024-07-14 04:02:28.503327] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.614 [2024-07-14 04:02:28.503342] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.614 [2024-07-14 04:02:28.503355] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:09.614 [2024-07-14 04:02:28.503384] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:09.614 qpair failed and we were unable to recover it. 00:30:09.614 [2024-07-14 04:02:28.513200] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.614 [2024-07-14 04:02:28.513369] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.614 [2024-07-14 04:02:28.513400] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.614 [2024-07-14 04:02:28.513415] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.614 [2024-07-14 04:02:28.513428] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:09.614 [2024-07-14 04:02:28.513457] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:09.614 qpair failed and we were unable to recover it. 00:30:09.614 [2024-07-14 04:02:28.523208] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.615 [2024-07-14 04:02:28.523359] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.615 [2024-07-14 04:02:28.523385] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.615 [2024-07-14 04:02:28.523399] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.615 [2024-07-14 04:02:28.523412] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:09.615 [2024-07-14 04:02:28.523442] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:09.615 qpair failed and we were unable to recover it. 
00:30:09.615 [2024-07-14 04:02:28.533225] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.615 [2024-07-14 04:02:28.533380] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.615 [2024-07-14 04:02:28.533406] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.615 [2024-07-14 04:02:28.533420] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.615 [2024-07-14 04:02:28.533433] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:09.615 [2024-07-14 04:02:28.533462] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:09.615 qpair failed and we were unable to recover it. 00:30:09.615 [2024-07-14 04:02:28.543286] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.615 [2024-07-14 04:02:28.543443] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.615 [2024-07-14 04:02:28.543469] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.615 [2024-07-14 04:02:28.543484] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.615 [2024-07-14 04:02:28.543497] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:09.615 [2024-07-14 04:02:28.543527] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:09.615 qpair failed and we were unable to recover it. 00:30:09.875 [2024-07-14 04:02:28.553289] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.875 [2024-07-14 04:02:28.553444] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.875 [2024-07-14 04:02:28.553470] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.875 [2024-07-14 04:02:28.553485] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.875 [2024-07-14 04:02:28.553498] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:09.875 [2024-07-14 04:02:28.553533] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:09.875 qpair failed and we were unable to recover it. 
00:30:09.875 [2024-07-14 04:02:28.563372] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.875 [2024-07-14 04:02:28.563528] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.875 [2024-07-14 04:02:28.563554] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.875 [2024-07-14 04:02:28.563569] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.875 [2024-07-14 04:02:28.563582] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:09.875 [2024-07-14 04:02:28.563611] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:09.875 qpair failed and we were unable to recover it. 00:30:09.875 [2024-07-14 04:02:28.573452] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.875 [2024-07-14 04:02:28.573603] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.875 [2024-07-14 04:02:28.573629] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.875 [2024-07-14 04:02:28.573643] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.875 [2024-07-14 04:02:28.573656] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:09.875 [2024-07-14 04:02:28.573685] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:09.875 qpair failed and we were unable to recover it. 00:30:09.875 [2024-07-14 04:02:28.583410] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.875 [2024-07-14 04:02:28.583598] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.875 [2024-07-14 04:02:28.583624] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.875 [2024-07-14 04:02:28.583644] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.875 [2024-07-14 04:02:28.583658] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:09.875 [2024-07-14 04:02:28.583690] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:09.875 qpair failed and we were unable to recover it. 
00:30:09.875 [2024-07-14 04:02:28.593409] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.875 [2024-07-14 04:02:28.593571] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.875 [2024-07-14 04:02:28.593597] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.875 [2024-07-14 04:02:28.593611] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.875 [2024-07-14 04:02:28.593624] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:09.875 [2024-07-14 04:02:28.593656] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:09.875 qpair failed and we were unable to recover it. 00:30:09.875 [2024-07-14 04:02:28.603485] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.875 [2024-07-14 04:02:28.603675] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.875 [2024-07-14 04:02:28.603707] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.875 [2024-07-14 04:02:28.603723] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.875 [2024-07-14 04:02:28.603736] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:09.875 [2024-07-14 04:02:28.603765] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:09.875 qpair failed and we were unable to recover it. 00:30:09.875 [2024-07-14 04:02:28.613484] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.875 [2024-07-14 04:02:28.613683] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.875 [2024-07-14 04:02:28.613709] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.875 [2024-07-14 04:02:28.613723] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.875 [2024-07-14 04:02:28.613736] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:09.875 [2024-07-14 04:02:28.613766] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:09.875 qpair failed and we were unable to recover it. 
00:30:09.875 [2024-07-14 04:02:28.623498] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.875 [2024-07-14 04:02:28.623655] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.875 [2024-07-14 04:02:28.623680] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.875 [2024-07-14 04:02:28.623695] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.875 [2024-07-14 04:02:28.623708] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:09.875 [2024-07-14 04:02:28.623737] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:09.875 qpair failed and we were unable to recover it. 00:30:09.875 [2024-07-14 04:02:28.633554] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.875 [2024-07-14 04:02:28.633746] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.875 [2024-07-14 04:02:28.633772] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.875 [2024-07-14 04:02:28.633787] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.875 [2024-07-14 04:02:28.633800] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:09.875 [2024-07-14 04:02:28.633830] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:09.875 qpair failed and we were unable to recover it. 00:30:09.875 [2024-07-14 04:02:28.643580] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.875 [2024-07-14 04:02:28.643739] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.875 [2024-07-14 04:02:28.643765] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.875 [2024-07-14 04:02:28.643780] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.875 [2024-07-14 04:02:28.643798] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:09.875 [2024-07-14 04:02:28.643829] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:09.875 qpair failed and we were unable to recover it. 
00:30:09.875 [2024-07-14 04:02:28.653590] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.875 [2024-07-14 04:02:28.653738] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.875 [2024-07-14 04:02:28.653764] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.875 [2024-07-14 04:02:28.653778] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.875 [2024-07-14 04:02:28.653791] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:09.875 [2024-07-14 04:02:28.653820] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:09.875 qpair failed and we were unable to recover it. 00:30:09.875 [2024-07-14 04:02:28.663731] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.875 [2024-07-14 04:02:28.663892] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.875 [2024-07-14 04:02:28.663918] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.875 [2024-07-14 04:02:28.663932] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.875 [2024-07-14 04:02:28.663945] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:09.876 [2024-07-14 04:02:28.663988] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:09.876 qpair failed and we were unable to recover it. 00:30:09.876 [2024-07-14 04:02:28.673641] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.876 [2024-07-14 04:02:28.673811] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.876 [2024-07-14 04:02:28.673838] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.876 [2024-07-14 04:02:28.673852] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.876 [2024-07-14 04:02:28.673871] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:09.876 [2024-07-14 04:02:28.673914] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:09.876 qpair failed and we were unable to recover it. 
00:30:09.876 [2024-07-14 04:02:28.683672] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.876 [2024-07-14 04:02:28.683819] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.876 [2024-07-14 04:02:28.683846] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.876 [2024-07-14 04:02:28.683860] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.876 [2024-07-14 04:02:28.683881] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:09.876 [2024-07-14 04:02:28.683913] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:09.876 qpair failed and we were unable to recover it. 00:30:09.876 [2024-07-14 04:02:28.693696] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.876 [2024-07-14 04:02:28.693881] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.876 [2024-07-14 04:02:28.693908] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.876 [2024-07-14 04:02:28.693923] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.876 [2024-07-14 04:02:28.693936] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:09.876 [2024-07-14 04:02:28.693967] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:09.876 qpair failed and we were unable to recover it. 00:30:09.876 [2024-07-14 04:02:28.703743] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.876 [2024-07-14 04:02:28.703908] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.876 [2024-07-14 04:02:28.703935] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.876 [2024-07-14 04:02:28.703950] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.876 [2024-07-14 04:02:28.703963] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:09.876 [2024-07-14 04:02:28.703993] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:09.876 qpair failed and we were unable to recover it. 
00:30:09.876 [2024-07-14 04:02:28.713753] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.876 [2024-07-14 04:02:28.713915] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.876 [2024-07-14 04:02:28.713941] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.876 [2024-07-14 04:02:28.713956] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.876 [2024-07-14 04:02:28.713969] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:09.876 [2024-07-14 04:02:28.714010] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:09.876 qpair failed and we were unable to recover it. 00:30:09.876 [2024-07-14 04:02:28.723784] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.876 [2024-07-14 04:02:28.723949] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.876 [2024-07-14 04:02:28.723976] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.876 [2024-07-14 04:02:28.723991] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.876 [2024-07-14 04:02:28.724007] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:09.876 [2024-07-14 04:02:28.724037] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:09.876 qpair failed and we were unable to recover it. 00:30:09.876 [2024-07-14 04:02:28.733806] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.876 [2024-07-14 04:02:28.733978] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.876 [2024-07-14 04:02:28.734004] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.876 [2024-07-14 04:02:28.734019] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.876 [2024-07-14 04:02:28.734037] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:09.876 [2024-07-14 04:02:28.734068] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:09.876 qpair failed and we were unable to recover it. 
00:30:09.876 [2024-07-14 04:02:28.743837] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.876 [2024-07-14 04:02:28.743997] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.876 [2024-07-14 04:02:28.744022] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.876 [2024-07-14 04:02:28.744037] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.876 [2024-07-14 04:02:28.744050] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:09.876 [2024-07-14 04:02:28.744080] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:09.876 qpair failed and we were unable to recover it. 00:30:09.876 [2024-07-14 04:02:28.753905] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.876 [2024-07-14 04:02:28.754065] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.876 [2024-07-14 04:02:28.754092] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.876 [2024-07-14 04:02:28.754107] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.876 [2024-07-14 04:02:28.754120] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:09.876 [2024-07-14 04:02:28.754149] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:09.876 qpair failed and we were unable to recover it. 00:30:09.876 [2024-07-14 04:02:28.763939] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.876 [2024-07-14 04:02:28.764103] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.876 [2024-07-14 04:02:28.764131] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.876 [2024-07-14 04:02:28.764146] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.876 [2024-07-14 04:02:28.764159] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:09.876 [2024-07-14 04:02:28.764192] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:09.876 qpair failed and we were unable to recover it. 
00:30:09.876 [2024-07-14 04:02:28.774007] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.876 [2024-07-14 04:02:28.774166] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.876 [2024-07-14 04:02:28.774196] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.876 [2024-07-14 04:02:28.774211] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.876 [2024-07-14 04:02:28.774225] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:09.876 [2024-07-14 04:02:28.774256] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:09.876 qpair failed and we were unable to recover it. 00:30:09.876 [2024-07-14 04:02:28.783975] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.876 [2024-07-14 04:02:28.784173] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.876 [2024-07-14 04:02:28.784200] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.876 [2024-07-14 04:02:28.784216] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.876 [2024-07-14 04:02:28.784229] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:09.876 [2024-07-14 04:02:28.784259] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:09.876 qpair failed and we were unable to recover it. 00:30:09.876 [2024-07-14 04:02:28.794098] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.876 [2024-07-14 04:02:28.794310] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.876 [2024-07-14 04:02:28.794337] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.876 [2024-07-14 04:02:28.794352] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.876 [2024-07-14 04:02:28.794364] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:09.876 [2024-07-14 04:02:28.794394] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:09.876 qpair failed and we were unable to recover it. 
00:30:09.876 [2024-07-14 04:02:28.804013] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.876 [2024-07-14 04:02:28.804168] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.876 [2024-07-14 04:02:28.804193] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.876 [2024-07-14 04:02:28.804208] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.876 [2024-07-14 04:02:28.804221] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:09.876 [2024-07-14 04:02:28.804250] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:09.877 qpair failed and we were unable to recover it. 00:30:10.136 [2024-07-14 04:02:28.814039] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.136 [2024-07-14 04:02:28.814199] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.136 [2024-07-14 04:02:28.814228] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.136 [2024-07-14 04:02:28.814242] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.136 [2024-07-14 04:02:28.814255] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:10.136 [2024-07-14 04:02:28.814285] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:10.136 qpair failed and we were unable to recover it. 00:30:10.136 [2024-07-14 04:02:28.824136] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.136 [2024-07-14 04:02:28.824307] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.136 [2024-07-14 04:02:28.824334] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.136 [2024-07-14 04:02:28.824355] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.136 [2024-07-14 04:02:28.824369] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:10.136 [2024-07-14 04:02:28.824399] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:10.136 qpair failed and we were unable to recover it. 
00:30:10.136 [2024-07-14 04:02:28.834209] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.136 [2024-07-14 04:02:28.834360] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.136 [2024-07-14 04:02:28.834387] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.136 [2024-07-14 04:02:28.834401] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.136 [2024-07-14 04:02:28.834415] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:10.136 [2024-07-14 04:02:28.834444] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:10.136 qpair failed and we were unable to recover it. 00:30:10.136 [2024-07-14 04:02:28.844142] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.136 [2024-07-14 04:02:28.844293] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.136 [2024-07-14 04:02:28.844318] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.136 [2024-07-14 04:02:28.844331] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.136 [2024-07-14 04:02:28.844344] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:10.136 [2024-07-14 04:02:28.844372] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:10.136 qpair failed and we were unable to recover it. 00:30:10.136 [2024-07-14 04:02:28.854195] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.136 [2024-07-14 04:02:28.854350] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.136 [2024-07-14 04:02:28.854377] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.136 [2024-07-14 04:02:28.854391] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.136 [2024-07-14 04:02:28.854405] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:10.136 [2024-07-14 04:02:28.854434] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:10.136 qpair failed and we were unable to recover it. 
00:30:10.136 [2024-07-14 04:02:28.864202] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.136 [2024-07-14 04:02:28.864360] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.136 [2024-07-14 04:02:28.864387] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.136 [2024-07-14 04:02:28.864402] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.136 [2024-07-14 04:02:28.864414] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:10.136 [2024-07-14 04:02:28.864443] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:10.136 qpair failed and we were unable to recover it. 00:30:10.136 [2024-07-14 04:02:28.874232] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.136 [2024-07-14 04:02:28.874387] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.136 [2024-07-14 04:02:28.874413] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.136 [2024-07-14 04:02:28.874427] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.136 [2024-07-14 04:02:28.874441] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:10.136 [2024-07-14 04:02:28.874472] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:10.136 qpair failed and we were unable to recover it. 00:30:10.136 [2024-07-14 04:02:28.884334] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.136 [2024-07-14 04:02:28.884528] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.136 [2024-07-14 04:02:28.884555] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.136 [2024-07-14 04:02:28.884569] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.136 [2024-07-14 04:02:28.884582] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:10.136 [2024-07-14 04:02:28.884612] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:10.136 qpair failed and we were unable to recover it. 
00:30:10.136 [2024-07-14 04:02:28.894284] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.137 [2024-07-14 04:02:28.894435] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.137 [2024-07-14 04:02:28.894461] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.137 [2024-07-14 04:02:28.894476] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.137 [2024-07-14 04:02:28.894488] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:10.137 [2024-07-14 04:02:28.894518] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:10.137 qpair failed and we were unable to recover it. 00:30:10.137 [2024-07-14 04:02:28.904351] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.137 [2024-07-14 04:02:28.904510] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.137 [2024-07-14 04:02:28.904536] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.137 [2024-07-14 04:02:28.904551] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.137 [2024-07-14 04:02:28.904564] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:10.137 [2024-07-14 04:02:28.904594] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:10.137 qpair failed and we were unable to recover it. 00:30:10.137 [2024-07-14 04:02:28.914362] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.137 [2024-07-14 04:02:28.914569] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.137 [2024-07-14 04:02:28.914595] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.137 [2024-07-14 04:02:28.914616] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.137 [2024-07-14 04:02:28.914630] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:10.137 [2024-07-14 04:02:28.914662] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:10.137 qpair failed and we were unable to recover it. 
00:30:10.137 [2024-07-14 04:02:28.924412] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.137 [2024-07-14 04:02:28.924563] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.137 [2024-07-14 04:02:28.924588] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.137 [2024-07-14 04:02:28.924602] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.137 [2024-07-14 04:02:28.924616] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:10.137 [2024-07-14 04:02:28.924646] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:10.137 qpair failed and we were unable to recover it. 00:30:10.137 [2024-07-14 04:02:28.934436] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.137 [2024-07-14 04:02:28.934634] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.137 [2024-07-14 04:02:28.934660] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.137 [2024-07-14 04:02:28.934674] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.137 [2024-07-14 04:02:28.934687] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:10.137 [2024-07-14 04:02:28.934716] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:10.137 qpair failed and we were unable to recover it. 00:30:10.137 [2024-07-14 04:02:28.944478] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.137 [2024-07-14 04:02:28.944633] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.137 [2024-07-14 04:02:28.944659] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.137 [2024-07-14 04:02:28.944674] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.137 [2024-07-14 04:02:28.944687] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:10.137 [2024-07-14 04:02:28.944716] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:10.137 qpair failed and we were unable to recover it. 
00:30:10.137 [2024-07-14 04:02:28.954463] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.137 [2024-07-14 04:02:28.954615] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.137 [2024-07-14 04:02:28.954641] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.137 [2024-07-14 04:02:28.954655] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.137 [2024-07-14 04:02:28.954669] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:10.137 [2024-07-14 04:02:28.954698] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:10.137 qpair failed and we were unable to recover it. 00:30:10.137 [2024-07-14 04:02:28.964574] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.137 [2024-07-14 04:02:28.964724] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.137 [2024-07-14 04:02:28.964750] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.137 [2024-07-14 04:02:28.964765] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.137 [2024-07-14 04:02:28.964778] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:10.137 [2024-07-14 04:02:28.964807] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:10.137 qpair failed and we were unable to recover it. 00:30:10.137 [2024-07-14 04:02:28.974571] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.137 [2024-07-14 04:02:28.974731] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.137 [2024-07-14 04:02:28.974756] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.137 [2024-07-14 04:02:28.974771] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.137 [2024-07-14 04:02:28.974784] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:10.137 [2024-07-14 04:02:28.974816] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:10.137 qpair failed and we were unable to recover it. 
00:30:10.137 [2024-07-14 04:02:28.984549] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.137 [2024-07-14 04:02:28.984705] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.137 [2024-07-14 04:02:28.984731] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.137 [2024-07-14 04:02:28.984745] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.137 [2024-07-14 04:02:28.984758] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:10.137 [2024-07-14 04:02:28.984788] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:10.137 qpair failed and we were unable to recover it. 00:30:10.137 [2024-07-14 04:02:28.994579] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.137 [2024-07-14 04:02:28.994731] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.137 [2024-07-14 04:02:28.994758] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.137 [2024-07-14 04:02:28.994772] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.137 [2024-07-14 04:02:28.994785] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:10.137 [2024-07-14 04:02:28.994827] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:10.137 qpair failed and we were unable to recover it. 00:30:10.137 [2024-07-14 04:02:29.004626] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.137 [2024-07-14 04:02:29.004819] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.137 [2024-07-14 04:02:29.004850] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.137 [2024-07-14 04:02:29.004873] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.137 [2024-07-14 04:02:29.004888] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:10.137 [2024-07-14 04:02:29.004920] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:10.137 qpair failed and we were unable to recover it. 
00:30:10.137 [2024-07-14 04:02:29.014660] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.137 [2024-07-14 04:02:29.014811] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.137 [2024-07-14 04:02:29.014837] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.137 [2024-07-14 04:02:29.014851] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.137 [2024-07-14 04:02:29.014873] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:10.137 [2024-07-14 04:02:29.014905] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:10.137 qpair failed and we were unable to recover it. 00:30:10.137 [2024-07-14 04:02:29.024750] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.137 [2024-07-14 04:02:29.024912] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.137 [2024-07-14 04:02:29.024939] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.137 [2024-07-14 04:02:29.024959] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.137 [2024-07-14 04:02:29.024973] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:10.137 [2024-07-14 04:02:29.025004] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:10.137 qpair failed and we were unable to recover it. 00:30:10.137 [2024-07-14 04:02:29.034696] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.137 [2024-07-14 04:02:29.034857] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.138 [2024-07-14 04:02:29.034891] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.138 [2024-07-14 04:02:29.034906] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.138 [2024-07-14 04:02:29.034919] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:10.138 [2024-07-14 04:02:29.034949] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:10.138 qpair failed and we were unable to recover it. 
00:30:10.138 [2024-07-14 04:02:29.044740] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.138 [2024-07-14 04:02:29.044947] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.138 [2024-07-14 04:02:29.044974] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.138 [2024-07-14 04:02:29.044988] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.138 [2024-07-14 04:02:29.045001] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:10.138 [2024-07-14 04:02:29.045050] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:10.138 qpair failed and we were unable to recover it. 00:30:10.138 [2024-07-14 04:02:29.054765] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.138 [2024-07-14 04:02:29.054931] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.138 [2024-07-14 04:02:29.054959] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.138 [2024-07-14 04:02:29.054978] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.138 [2024-07-14 04:02:29.054992] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:10.138 [2024-07-14 04:02:29.055023] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:10.138 qpair failed and we were unable to recover it. 00:30:10.138 [2024-07-14 04:02:29.064830] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.138 [2024-07-14 04:02:29.065021] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.138 [2024-07-14 04:02:29.065048] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.138 [2024-07-14 04:02:29.065062] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.138 [2024-07-14 04:02:29.065076] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:10.138 [2024-07-14 04:02:29.065105] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:10.138 qpair failed and we were unable to recover it. 
00:30:10.397 [2024-07-14 04:02:29.074818] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.397 [2024-07-14 04:02:29.074969] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.397 [2024-07-14 04:02:29.074996] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.397 [2024-07-14 04:02:29.075010] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.397 [2024-07-14 04:02:29.075024] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:10.397 [2024-07-14 04:02:29.075054] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:10.397 qpair failed and we were unable to recover it. 00:30:10.397 [2024-07-14 04:02:29.084853] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.397 [2024-07-14 04:02:29.085007] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.397 [2024-07-14 04:02:29.085033] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.397 [2024-07-14 04:02:29.085047] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.398 [2024-07-14 04:02:29.085061] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:10.398 [2024-07-14 04:02:29.085091] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:10.398 qpair failed and we were unable to recover it. 00:30:10.398 [2024-07-14 04:02:29.094935] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.398 [2024-07-14 04:02:29.095122] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.398 [2024-07-14 04:02:29.095154] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.398 [2024-07-14 04:02:29.095169] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.398 [2024-07-14 04:02:29.095182] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:10.398 [2024-07-14 04:02:29.095212] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:10.398 qpair failed and we were unable to recover it. 
00:30:10.398 [2024-07-14 04:02:29.104924] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.398 [2024-07-14 04:02:29.105090] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.398 [2024-07-14 04:02:29.105117] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.398 [2024-07-14 04:02:29.105131] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.398 [2024-07-14 04:02:29.105147] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:10.398 [2024-07-14 04:02:29.105177] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:10.398 qpair failed and we were unable to recover it. 00:30:10.398 [2024-07-14 04:02:29.114987] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.398 [2024-07-14 04:02:29.115136] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.398 [2024-07-14 04:02:29.115162] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.398 [2024-07-14 04:02:29.115176] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.398 [2024-07-14 04:02:29.115190] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:10.398 [2024-07-14 04:02:29.115219] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:10.398 qpair failed and we were unable to recover it. 00:30:10.398 [2024-07-14 04:02:29.124946] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.398 [2024-07-14 04:02:29.125098] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.398 [2024-07-14 04:02:29.125124] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.398 [2024-07-14 04:02:29.125138] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.398 [2024-07-14 04:02:29.125151] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:10.398 [2024-07-14 04:02:29.125181] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:10.398 qpair failed and we were unable to recover it. 
00:30:10.398 [2024-07-14 04:02:29.134978] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.398 [2024-07-14 04:02:29.135143] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.398 [2024-07-14 04:02:29.135169] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.398 [2024-07-14 04:02:29.135183] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.398 [2024-07-14 04:02:29.135202] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:10.398 [2024-07-14 04:02:29.135232] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:10.398 qpair failed and we were unable to recover it. 00:30:10.398 [2024-07-14 04:02:29.145004] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.398 [2024-07-14 04:02:29.145160] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.398 [2024-07-14 04:02:29.145185] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.398 [2024-07-14 04:02:29.145199] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.398 [2024-07-14 04:02:29.145213] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:10.398 [2024-07-14 04:02:29.145243] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:10.398 qpair failed and we were unable to recover it. 00:30:10.398 [2024-07-14 04:02:29.155060] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.398 [2024-07-14 04:02:29.155218] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.398 [2024-07-14 04:02:29.155244] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.398 [2024-07-14 04:02:29.155259] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.398 [2024-07-14 04:02:29.155272] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:10.398 [2024-07-14 04:02:29.155302] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:10.398 qpair failed and we were unable to recover it. 
00:30:10.398 [2024-07-14 04:02:29.165064] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.398 [2024-07-14 04:02:29.165211] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.398 [2024-07-14 04:02:29.165237] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.398 [2024-07-14 04:02:29.165251] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.398 [2024-07-14 04:02:29.165264] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:10.398 [2024-07-14 04:02:29.165293] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:10.398 qpair failed and we were unable to recover it. 00:30:10.398 [2024-07-14 04:02:29.175103] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.398 [2024-07-14 04:02:29.175252] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.398 [2024-07-14 04:02:29.175278] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.398 [2024-07-14 04:02:29.175292] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.398 [2024-07-14 04:02:29.175305] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:10.398 [2024-07-14 04:02:29.175335] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:10.398 qpair failed and we were unable to recover it. 00:30:10.398 [2024-07-14 04:02:29.185135] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.398 [2024-07-14 04:02:29.185299] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.398 [2024-07-14 04:02:29.185325] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.398 [2024-07-14 04:02:29.185339] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.398 [2024-07-14 04:02:29.185352] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:10.398 [2024-07-14 04:02:29.185382] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:10.398 qpair failed and we were unable to recover it. 
00:30:10.398 [2024-07-14 04:02:29.195166] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.398 [2024-07-14 04:02:29.195322] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.398 [2024-07-14 04:02:29.195348] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.398 [2024-07-14 04:02:29.195363] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.398 [2024-07-14 04:02:29.195377] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:10.398 [2024-07-14 04:02:29.195419] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:10.398 qpair failed and we were unable to recover it. 00:30:10.398 [2024-07-14 04:02:29.205207] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.398 [2024-07-14 04:02:29.205357] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.398 [2024-07-14 04:02:29.205383] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.398 [2024-07-14 04:02:29.205397] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.398 [2024-07-14 04:02:29.205410] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:10.398 [2024-07-14 04:02:29.205439] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:10.398 qpair failed and we were unable to recover it. 00:30:10.398 [2024-07-14 04:02:29.215224] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.398 [2024-07-14 04:02:29.215390] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.398 [2024-07-14 04:02:29.215416] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.398 [2024-07-14 04:02:29.215430] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.398 [2024-07-14 04:02:29.215443] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:10.398 [2024-07-14 04:02:29.215474] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:10.398 qpair failed and we were unable to recover it. 
00:30:10.398 [2024-07-14 04:02:29.225240] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.398 [2024-07-14 04:02:29.225397] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.398 [2024-07-14 04:02:29.225425] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.398 [2024-07-14 04:02:29.225439] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.398 [2024-07-14 04:02:29.225459] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:10.398 [2024-07-14 04:02:29.225492] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:10.398 qpair failed and we were unable to recover it. 00:30:10.399 [2024-07-14 04:02:29.235293] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.399 [2024-07-14 04:02:29.235515] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.399 [2024-07-14 04:02:29.235553] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.399 [2024-07-14 04:02:29.235567] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.399 [2024-07-14 04:02:29.235580] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:10.399 [2024-07-14 04:02:29.235612] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:10.399 qpair failed and we were unable to recover it. 00:30:10.399 [2024-07-14 04:02:29.245351] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.399 [2024-07-14 04:02:29.245560] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.399 [2024-07-14 04:02:29.245586] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.399 [2024-07-14 04:02:29.245601] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.399 [2024-07-14 04:02:29.245614] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:10.399 [2024-07-14 04:02:29.245644] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:10.399 qpair failed and we were unable to recover it. 
00:30:10.399 [2024-07-14 04:02:29.255372] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.399 [2024-07-14 04:02:29.255523] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.399 [2024-07-14 04:02:29.255550] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.399 [2024-07-14 04:02:29.255565] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.399 [2024-07-14 04:02:29.255578] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:10.399 [2024-07-14 04:02:29.255608] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:10.399 qpair failed and we were unable to recover it. 00:30:10.399 [2024-07-14 04:02:29.265388] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.399 [2024-07-14 04:02:29.265551] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.399 [2024-07-14 04:02:29.265578] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.399 [2024-07-14 04:02:29.265592] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.399 [2024-07-14 04:02:29.265605] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:10.399 [2024-07-14 04:02:29.265634] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:10.399 qpair failed and we were unable to recover it. 00:30:10.399 [2024-07-14 04:02:29.275413] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.399 [2024-07-14 04:02:29.275604] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.399 [2024-07-14 04:02:29.275631] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.399 [2024-07-14 04:02:29.275646] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.399 [2024-07-14 04:02:29.275659] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:10.399 [2024-07-14 04:02:29.275689] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:10.399 qpair failed and we were unable to recover it. 
00:30:10.399 [2024-07-14 04:02:29.285415] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.399 [2024-07-14 04:02:29.285583] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.399 [2024-07-14 04:02:29.285609] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.399 [2024-07-14 04:02:29.285624] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.399 [2024-07-14 04:02:29.285637] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:10.399 [2024-07-14 04:02:29.285666] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:10.399 qpair failed and we were unable to recover it. 00:30:10.399 [2024-07-14 04:02:29.295487] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.399 [2024-07-14 04:02:29.295644] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.399 [2024-07-14 04:02:29.295681] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.399 [2024-07-14 04:02:29.295695] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.399 [2024-07-14 04:02:29.295709] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:10.399 [2024-07-14 04:02:29.295738] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:10.399 qpair failed and we were unable to recover it. 00:30:10.399 [2024-07-14 04:02:29.305599] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.399 [2024-07-14 04:02:29.305757] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.399 [2024-07-14 04:02:29.305783] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.399 [2024-07-14 04:02:29.305797] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.399 [2024-07-14 04:02:29.305810] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:10.399 [2024-07-14 04:02:29.305841] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:10.399 qpair failed and we were unable to recover it. 
00:30:10.399 [2024-07-14 04:02:29.315538] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.399 [2024-07-14 04:02:29.315695] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.399 [2024-07-14 04:02:29.315721] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.399 [2024-07-14 04:02:29.315742] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.399 [2024-07-14 04:02:29.315757] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:10.399 [2024-07-14 04:02:29.315786] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:10.399 qpair failed and we were unable to recover it. 00:30:10.399 [2024-07-14 04:02:29.325621] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.399 [2024-07-14 04:02:29.325778] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.399 [2024-07-14 04:02:29.325803] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.399 [2024-07-14 04:02:29.325817] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.399 [2024-07-14 04:02:29.325830] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:10.399 [2024-07-14 04:02:29.325859] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:10.399 qpair failed and we were unable to recover it. 00:30:10.399 [2024-07-14 04:02:29.335555] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.399 [2024-07-14 04:02:29.335713] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.399 [2024-07-14 04:02:29.335739] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.399 [2024-07-14 04:02:29.335754] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.399 [2024-07-14 04:02:29.335767] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:10.399 [2024-07-14 04:02:29.335796] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:10.399 qpair failed and we were unable to recover it. 
00:30:10.660 [2024-07-14 04:02:29.345609] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.660 [2024-07-14 04:02:29.345771] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.660 [2024-07-14 04:02:29.345797] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.660 [2024-07-14 04:02:29.345811] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.660 [2024-07-14 04:02:29.345825] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:10.660 [2024-07-14 04:02:29.345854] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:10.660 qpair failed and we were unable to recover it. 00:30:10.660 [2024-07-14 04:02:29.355617] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.660 [2024-07-14 04:02:29.355775] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.660 [2024-07-14 04:02:29.355802] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.660 [2024-07-14 04:02:29.355817] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.660 [2024-07-14 04:02:29.355830] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:10.661 [2024-07-14 04:02:29.355861] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:10.661 qpair failed and we were unable to recover it. 00:30:10.661 [2024-07-14 04:02:29.365654] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.661 [2024-07-14 04:02:29.365806] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.661 [2024-07-14 04:02:29.365833] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.661 [2024-07-14 04:02:29.365847] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.661 [2024-07-14 04:02:29.365860] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:10.661 [2024-07-14 04:02:29.365902] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:10.661 qpair failed and we were unable to recover it. 
00:30:10.661 [2024-07-14 04:02:29.375656] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.661 [2024-07-14 04:02:29.375808] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.661 [2024-07-14 04:02:29.375833] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.661 [2024-07-14 04:02:29.375848] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.661 [2024-07-14 04:02:29.375861] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:10.661 [2024-07-14 04:02:29.375900] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:10.661 qpair failed and we were unable to recover it. 00:30:10.661 [2024-07-14 04:02:29.385703] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.661 [2024-07-14 04:02:29.385859] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.661 [2024-07-14 04:02:29.385891] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.661 [2024-07-14 04:02:29.385906] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.661 [2024-07-14 04:02:29.385919] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:10.661 [2024-07-14 04:02:29.385950] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:10.661 qpair failed and we were unable to recover it. 00:30:10.661 [2024-07-14 04:02:29.395715] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.661 [2024-07-14 04:02:29.395873] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.661 [2024-07-14 04:02:29.395900] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.661 [2024-07-14 04:02:29.395914] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.661 [2024-07-14 04:02:29.395927] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:10.661 [2024-07-14 04:02:29.395957] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:10.661 qpair failed and we were unable to recover it. 
00:30:10.661 [2024-07-14 04:02:29.405753] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.661 [2024-07-14 04:02:29.405914] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.661 [2024-07-14 04:02:29.405940] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.661 [2024-07-14 04:02:29.405962] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.661 [2024-07-14 04:02:29.405975] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:10.661 [2024-07-14 04:02:29.406005] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:10.661 qpair failed and we were unable to recover it. 00:30:10.661 [2024-07-14 04:02:29.415807] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.661 [2024-07-14 04:02:29.415969] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.661 [2024-07-14 04:02:29.415995] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.661 [2024-07-14 04:02:29.416010] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.661 [2024-07-14 04:02:29.416023] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:10.661 [2024-07-14 04:02:29.416053] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:10.661 qpair failed and we were unable to recover it. 00:30:10.661 [2024-07-14 04:02:29.425812] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.661 [2024-07-14 04:02:29.425992] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.661 [2024-07-14 04:02:29.426019] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.661 [2024-07-14 04:02:29.426033] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.661 [2024-07-14 04:02:29.426046] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:10.661 [2024-07-14 04:02:29.426076] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:10.661 qpair failed and we were unable to recover it. 
00:30:10.661 [2024-07-14 04:02:29.435852] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.661 [2024-07-14 04:02:29.436017] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.661 [2024-07-14 04:02:29.436043] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.661 [2024-07-14 04:02:29.436058] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.661 [2024-07-14 04:02:29.436071] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:10.661 [2024-07-14 04:02:29.436100] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:10.661 qpair failed and we were unable to recover it. 00:30:10.661 [2024-07-14 04:02:29.445859] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.661 [2024-07-14 04:02:29.446021] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.661 [2024-07-14 04:02:29.446046] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.661 [2024-07-14 04:02:29.446061] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.661 [2024-07-14 04:02:29.446074] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:10.661 [2024-07-14 04:02:29.446103] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:10.661 qpair failed and we were unable to recover it. 00:30:10.661 [2024-07-14 04:02:29.455914] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.661 [2024-07-14 04:02:29.456065] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.661 [2024-07-14 04:02:29.456090] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.661 [2024-07-14 04:02:29.456105] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.661 [2024-07-14 04:02:29.456118] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:10.661 [2024-07-14 04:02:29.456147] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:10.661 qpair failed and we were unable to recover it. 
00:30:10.661 [2024-07-14 04:02:29.465934] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.661 [2024-07-14 04:02:29.466089] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.661 [2024-07-14 04:02:29.466115] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.661 [2024-07-14 04:02:29.466130] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.661 [2024-07-14 04:02:29.466143] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:10.661 [2024-07-14 04:02:29.466172] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:10.661 qpair failed and we were unable to recover it. 00:30:10.661 [2024-07-14 04:02:29.475962] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.661 [2024-07-14 04:02:29.476129] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.661 [2024-07-14 04:02:29.476155] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.661 [2024-07-14 04:02:29.476169] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.661 [2024-07-14 04:02:29.476182] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:10.661 [2024-07-14 04:02:29.476211] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:10.661 qpair failed and we were unable to recover it. 00:30:10.661 [2024-07-14 04:02:29.486084] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.661 [2024-07-14 04:02:29.486266] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.661 [2024-07-14 04:02:29.486292] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.661 [2024-07-14 04:02:29.486306] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.661 [2024-07-14 04:02:29.486319] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:10.661 [2024-07-14 04:02:29.486361] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:10.661 qpair failed and we were unable to recover it. 
00:30:10.661 [2024-07-14 04:02:29.496070] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.661 [2024-07-14 04:02:29.496227] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.661 [2024-07-14 04:02:29.496258] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.661 [2024-07-14 04:02:29.496273] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.661 [2024-07-14 04:02:29.496287] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:10.662 [2024-07-14 04:02:29.496316] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:10.662 qpair failed and we were unable to recover it. 00:30:10.662 [2024-07-14 04:02:29.506051] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.662 [2024-07-14 04:02:29.506215] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.662 [2024-07-14 04:02:29.506242] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.662 [2024-07-14 04:02:29.506256] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.662 [2024-07-14 04:02:29.506269] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:10.662 [2024-07-14 04:02:29.506298] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:10.662 qpair failed and we were unable to recover it. 00:30:10.662 [2024-07-14 04:02:29.516079] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.662 [2024-07-14 04:02:29.516260] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.662 [2024-07-14 04:02:29.516286] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.662 [2024-07-14 04:02:29.516300] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.662 [2024-07-14 04:02:29.516313] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:10.662 [2024-07-14 04:02:29.516342] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:10.662 qpair failed and we were unable to recover it. 
00:30:10.662 [2024-07-14 04:02:29.526141] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.662 [2024-07-14 04:02:29.526318] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.662 [2024-07-14 04:02:29.526344] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.662 [2024-07-14 04:02:29.526359] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.662 [2024-07-14 04:02:29.526372] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:10.662 [2024-07-14 04:02:29.526402] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:10.662 qpair failed and we were unable to recover it. 00:30:10.662 [2024-07-14 04:02:29.536138] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.662 [2024-07-14 04:02:29.536293] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.662 [2024-07-14 04:02:29.536319] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.662 [2024-07-14 04:02:29.536333] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.662 [2024-07-14 04:02:29.536346] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:10.662 [2024-07-14 04:02:29.536385] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:10.662 qpair failed and we were unable to recover it. 00:30:10.662 [2024-07-14 04:02:29.546268] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.662 [2024-07-14 04:02:29.546450] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.662 [2024-07-14 04:02:29.546476] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.662 [2024-07-14 04:02:29.546491] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.662 [2024-07-14 04:02:29.546504] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:10.662 [2024-07-14 04:02:29.546534] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:10.662 qpair failed and we were unable to recover it. 
00:30:10.662 [2024-07-14 04:02:29.556197] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.662 [2024-07-14 04:02:29.556356] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.662 [2024-07-14 04:02:29.556382] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.662 [2024-07-14 04:02:29.556396] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.662 [2024-07-14 04:02:29.556409] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:10.662 [2024-07-14 04:02:29.556438] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:10.662 qpair failed and we were unable to recover it. 00:30:10.662 [2024-07-14 04:02:29.566309] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.662 [2024-07-14 04:02:29.566485] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.662 [2024-07-14 04:02:29.566511] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.662 [2024-07-14 04:02:29.566526] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.662 [2024-07-14 04:02:29.566539] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:10.662 [2024-07-14 04:02:29.566580] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:10.662 qpair failed and we were unable to recover it. 00:30:10.662 [2024-07-14 04:02:29.576314] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.662 [2024-07-14 04:02:29.576471] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.662 [2024-07-14 04:02:29.576497] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.662 [2024-07-14 04:02:29.576511] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.662 [2024-07-14 04:02:29.576524] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:10.662 [2024-07-14 04:02:29.576553] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:10.662 qpair failed and we were unable to recover it. 
00:30:10.662 [2024-07-14 04:02:29.586306] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.662 [2024-07-14 04:02:29.586463] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.662 [2024-07-14 04:02:29.586495] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.662 [2024-07-14 04:02:29.586510] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.662 [2024-07-14 04:02:29.586522] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:10.662 [2024-07-14 04:02:29.586552] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:10.662 qpair failed and we were unable to recover it. 00:30:10.662 [2024-07-14 04:02:29.596300] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.662 [2024-07-14 04:02:29.596454] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.662 [2024-07-14 04:02:29.596480] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.662 [2024-07-14 04:02:29.596494] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.662 [2024-07-14 04:02:29.596507] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:10.662 [2024-07-14 04:02:29.596536] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:10.662 qpair failed and we were unable to recover it. 00:30:10.922 [2024-07-14 04:02:29.606362] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.922 [2024-07-14 04:02:29.606518] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.922 [2024-07-14 04:02:29.606543] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.922 [2024-07-14 04:02:29.606559] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.922 [2024-07-14 04:02:29.606572] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:10.922 [2024-07-14 04:02:29.606602] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:10.922 qpair failed and we were unable to recover it. 
00:30:10.922 [2024-07-14 04:02:29.616380] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.922 [2024-07-14 04:02:29.616549] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.922 [2024-07-14 04:02:29.616575] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.922 [2024-07-14 04:02:29.616589] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.922 [2024-07-14 04:02:29.616602] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:10.922 [2024-07-14 04:02:29.616633] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:10.922 qpair failed and we were unable to recover it. 00:30:10.922 [2024-07-14 04:02:29.626498] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.922 [2024-07-14 04:02:29.626673] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.922 [2024-07-14 04:02:29.626699] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.922 [2024-07-14 04:02:29.626715] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.922 [2024-07-14 04:02:29.626728] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:10.922 [2024-07-14 04:02:29.626775] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:10.922 qpair failed and we were unable to recover it. 00:30:10.922 [2024-07-14 04:02:29.636551] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.922 [2024-07-14 04:02:29.636734] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.922 [2024-07-14 04:02:29.636762] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.922 [2024-07-14 04:02:29.636777] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.922 [2024-07-14 04:02:29.636790] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:10.922 [2024-07-14 04:02:29.636822] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:10.922 qpair failed and we were unable to recover it. 
00:30:10.922 [2024-07-14 04:02:29.646476] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.922 [2024-07-14 04:02:29.646626] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.922 [2024-07-14 04:02:29.646652] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.922 [2024-07-14 04:02:29.646667] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.922 [2024-07-14 04:02:29.646681] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:10.922 [2024-07-14 04:02:29.646710] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:10.922 qpair failed and we were unable to recover it. 00:30:10.922 [2024-07-14 04:02:29.656489] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.922 [2024-07-14 04:02:29.656644] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.922 [2024-07-14 04:02:29.656670] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.922 [2024-07-14 04:02:29.656685] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.922 [2024-07-14 04:02:29.656698] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:10.922 [2024-07-14 04:02:29.656727] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:10.922 qpair failed and we were unable to recover it. 00:30:10.922 [2024-07-14 04:02:29.666536] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.922 [2024-07-14 04:02:29.666693] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.922 [2024-07-14 04:02:29.666719] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.922 [2024-07-14 04:02:29.666734] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.922 [2024-07-14 04:02:29.666747] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:10.922 [2024-07-14 04:02:29.666777] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:10.922 qpair failed and we were unable to recover it. 
00:30:10.922 [2024-07-14 04:02:29.676622] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.922 [2024-07-14 04:02:29.676781] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.922 [2024-07-14 04:02:29.676807] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.922 [2024-07-14 04:02:29.676822] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.922 [2024-07-14 04:02:29.676834] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:10.922 [2024-07-14 04:02:29.676871] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:10.922 qpair failed and we were unable to recover it. 00:30:10.922 [2024-07-14 04:02:29.686603] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.922 [2024-07-14 04:02:29.686756] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.922 [2024-07-14 04:02:29.686782] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.922 [2024-07-14 04:02:29.686796] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.922 [2024-07-14 04:02:29.686809] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:10.922 [2024-07-14 04:02:29.686840] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:10.922 qpair failed and we were unable to recover it. 00:30:10.922 [2024-07-14 04:02:29.696610] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.922 [2024-07-14 04:02:29.696786] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.922 [2024-07-14 04:02:29.696812] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.922 [2024-07-14 04:02:29.696826] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.922 [2024-07-14 04:02:29.696840] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:10.922 [2024-07-14 04:02:29.696876] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:10.923 qpair failed and we were unable to recover it. 
00:30:10.923 [2024-07-14 04:02:29.706634] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.923 [2024-07-14 04:02:29.706790] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.923 [2024-07-14 04:02:29.706815] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.923 [2024-07-14 04:02:29.706829] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.923 [2024-07-14 04:02:29.706842] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:10.923 [2024-07-14 04:02:29.706878] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:10.923 qpair failed and we were unable to recover it. 00:30:10.923 [2024-07-14 04:02:29.716661] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.923 [2024-07-14 04:02:29.716812] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.923 [2024-07-14 04:02:29.716838] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.923 [2024-07-14 04:02:29.716852] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.923 [2024-07-14 04:02:29.716880] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:10.923 [2024-07-14 04:02:29.716915] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:10.923 qpair failed and we were unable to recover it. 00:30:10.923 [2024-07-14 04:02:29.726684] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.923 [2024-07-14 04:02:29.726840] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.923 [2024-07-14 04:02:29.726872] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.923 [2024-07-14 04:02:29.726890] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.923 [2024-07-14 04:02:29.726905] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:10.923 [2024-07-14 04:02:29.726935] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:10.923 qpair failed and we were unable to recover it. 
00:30:10.923 [2024-07-14 04:02:29.736739] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.923 [2024-07-14 04:02:29.736899] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.923 [2024-07-14 04:02:29.736926] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.923 [2024-07-14 04:02:29.736941] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.923 [2024-07-14 04:02:29.736954] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:10.923 [2024-07-14 04:02:29.736985] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:10.923 qpair failed and we were unable to recover it. 00:30:10.923 [2024-07-14 04:02:29.746863] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.923 [2024-07-14 04:02:29.747047] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.923 [2024-07-14 04:02:29.747072] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.923 [2024-07-14 04:02:29.747087] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.923 [2024-07-14 04:02:29.747100] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:10.923 [2024-07-14 04:02:29.747129] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:10.923 qpair failed and we were unable to recover it. 00:30:10.923 [2024-07-14 04:02:29.756774] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.923 [2024-07-14 04:02:29.756932] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.923 [2024-07-14 04:02:29.756958] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.923 [2024-07-14 04:02:29.756972] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.923 [2024-07-14 04:02:29.756985] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:10.923 [2024-07-14 04:02:29.757016] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:10.923 qpair failed and we were unable to recover it. 
00:30:10.923 [2024-07-14 04:02:29.766792] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.923 [2024-07-14 04:02:29.766946] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.923 [2024-07-14 04:02:29.766973] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.923 [2024-07-14 04:02:29.766988] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.923 [2024-07-14 04:02:29.767000] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:10.923 [2024-07-14 04:02:29.767032] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:10.923 qpair failed and we were unable to recover it. 00:30:10.923 [2024-07-14 04:02:29.776816] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.923 [2024-07-14 04:02:29.776976] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.923 [2024-07-14 04:02:29.777002] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.923 [2024-07-14 04:02:29.777016] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.923 [2024-07-14 04:02:29.777029] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:10.923 [2024-07-14 04:02:29.777060] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:10.923 qpair failed and we were unable to recover it. 00:30:10.923 [2024-07-14 04:02:29.786890] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.923 [2024-07-14 04:02:29.787057] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.923 [2024-07-14 04:02:29.787082] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.923 [2024-07-14 04:02:29.787097] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.923 [2024-07-14 04:02:29.787110] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:10.923 [2024-07-14 04:02:29.787139] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:10.923 qpair failed and we were unable to recover it. 
00:30:10.923 [2024-07-14 04:02:29.796910] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.923 [2024-07-14 04:02:29.797068] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.923 [2024-07-14 04:02:29.797093] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.923 [2024-07-14 04:02:29.797108] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.923 [2024-07-14 04:02:29.797120] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:10.923 [2024-07-14 04:02:29.797149] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:10.923 qpair failed and we were unable to recover it. 00:30:10.923 [2024-07-14 04:02:29.806907] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.923 [2024-07-14 04:02:29.807058] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.923 [2024-07-14 04:02:29.807084] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.923 [2024-07-14 04:02:29.807105] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.923 [2024-07-14 04:02:29.807119] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:10.923 [2024-07-14 04:02:29.807149] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:10.923 qpair failed and we were unable to recover it. 00:30:10.923 [2024-07-14 04:02:29.816933] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.923 [2024-07-14 04:02:29.817085] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.923 [2024-07-14 04:02:29.817111] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.923 [2024-07-14 04:02:29.817125] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.923 [2024-07-14 04:02:29.817138] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:10.923 [2024-07-14 04:02:29.817168] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:10.923 qpair failed and we were unable to recover it. 
00:30:10.923 [2024-07-14 04:02:29.827000] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.923 [2024-07-14 04:02:29.827174] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.923 [2024-07-14 04:02:29.827199] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.923 [2024-07-14 04:02:29.827213] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.923 [2024-07-14 04:02:29.827226] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:10.923 [2024-07-14 04:02:29.827255] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:10.923 qpair failed and we were unable to recover it. 00:30:10.923 [2024-07-14 04:02:29.837006] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.923 [2024-07-14 04:02:29.837171] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.923 [2024-07-14 04:02:29.837197] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.923 [2024-07-14 04:02:29.837212] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.923 [2024-07-14 04:02:29.837226] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:10.923 [2024-07-14 04:02:29.837256] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:10.923 qpair failed and we were unable to recover it. 00:30:10.923 [2024-07-14 04:02:29.847121] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.924 [2024-07-14 04:02:29.847275] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.924 [2024-07-14 04:02:29.847300] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.924 [2024-07-14 04:02:29.847318] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.924 [2024-07-14 04:02:29.847330] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:10.924 [2024-07-14 04:02:29.847371] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:10.924 qpair failed and we were unable to recover it. 
00:30:10.924 [2024-07-14 04:02:29.857085] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.924 [2024-07-14 04:02:29.857254] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.924 [2024-07-14 04:02:29.857281] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.924 [2024-07-14 04:02:29.857296] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.924 [2024-07-14 04:02:29.857310] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:10.924 [2024-07-14 04:02:29.857339] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:10.924 qpair failed and we were unable to recover it. 00:30:11.183 [2024-07-14 04:02:29.867098] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.183 [2024-07-14 04:02:29.867310] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.183 [2024-07-14 04:02:29.867336] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.183 [2024-07-14 04:02:29.867350] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.183 [2024-07-14 04:02:29.867364] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:11.183 [2024-07-14 04:02:29.867394] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:11.183 qpair failed and we were unable to recover it. 00:30:11.183 [2024-07-14 04:02:29.877222] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.183 [2024-07-14 04:02:29.877387] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.183 [2024-07-14 04:02:29.877413] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.183 [2024-07-14 04:02:29.877427] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.183 [2024-07-14 04:02:29.877440] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:11.183 [2024-07-14 04:02:29.877481] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:11.183 qpair failed and we were unable to recover it. 
00:30:11.183 [2024-07-14 04:02:29.887187] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.183 [2024-07-14 04:02:29.887336] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.183 [2024-07-14 04:02:29.887362] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.183 [2024-07-14 04:02:29.887377] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.183 [2024-07-14 04:02:29.887390] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:11.183 [2024-07-14 04:02:29.887419] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:11.183 qpair failed and we were unable to recover it. 00:30:11.183 [2024-07-14 04:02:29.897192] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.183 [2024-07-14 04:02:29.897339] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.183 [2024-07-14 04:02:29.897366] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.183 [2024-07-14 04:02:29.897387] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.183 [2024-07-14 04:02:29.897402] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:11.183 [2024-07-14 04:02:29.897433] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:11.183 qpair failed and we were unable to recover it. 00:30:11.183 [2024-07-14 04:02:29.907214] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.183 [2024-07-14 04:02:29.907372] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.183 [2024-07-14 04:02:29.907398] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.183 [2024-07-14 04:02:29.907413] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.183 [2024-07-14 04:02:29.907427] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:11.183 [2024-07-14 04:02:29.907458] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:11.183 qpair failed and we were unable to recover it. 
00:30:11.183 [2024-07-14 04:02:29.917238] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.183 [2024-07-14 04:02:29.917390] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.183 [2024-07-14 04:02:29.917416] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.183 [2024-07-14 04:02:29.917430] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.183 [2024-07-14 04:02:29.917444] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:11.183 [2024-07-14 04:02:29.917474] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:11.183 qpair failed and we were unable to recover it. 00:30:11.183 [2024-07-14 04:02:29.927271] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.183 [2024-07-14 04:02:29.927417] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.183 [2024-07-14 04:02:29.927443] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.183 [2024-07-14 04:02:29.927457] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.183 [2024-07-14 04:02:29.927471] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:11.183 [2024-07-14 04:02:29.927500] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:11.183 qpair failed and we were unable to recover it. 00:30:11.183 [2024-07-14 04:02:29.937293] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.183 [2024-07-14 04:02:29.937463] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.183 [2024-07-14 04:02:29.937489] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.183 [2024-07-14 04:02:29.937504] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.183 [2024-07-14 04:02:29.937517] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:11.183 [2024-07-14 04:02:29.937547] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:11.183 qpair failed and we were unable to recover it. 
00:30:11.183 [2024-07-14 04:02:29.947326] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.183 [2024-07-14 04:02:29.947518] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.183 [2024-07-14 04:02:29.947544] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.183 [2024-07-14 04:02:29.947559] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.183 [2024-07-14 04:02:29.947572] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:11.183 [2024-07-14 04:02:29.947601] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:11.183 qpair failed and we were unable to recover it. 00:30:11.183 [2024-07-14 04:02:29.957385] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.183 [2024-07-14 04:02:29.957537] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.183 [2024-07-14 04:02:29.957563] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.183 [2024-07-14 04:02:29.957578] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.183 [2024-07-14 04:02:29.957591] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:11.183 [2024-07-14 04:02:29.957620] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:11.183 qpair failed and we were unable to recover it. 00:30:11.183 [2024-07-14 04:02:29.967412] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.183 [2024-07-14 04:02:29.967580] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.183 [2024-07-14 04:02:29.967606] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.183 [2024-07-14 04:02:29.967621] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.183 [2024-07-14 04:02:29.967634] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:11.183 [2024-07-14 04:02:29.967663] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:11.183 qpair failed and we were unable to recover it. 
00:30:11.183 [2024-07-14 04:02:29.977415] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.183 [2024-07-14 04:02:29.977575] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.183 [2024-07-14 04:02:29.977600] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.183 [2024-07-14 04:02:29.977615] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.183 [2024-07-14 04:02:29.977628] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f953c000b90 00:30:11.183 [2024-07-14 04:02:29.977658] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:11.183 qpair failed and we were unable to recover it. 00:30:11.183 [2024-07-14 04:02:29.977936] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb6100 is same with the state(5) to be set 00:30:11.183 [2024-07-14 04:02:29.987470] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.183 [2024-07-14 04:02:29.987642] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.183 [2024-07-14 04:02:29.987679] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.183 [2024-07-14 04:02:29.987697] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.183 [2024-07-14 04:02:29.987711] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f954c000b90 00:30:11.183 [2024-07-14 04:02:29.987742] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:11.183 qpair failed and we were unable to recover it. 00:30:11.183 [2024-07-14 04:02:29.997481] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.183 [2024-07-14 04:02:29.997632] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.183 [2024-07-14 04:02:29.997660] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.183 [2024-07-14 04:02:29.997675] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.183 [2024-07-14 04:02:29.997688] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f954c000b90 00:30:11.183 [2024-07-14 04:02:29.997718] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:11.183 qpair failed and we were unable to recover it. 
00:30:11.183 [2024-07-14 04:02:30.007612] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.183 [2024-07-14 04:02:30.007837] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.183 [2024-07-14 04:02:30.007933] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.183 [2024-07-14 04:02:30.007968] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.183 [2024-07-14 04:02:30.007988] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xea8610 00:30:11.183 [2024-07-14 04:02:30.008028] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:11.183 qpair failed and we were unable to recover it. 00:30:11.183 [2024-07-14 04:02:30.017552] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.183 [2024-07-14 04:02:30.017703] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.183 [2024-07-14 04:02:30.017732] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.183 [2024-07-14 04:02:30.017747] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.183 [2024-07-14 04:02:30.017760] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xea8610 00:30:11.183 [2024-07-14 04:02:30.017790] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:11.183 qpair failed and we were unable to recover it. 00:30:11.183 [2024-07-14 04:02:30.027580] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.183 [2024-07-14 04:02:30.027742] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.183 [2024-07-14 04:02:30.027775] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.183 [2024-07-14 04:02:30.027791] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.183 [2024-07-14 04:02:30.027813] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9544000b90 00:30:11.183 [2024-07-14 04:02:30.027846] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:11.183 qpair failed and we were unable to recover it. 
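Note on the repeated records above: each entry pairs a target-side rejection (ctrlr.c: Unknown controller ID 0x1) with the host-side outcome of the Fabrics CONNECT (sct 1, sc 130). Status 130 decimal is 0x82, the Fabrics "Connect Invalid Parameters" code, which is consistent with an I/O queue naming a controller ID the target has already torn down; that is the behaviour this disconnect test provokes on purpose. A small triage sketch (not part of the captured run; the log file name below is a placeholder) for counting these rejections per queue pair:

    # hypothetical helper run against a saved copy of this console log
    grep -o 'CQ transport error -6 ([^)]*) on qpair id [0-9]*' nvmf-tcp-autotest.log \
      | sort | uniq -c    # occurrences of each failing qpair id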
00:30:11.183 [2024-07-14 04:02:30.037600] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.183 [2024-07-14 04:02:30.037834] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.183 [2024-07-14 04:02:30.037862] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.183 [2024-07-14 04:02:30.037886] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.183 [2024-07-14 04:02:30.037900] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9544000b90 00:30:11.183 [2024-07-14 04:02:30.037931] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:11.183 qpair failed and we were unable to recover it. 00:30:11.183 [2024-07-14 04:02:30.038314] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xeb6100 (9): Bad file descriptor 00:30:11.183 Initializing NVMe Controllers 00:30:11.183 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:11.183 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:11.183 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:30:11.183 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:30:11.183 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:30:11.183 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:30:11.183 Initialization complete. Launching workers. 
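The "Attaching to NVMe over Fabrics controller" lines show the test application reaching the same subsystem that the CONNECT retries above were aimed at. For orientation only, a minimal manual equivalent (not part of this run; assumes the nvme-cli package and kernel nvme-tcp support on the initiator) would be:

    # hypothetical manual connect to the target exercised by this test
    modprobe nvme-tcp
    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
    nvme list                                      # confirm the namespace shows up
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1  # detach when done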
00:30:11.183 Starting thread on core 1 00:30:11.183 Starting thread on core 2 00:30:11.183 Starting thread on core 3 00:30:11.183 Starting thread on core 0 00:30:11.183 04:02:30 -- host/target_disconnect.sh@59 -- # sync 00:30:11.183 00:30:11.183 real 0m11.365s 00:30:11.183 user 0m19.795s 00:30:11.183 sys 0m5.645s 00:30:11.183 04:02:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:11.183 04:02:30 -- common/autotest_common.sh@10 -- # set +x 00:30:11.183 ************************************ 00:30:11.183 END TEST nvmf_target_disconnect_tc2 00:30:11.183 ************************************ 00:30:11.183 04:02:30 -- host/target_disconnect.sh@80 -- # '[' -n '' ']' 00:30:11.183 04:02:30 -- host/target_disconnect.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:30:11.183 04:02:30 -- host/target_disconnect.sh@85 -- # nvmftestfini 00:30:11.183 04:02:30 -- nvmf/common.sh@476 -- # nvmfcleanup 00:30:11.183 04:02:30 -- nvmf/common.sh@116 -- # sync 00:30:11.183 04:02:30 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:30:11.183 04:02:30 -- nvmf/common.sh@119 -- # set +e 00:30:11.183 04:02:30 -- nvmf/common.sh@120 -- # for i in {1..20} 00:30:11.183 04:02:30 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:30:11.183 rmmod nvme_tcp 00:30:11.183 rmmod nvme_fabrics 00:30:11.183 rmmod nvme_keyring 00:30:11.441 04:02:30 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:30:11.441 04:02:30 -- nvmf/common.sh@123 -- # set -e 00:30:11.441 04:02:30 -- nvmf/common.sh@124 -- # return 0 00:30:11.441 04:02:30 -- nvmf/common.sh@477 -- # '[' -n 2511639 ']' 00:30:11.441 04:02:30 -- nvmf/common.sh@478 -- # killprocess 2511639 00:30:11.441 04:02:30 -- common/autotest_common.sh@926 -- # '[' -z 2511639 ']' 00:30:11.441 04:02:30 -- common/autotest_common.sh@930 -- # kill -0 2511639 00:30:11.441 04:02:30 -- common/autotest_common.sh@931 -- # uname 00:30:11.441 04:02:30 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:30:11.441 04:02:30 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2511639 00:30:11.441 04:02:30 -- common/autotest_common.sh@932 -- # process_name=reactor_4 00:30:11.441 04:02:30 -- common/autotest_common.sh@936 -- # '[' reactor_4 = sudo ']' 00:30:11.441 04:02:30 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2511639' 00:30:11.441 killing process with pid 2511639 00:30:11.441 04:02:30 -- common/autotest_common.sh@945 -- # kill 2511639 00:30:11.441 04:02:30 -- common/autotest_common.sh@950 -- # wait 2511639 00:30:11.701 04:02:30 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:30:11.701 04:02:30 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:30:11.701 04:02:30 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:30:11.701 04:02:30 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:11.702 04:02:30 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:30:11.702 04:02:30 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:11.702 04:02:30 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:11.702 04:02:30 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:13.607 04:02:32 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:30:13.607 00:30:13.607 real 0m16.149s 00:30:13.607 user 0m45.666s 00:30:13.607 sys 0m7.679s 00:30:13.607 04:02:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:13.607 04:02:32 -- common/autotest_common.sh@10 -- # set +x 00:30:13.607 ************************************ 00:30:13.607 END TEST nvmf_target_disconnect 00:30:13.607 
************************************ 00:30:13.607 04:02:32 -- nvmf/nvmf.sh@127 -- # timing_exit host 00:30:13.607 04:02:32 -- common/autotest_common.sh@718 -- # xtrace_disable 00:30:13.607 04:02:32 -- common/autotest_common.sh@10 -- # set +x 00:30:13.607 04:02:32 -- nvmf/nvmf.sh@129 -- # trap - SIGINT SIGTERM EXIT 00:30:13.607 00:30:13.607 real 22m27.016s 00:30:13.607 user 64m40.883s 00:30:13.607 sys 5m34.377s 00:30:13.607 04:02:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:13.607 04:02:32 -- common/autotest_common.sh@10 -- # set +x 00:30:13.607 ************************************ 00:30:13.608 END TEST nvmf_tcp 00:30:13.608 ************************************ 00:30:13.608 04:02:32 -- spdk/autotest.sh@296 -- # [[ 0 -eq 0 ]] 00:30:13.608 04:02:32 -- spdk/autotest.sh@297 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:30:13.608 04:02:32 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:30:13.608 04:02:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:13.608 04:02:32 -- common/autotest_common.sh@10 -- # set +x 00:30:13.608 ************************************ 00:30:13.608 START TEST spdkcli_nvmf_tcp 00:30:13.608 ************************************ 00:30:13.608 04:02:32 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:30:13.866 * Looking for test storage... 00:30:13.866 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:30:13.866 04:02:32 -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:30:13.866 04:02:32 -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:30:13.866 04:02:32 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:30:13.866 04:02:32 -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:13.866 04:02:32 -- nvmf/common.sh@7 -- # uname -s 00:30:13.866 04:02:32 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:13.866 04:02:32 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:13.866 04:02:32 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:13.866 04:02:32 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:13.866 04:02:32 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:13.866 04:02:32 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:13.866 04:02:32 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:13.866 04:02:32 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:13.866 04:02:32 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:13.866 04:02:32 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:13.866 04:02:32 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:13.866 04:02:32 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:13.866 04:02:32 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:13.866 04:02:32 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:13.866 04:02:32 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:13.866 04:02:32 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:13.866 04:02:32 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh 
]] 00:30:13.867 04:02:32 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:13.867 04:02:32 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:13.867 04:02:32 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:13.867 04:02:32 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:13.867 04:02:32 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:13.867 04:02:32 -- paths/export.sh@5 -- # export PATH 00:30:13.867 04:02:32 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:13.867 04:02:32 -- nvmf/common.sh@46 -- # : 0 00:30:13.867 04:02:32 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:30:13.867 04:02:32 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:30:13.867 04:02:32 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:30:13.867 04:02:32 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:13.867 04:02:32 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:13.867 04:02:32 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:30:13.867 04:02:32 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:30:13.867 04:02:32 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:30:13.867 04:02:32 -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:30:13.867 04:02:32 -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:30:13.867 04:02:32 -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:30:13.867 04:02:32 -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:30:13.867 04:02:32 -- common/autotest_common.sh@712 -- # xtrace_disable 00:30:13.867 04:02:32 -- common/autotest_common.sh@10 -- # set +x 00:30:13.867 04:02:32 -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:30:13.867 04:02:32 -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=2512745 00:30:13.867 04:02:32 -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:30:13.867 04:02:32 -- spdkcli/common.sh@34 -- # waitforlisten 2512745 00:30:13.867 04:02:32 -- common/autotest_common.sh@819 -- # '[' -z 2512745 ']' 00:30:13.867 04:02:32 -- common/autotest_common.sh@823 
-- # local rpc_addr=/var/tmp/spdk.sock 00:30:13.867 04:02:32 -- common/autotest_common.sh@824 -- # local max_retries=100 00:30:13.867 04:02:32 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:13.867 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:13.867 04:02:32 -- common/autotest_common.sh@828 -- # xtrace_disable 00:30:13.867 04:02:32 -- common/autotest_common.sh@10 -- # set +x 00:30:13.867 [2024-07-14 04:02:32.640636] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:30:13.867 [2024-07-14 04:02:32.640730] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2512745 ] 00:30:13.867 EAL: No free 2048 kB hugepages reported on node 1 00:30:13.867 [2024-07-14 04:02:32.725003] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:30:14.125 [2024-07-14 04:02:32.829643] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:30:14.125 [2024-07-14 04:02:32.829893] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:14.125 [2024-07-14 04:02:32.829903] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:14.125 04:02:32 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:30:14.125 04:02:32 -- common/autotest_common.sh@852 -- # return 0 00:30:14.125 04:02:32 -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:30:14.125 04:02:32 -- common/autotest_common.sh@718 -- # xtrace_disable 00:30:14.125 04:02:32 -- common/autotest_common.sh@10 -- # set +x 00:30:14.125 04:02:32 -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:30:14.125 04:02:32 -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:30:14.125 04:02:32 -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:30:14.125 04:02:32 -- common/autotest_common.sh@712 -- # xtrace_disable 00:30:14.125 04:02:32 -- common/autotest_common.sh@10 -- # set +x 00:30:14.125 04:02:32 -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:30:14.125 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:30:14.125 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:30:14.125 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:30:14.125 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:30:14.125 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:30:14.125 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:30:14.125 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:30:14.125 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:30:14.125 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:30:14.125 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:30:14.125 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 
00:30:14.125 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:30:14.125 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:30:14.125 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:30:14.125 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:30:14.125 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:30:14.125 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:30:14.125 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:30:14.125 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:30:14.125 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:30:14.125 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:30:14.125 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:30:14.125 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:30:14.125 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:30:14.125 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:30:14.125 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:30:14.125 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:30:14.125 ' 00:30:14.692 [2024-07-14 04:02:33.405031] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:30:17.225 [2024-07-14 04:02:35.586629] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:18.157 [2024-07-14 04:02:36.807006] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:30:20.688 [2024-07-14 04:02:39.066217] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:30:22.591 [2024-07-14 04:02:41.016795] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:30:23.967 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:30:23.967 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:30:23.967 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:30:23.967 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:30:23.967 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:30:23.967 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:30:23.967 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:30:23.967 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW 
max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:30:23.967 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:30:23.967 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:30:23.967 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:30:23.967 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:30:23.967 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:30:23.967 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:30:23.967 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:30:23.967 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:30:23.967 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:30:23.967 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:30:23.967 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:30:23.967 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:30:23.967 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:30:23.967 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:30:23.967 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:30:23.967 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:30:23.967 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:30:23.967 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:30:23.967 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:30:23.967 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:30:23.967 04:02:42 -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:30:23.967 04:02:42 -- common/autotest_common.sh@718 -- # xtrace_disable 00:30:23.967 04:02:42 -- common/autotest_common.sh@10 -- # set +x 00:30:23.967 04:02:42 -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:30:23.967 04:02:42 -- common/autotest_common.sh@712 -- # xtrace_disable 00:30:23.967 04:02:42 -- common/autotest_common.sh@10 -- # set +x 00:30:23.967 04:02:42 -- spdkcli/nvmf.sh@69 -- # check_match 00:30:23.967 04:02:42 -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:30:24.262 04:02:43 
-- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:30:24.262 04:02:43 -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:30:24.262 04:02:43 -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:30:24.262 04:02:43 -- common/autotest_common.sh@718 -- # xtrace_disable 00:30:24.262 04:02:43 -- common/autotest_common.sh@10 -- # set +x 00:30:24.262 04:02:43 -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:30:24.262 04:02:43 -- common/autotest_common.sh@712 -- # xtrace_disable 00:30:24.262 04:02:43 -- common/autotest_common.sh@10 -- # set +x 00:30:24.262 04:02:43 -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:30:24.262 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:30:24.262 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:30:24.262 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:30:24.262 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:30:24.262 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:30:24.262 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:30:24.262 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:30:24.262 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:30:24.262 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:30:24.262 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:30:24.262 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:30:24.262 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:30:24.262 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:30:24.262 ' 00:30:29.528 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:30:29.529 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:30:29.529 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:30:29.529 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:30:29.529 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:30:29.529 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:30:29.529 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:30:29.529 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:30:29.529 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:30:29.529 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:30:29.529 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:30:29.529 Executing 
command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:30:29.529 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:30:29.529 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:30:29.529 04:02:48 -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:30:29.529 04:02:48 -- common/autotest_common.sh@718 -- # xtrace_disable 00:30:29.529 04:02:48 -- common/autotest_common.sh@10 -- # set +x 00:30:29.529 04:02:48 -- spdkcli/nvmf.sh@90 -- # killprocess 2512745 00:30:29.529 04:02:48 -- common/autotest_common.sh@926 -- # '[' -z 2512745 ']' 00:30:29.529 04:02:48 -- common/autotest_common.sh@930 -- # kill -0 2512745 00:30:29.529 04:02:48 -- common/autotest_common.sh@931 -- # uname 00:30:29.529 04:02:48 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:30:29.529 04:02:48 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2512745 00:30:29.529 04:02:48 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:30:29.529 04:02:48 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:30:29.529 04:02:48 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2512745' 00:30:29.529 killing process with pid 2512745 00:30:29.529 04:02:48 -- common/autotest_common.sh@945 -- # kill 2512745 00:30:29.529 [2024-07-14 04:02:48.355546] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:30:29.529 04:02:48 -- common/autotest_common.sh@950 -- # wait 2512745 00:30:29.788 04:02:48 -- spdkcli/nvmf.sh@1 -- # cleanup 00:30:29.788 04:02:48 -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:30:29.788 04:02:48 -- spdkcli/common.sh@13 -- # '[' -n 2512745 ']' 00:30:29.788 04:02:48 -- spdkcli/common.sh@14 -- # killprocess 2512745 00:30:29.788 04:02:48 -- common/autotest_common.sh@926 -- # '[' -z 2512745 ']' 00:30:29.788 04:02:48 -- common/autotest_common.sh@930 -- # kill -0 2512745 00:30:29.788 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (2512745) - No such process 00:30:29.788 04:02:48 -- common/autotest_common.sh@953 -- # echo 'Process with pid 2512745 is not found' 00:30:29.788 Process with pid 2512745 is not found 00:30:29.788 04:02:48 -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:30:29.788 04:02:48 -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:30:29.788 04:02:48 -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:30:29.788 00:30:29.788 real 0m16.040s 00:30:29.788 user 0m33.820s 00:30:29.788 sys 0m0.875s 00:30:29.788 04:02:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:29.788 04:02:48 -- common/autotest_common.sh@10 -- # set +x 00:30:29.788 ************************************ 00:30:29.788 END TEST spdkcli_nvmf_tcp 00:30:29.788 ************************************ 00:30:29.788 04:02:48 -- spdk/autotest.sh@298 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:30:29.788 04:02:48 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:30:29.788 04:02:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:29.788 04:02:48 -- common/autotest_common.sh@10 -- # set +x 00:30:29.788 ************************************ 00:30:29.788 START TEST 
nvmf_identify_passthru 00:30:29.788 ************************************ 00:30:29.788 04:02:48 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:30:29.788 * Looking for test storage... 00:30:29.788 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:29.788 04:02:48 -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:29.788 04:02:48 -- nvmf/common.sh@7 -- # uname -s 00:30:29.788 04:02:48 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:29.788 04:02:48 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:29.788 04:02:48 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:29.788 04:02:48 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:29.788 04:02:48 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:29.788 04:02:48 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:29.788 04:02:48 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:29.788 04:02:48 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:29.788 04:02:48 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:29.788 04:02:48 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:29.788 04:02:48 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:29.788 04:02:48 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:29.788 04:02:48 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:29.788 04:02:48 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:29.788 04:02:48 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:29.788 04:02:48 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:29.788 04:02:48 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:29.788 04:02:48 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:29.788 04:02:48 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:29.788 04:02:48 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:29.788 04:02:48 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:29.788 04:02:48 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:29.788 04:02:48 -- paths/export.sh@5 -- # export PATH 00:30:29.788 
04:02:48 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:29.788 04:02:48 -- nvmf/common.sh@46 -- # : 0 00:30:29.788 04:02:48 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:30:29.788 04:02:48 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:30:29.788 04:02:48 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:30:29.788 04:02:48 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:29.788 04:02:48 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:29.788 04:02:48 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:30:29.788 04:02:48 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:30:29.788 04:02:48 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:30:29.788 04:02:48 -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:29.788 04:02:48 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:29.788 04:02:48 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:29.788 04:02:48 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:29.788 04:02:48 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:29.788 04:02:48 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:29.788 04:02:48 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:29.788 04:02:48 -- paths/export.sh@5 -- # export PATH 00:30:29.788 04:02:48 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:29.788 04:02:48 -- target/identify_passthru.sh@12 -- # nvmftestinit 00:30:29.788 04:02:48 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:30:29.788 04:02:48 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:29.788 04:02:48 -- nvmf/common.sh@436 -- # prepare_net_devs 00:30:29.788 04:02:48 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:30:29.788 04:02:48 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:30:29.788 04:02:48 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:29.788 04:02:48 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:30:29.788 04:02:48 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:29.788 04:02:48 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:30:29.788 04:02:48 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:30:29.788 04:02:48 -- nvmf/common.sh@284 -- # xtrace_disable 00:30:29.788 04:02:48 -- common/autotest_common.sh@10 -- # set +x 00:30:31.691 04:02:50 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:30:31.691 04:02:50 -- nvmf/common.sh@290 -- # pci_devs=() 00:30:31.691 04:02:50 -- nvmf/common.sh@290 -- # local -a pci_devs 00:30:31.691 04:02:50 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:30:31.691 04:02:50 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:30:31.691 04:02:50 -- nvmf/common.sh@292 -- # pci_drivers=() 00:30:31.691 04:02:50 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:30:31.691 04:02:50 -- nvmf/common.sh@294 -- # net_devs=() 00:30:31.691 04:02:50 -- nvmf/common.sh@294 -- # local -ga net_devs 00:30:31.691 04:02:50 -- nvmf/common.sh@295 -- # e810=() 00:30:31.691 04:02:50 -- nvmf/common.sh@295 -- # local -ga e810 00:30:31.691 04:02:50 -- nvmf/common.sh@296 -- # x722=() 00:30:31.691 04:02:50 -- nvmf/common.sh@296 -- # local -ga x722 00:30:31.691 04:02:50 -- nvmf/common.sh@297 -- # mlx=() 00:30:31.691 04:02:50 -- nvmf/common.sh@297 -- # local -ga mlx 00:30:31.691 04:02:50 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:31.691 04:02:50 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:31.691 04:02:50 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:31.691 04:02:50 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:31.691 04:02:50 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:31.691 04:02:50 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:31.691 04:02:50 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:31.691 04:02:50 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:31.691 04:02:50 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:31.691 04:02:50 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:31.691 04:02:50 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:31.691 04:02:50 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:30:31.691 04:02:50 -- nvmf/common.sh@320 -- # [[ tcp 
== rdma ]] 00:30:31.691 04:02:50 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:30:31.692 04:02:50 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:30:31.692 04:02:50 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:30:31.692 04:02:50 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:30:31.692 04:02:50 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:30:31.692 04:02:50 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:30:31.692 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:30:31.692 04:02:50 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:30:31.692 04:02:50 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:30:31.692 04:02:50 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:31.692 04:02:50 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:31.692 04:02:50 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:30:31.692 04:02:50 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:30:31.692 04:02:50 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:30:31.692 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:30:31.692 04:02:50 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:30:31.692 04:02:50 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:30:31.692 04:02:50 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:31.692 04:02:50 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:31.692 04:02:50 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:30:31.692 04:02:50 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:30:31.692 04:02:50 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:30:31.692 04:02:50 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:30:31.692 04:02:50 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:30:31.692 04:02:50 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:31.692 04:02:50 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:30:31.692 04:02:50 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:31.692 04:02:50 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:30:31.692 Found net devices under 0000:0a:00.0: cvl_0_0 00:30:31.692 04:02:50 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:30:31.692 04:02:50 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:30:31.692 04:02:50 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:31.692 04:02:50 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:30:31.692 04:02:50 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:31.692 04:02:50 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:30:31.692 Found net devices under 0000:0a:00.1: cvl_0_1 00:30:31.692 04:02:50 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:30:31.692 04:02:50 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:30:31.692 04:02:50 -- nvmf/common.sh@402 -- # is_hw=yes 00:30:31.692 04:02:50 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:30:31.692 04:02:50 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:30:31.692 04:02:50 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:30:31.692 04:02:50 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:31.692 04:02:50 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:31.692 04:02:50 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:31.692 04:02:50 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:30:31.692 04:02:50 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:31.692 04:02:50 -- nvmf/common.sh@236 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:31.692 04:02:50 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:30:31.692 04:02:50 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:31.692 04:02:50 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:31.692 04:02:50 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:30:31.692 04:02:50 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:30:31.692 04:02:50 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:30:31.692 04:02:50 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:31.692 04:02:50 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:31.692 04:02:50 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:31.692 04:02:50 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:30:31.692 04:02:50 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:31.952 04:02:50 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:31.952 04:02:50 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:31.952 04:02:50 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:30:31.952 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:31.952 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.161 ms 00:30:31.952 00:30:31.952 --- 10.0.0.2 ping statistics --- 00:30:31.952 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:31.952 rtt min/avg/max/mdev = 0.161/0.161/0.161/0.000 ms 00:30:31.952 04:02:50 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:31.952 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:31.952 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.122 ms 00:30:31.952 00:30:31.952 --- 10.0.0.1 ping statistics --- 00:30:31.952 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:31.952 rtt min/avg/max/mdev = 0.122/0.122/0.122/0.000 ms 00:30:31.952 04:02:50 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:31.952 04:02:50 -- nvmf/common.sh@410 -- # return 0 00:30:31.952 04:02:50 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:30:31.952 04:02:50 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:31.952 04:02:50 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:30:31.952 04:02:50 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:30:31.952 04:02:50 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:31.952 04:02:50 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:30:31.952 04:02:50 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:30:31.952 04:02:50 -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:30:31.952 04:02:50 -- common/autotest_common.sh@712 -- # xtrace_disable 00:30:31.952 04:02:50 -- common/autotest_common.sh@10 -- # set +x 00:30:31.952 04:02:50 -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:30:31.952 04:02:50 -- common/autotest_common.sh@1509 -- # bdfs=() 00:30:31.952 04:02:50 -- common/autotest_common.sh@1509 -- # local bdfs 00:30:31.952 04:02:50 -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:30:31.952 04:02:50 -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:30:31.952 04:02:50 -- common/autotest_common.sh@1498 -- # bdfs=() 00:30:31.952 04:02:50 -- common/autotest_common.sh@1498 -- # local bdfs 00:30:31.952 04:02:50 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r 
'.config[].params.traddr')) 00:30:31.953 04:02:50 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:30:31.953 04:02:50 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:30:31.953 04:02:50 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:30:31.953 04:02:50 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:88:00.0 00:30:31.953 04:02:50 -- common/autotest_common.sh@1512 -- # echo 0000:88:00.0 00:30:31.953 04:02:50 -- target/identify_passthru.sh@16 -- # bdf=0000:88:00.0 00:30:31.953 04:02:50 -- target/identify_passthru.sh@17 -- # '[' -z 0000:88:00.0 ']' 00:30:31.953 04:02:50 -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:30:31.953 04:02:50 -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:30:31.953 04:02:50 -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:30:31.953 EAL: No free 2048 kB hugepages reported on node 1 00:30:36.145 04:02:55 -- target/identify_passthru.sh@23 -- # nvme_serial_number=PHLJ916004901P0FGN 00:30:36.145 04:02:55 -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:30:36.145 04:02:55 -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:30:36.145 04:02:55 -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:30:36.404 EAL: No free 2048 kB hugepages reported on node 1 00:30:40.612 04:02:59 -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:30:40.612 04:02:59 -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:30:40.612 04:02:59 -- common/autotest_common.sh@718 -- # xtrace_disable 00:30:40.612 04:02:59 -- common/autotest_common.sh@10 -- # set +x 00:30:40.612 04:02:59 -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:30:40.612 04:02:59 -- common/autotest_common.sh@712 -- # xtrace_disable 00:30:40.612 04:02:59 -- common/autotest_common.sh@10 -- # set +x 00:30:40.612 04:02:59 -- target/identify_passthru.sh@31 -- # nvmfpid=2517458 00:30:40.612 04:02:59 -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:30:40.612 04:02:59 -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:40.612 04:02:59 -- target/identify_passthru.sh@35 -- # waitforlisten 2517458 00:30:40.612 04:02:59 -- common/autotest_common.sh@819 -- # '[' -z 2517458 ']' 00:30:40.612 04:02:59 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:40.612 04:02:59 -- common/autotest_common.sh@824 -- # local max_retries=100 00:30:40.612 04:02:59 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:40.612 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:40.612 04:02:59 -- common/autotest_common.sh@828 -- # xtrace_disable 00:30:40.612 04:02:59 -- common/autotest_common.sh@10 -- # set +x 00:30:40.613 [2024-07-14 04:02:59.311224] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
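For reference, the nvme_identify block traced above reduces to three commands: gen_nvme.sh lists the local NVMe controllers as JSON, jq pulls out the first PCIe address, and spdk_nvme_identify is scraped for the Serial Number and Model Number fields. A minimal standalone sketch (the SPDK_DIR shorthand is an assumption; the individual commands are the ones shown in the trace):

  SPDK_DIR=${SPDK_DIR:-/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk}   # assumed shorthand for this workspace
  # First local NVMe controller's PCIe address (e.g. 0000:88:00.0)
  bdf=$("$SPDK_DIR/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr' | head -n1)
  # Identify the controller over PCIe and scrape serial / model, as identify_passthru.sh does
  serial=$("$SPDK_DIR/build/bin/spdk_nvme_identify" -r "trtype:PCIe traddr:$bdf" -i 0 | grep 'Serial Number:' | awk '{print $3}')
  model=$("$SPDK_DIR/build/bin/spdk_nvme_identify" -r "trtype:PCIe traddr:$bdf" -i 0 | grep 'Model Number:' | awk '{print $3}')
  echo "bdf=$bdf serial=$serial model=$model"

Later in the test the same grep runs against the controller attached over fabrics ('trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'), and the passthru check simply requires the serial and model strings seen on both paths to match.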
00:30:40.613 [2024-07-14 04:02:59.311317] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:40.613 EAL: No free 2048 kB hugepages reported on node 1 00:30:40.613 [2024-07-14 04:02:59.375663] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:40.613 [2024-07-14 04:02:59.462688] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:30:40.613 [2024-07-14 04:02:59.462853] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:40.613 [2024-07-14 04:02:59.462891] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:40.613 [2024-07-14 04:02:59.462905] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:40.613 [2024-07-14 04:02:59.462985] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:40.613 [2024-07-14 04:02:59.463050] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:30:40.613 [2024-07-14 04:02:59.463245] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:30:40.613 [2024-07-14 04:02:59.463249] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:40.613 04:02:59 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:30:40.613 04:02:59 -- common/autotest_common.sh@852 -- # return 0 00:30:40.613 04:02:59 -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:30:40.613 04:02:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:40.613 04:02:59 -- common/autotest_common.sh@10 -- # set +x 00:30:40.613 INFO: Log level set to 20 00:30:40.613 INFO: Requests: 00:30:40.613 { 00:30:40.613 "jsonrpc": "2.0", 00:30:40.613 "method": "nvmf_set_config", 00:30:40.613 "id": 1, 00:30:40.613 "params": { 00:30:40.613 "admin_cmd_passthru": { 00:30:40.613 "identify_ctrlr": true 00:30:40.613 } 00:30:40.613 } 00:30:40.613 } 00:30:40.613 00:30:40.613 INFO: response: 00:30:40.613 { 00:30:40.613 "jsonrpc": "2.0", 00:30:40.613 "id": 1, 00:30:40.613 "result": true 00:30:40.613 } 00:30:40.613 00:30:40.613 04:02:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:40.613 04:02:59 -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:30:40.613 04:02:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:40.613 04:02:59 -- common/autotest_common.sh@10 -- # set +x 00:30:40.613 INFO: Setting log level to 20 00:30:40.613 INFO: Setting log level to 20 00:30:40.613 INFO: Log level set to 20 00:30:40.613 INFO: Log level set to 20 00:30:40.613 INFO: Requests: 00:30:40.613 { 00:30:40.613 "jsonrpc": "2.0", 00:30:40.613 "method": "framework_start_init", 00:30:40.613 "id": 1 00:30:40.613 } 00:30:40.613 00:30:40.613 INFO: Requests: 00:30:40.613 { 00:30:40.613 "jsonrpc": "2.0", 00:30:40.613 "method": "framework_start_init", 00:30:40.613 "id": 1 00:30:40.613 } 00:30:40.613 00:30:40.871 [2024-07-14 04:02:59.618073] nvmf_tgt.c: 423:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:30:40.871 INFO: response: 00:30:40.871 { 00:30:40.871 "jsonrpc": "2.0", 00:30:40.871 "id": 1, 00:30:40.871 "result": true 00:30:40.871 } 00:30:40.871 00:30:40.871 INFO: response: 00:30:40.871 { 00:30:40.871 "jsonrpc": "2.0", 00:30:40.871 "id": 1, 00:30:40.871 "result": true 00:30:40.871 } 00:30:40.871 00:30:40.871 04:02:59 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:40.871 04:02:59 -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:40.871 04:02:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:40.871 04:02:59 -- common/autotest_common.sh@10 -- # set +x 00:30:40.871 INFO: Setting log level to 40 00:30:40.871 INFO: Setting log level to 40 00:30:40.871 INFO: Setting log level to 40 00:30:40.871 [2024-07-14 04:02:59.627910] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:40.871 04:02:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:40.871 04:02:59 -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:30:40.871 04:02:59 -- common/autotest_common.sh@718 -- # xtrace_disable 00:30:40.871 04:02:59 -- common/autotest_common.sh@10 -- # set +x 00:30:40.871 04:02:59 -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0 00:30:40.871 04:02:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:40.871 04:02:59 -- common/autotest_common.sh@10 -- # set +x 00:30:44.164 Nvme0n1 00:30:44.164 04:03:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:44.164 04:03:02 -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:30:44.164 04:03:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:44.164 04:03:02 -- common/autotest_common.sh@10 -- # set +x 00:30:44.164 04:03:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:44.164 04:03:02 -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:30:44.164 04:03:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:44.164 04:03:02 -- common/autotest_common.sh@10 -- # set +x 00:30:44.164 04:03:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:44.164 04:03:02 -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:44.164 04:03:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:44.164 04:03:02 -- common/autotest_common.sh@10 -- # set +x 00:30:44.164 [2024-07-14 04:03:02.512153] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:44.164 04:03:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:44.164 04:03:02 -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:30:44.164 04:03:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:44.164 04:03:02 -- common/autotest_common.sh@10 -- # set +x 00:30:44.164 [2024-07-14 04:03:02.519827] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:30:44.164 [ 00:30:44.164 { 00:30:44.164 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:30:44.164 "subtype": "Discovery", 00:30:44.164 "listen_addresses": [], 00:30:44.164 "allow_any_host": true, 00:30:44.164 "hosts": [] 00:30:44.164 }, 00:30:44.164 { 00:30:44.164 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:30:44.164 "subtype": "NVMe", 00:30:44.164 "listen_addresses": [ 00:30:44.164 { 00:30:44.164 "transport": "TCP", 00:30:44.164 "trtype": "TCP", 00:30:44.164 "adrfam": "IPv4", 00:30:44.164 "traddr": "10.0.0.2", 00:30:44.164 "trsvcid": "4420" 00:30:44.164 } 00:30:44.164 ], 00:30:44.164 "allow_any_host": true, 00:30:44.164 "hosts": [], 00:30:44.164 "serial_number": "SPDK00000000000001", 
00:30:44.164 "model_number": "SPDK bdev Controller", 00:30:44.164 "max_namespaces": 1, 00:30:44.164 "min_cntlid": 1, 00:30:44.164 "max_cntlid": 65519, 00:30:44.164 "namespaces": [ 00:30:44.164 { 00:30:44.164 "nsid": 1, 00:30:44.164 "bdev_name": "Nvme0n1", 00:30:44.164 "name": "Nvme0n1", 00:30:44.164 "nguid": "A1AA358B085D4EB49CC5A608B7EA1995", 00:30:44.164 "uuid": "a1aa358b-085d-4eb4-9cc5-a608b7ea1995" 00:30:44.164 } 00:30:44.164 ] 00:30:44.164 } 00:30:44.164 ] 00:30:44.164 04:03:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:44.164 04:03:02 -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:30:44.164 04:03:02 -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:30:44.164 04:03:02 -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:30:44.164 EAL: No free 2048 kB hugepages reported on node 1 00:30:44.164 04:03:02 -- target/identify_passthru.sh@54 -- # nvmf_serial_number=PHLJ916004901P0FGN 00:30:44.164 04:03:02 -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:30:44.164 04:03:02 -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:30:44.164 04:03:02 -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:30:44.164 EAL: No free 2048 kB hugepages reported on node 1 00:30:44.164 04:03:02 -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:30:44.164 04:03:02 -- target/identify_passthru.sh@63 -- # '[' PHLJ916004901P0FGN '!=' PHLJ916004901P0FGN ']' 00:30:44.164 04:03:02 -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:30:44.164 04:03:02 -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:44.164 04:03:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:44.164 04:03:02 -- common/autotest_common.sh@10 -- # set +x 00:30:44.164 04:03:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:44.164 04:03:02 -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:30:44.164 04:03:02 -- target/identify_passthru.sh@77 -- # nvmftestfini 00:30:44.164 04:03:02 -- nvmf/common.sh@476 -- # nvmfcleanup 00:30:44.164 04:03:02 -- nvmf/common.sh@116 -- # sync 00:30:44.164 04:03:02 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:30:44.164 04:03:02 -- nvmf/common.sh@119 -- # set +e 00:30:44.164 04:03:02 -- nvmf/common.sh@120 -- # for i in {1..20} 00:30:44.164 04:03:02 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:30:44.164 rmmod nvme_tcp 00:30:44.164 rmmod nvme_fabrics 00:30:44.164 rmmod nvme_keyring 00:30:44.164 04:03:02 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:30:44.165 04:03:02 -- nvmf/common.sh@123 -- # set -e 00:30:44.165 04:03:02 -- nvmf/common.sh@124 -- # return 0 00:30:44.165 04:03:02 -- nvmf/common.sh@477 -- # '[' -n 2517458 ']' 00:30:44.165 04:03:02 -- nvmf/common.sh@478 -- # killprocess 2517458 00:30:44.165 04:03:02 -- common/autotest_common.sh@926 -- # '[' -z 2517458 ']' 00:30:44.165 04:03:02 -- common/autotest_common.sh@930 -- # kill -0 2517458 00:30:44.165 04:03:02 -- common/autotest_common.sh@931 -- # uname 00:30:44.165 04:03:02 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:30:44.165 04:03:02 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2517458 00:30:44.165 04:03:02 -- 
common/autotest_common.sh@932 -- # process_name=reactor_0 00:30:44.165 04:03:02 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:30:44.165 04:03:02 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2517458' 00:30:44.165 killing process with pid 2517458 00:30:44.165 04:03:02 -- common/autotest_common.sh@945 -- # kill 2517458 00:30:44.165 [2024-07-14 04:03:02.948820] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:30:44.165 04:03:02 -- common/autotest_common.sh@950 -- # wait 2517458 00:30:46.077 04:03:04 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:30:46.077 04:03:04 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:30:46.077 04:03:04 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:30:46.077 04:03:04 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:46.077 04:03:04 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:30:46.077 04:03:04 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:46.077 04:03:04 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:30:46.077 04:03:04 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:47.982 04:03:06 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:30:47.982 00:30:47.982 real 0m17.951s 00:30:47.982 user 0m26.607s 00:30:47.982 sys 0m2.248s 00:30:47.982 04:03:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:47.982 04:03:06 -- common/autotest_common.sh@10 -- # set +x 00:30:47.982 ************************************ 00:30:47.982 END TEST nvmf_identify_passthru 00:30:47.982 ************************************ 00:30:47.982 04:03:06 -- spdk/autotest.sh@300 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:30:47.982 04:03:06 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:30:47.982 04:03:06 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:47.982 04:03:06 -- common/autotest_common.sh@10 -- # set +x 00:30:47.982 ************************************ 00:30:47.982 START TEST nvmf_dif 00:30:47.982 ************************************ 00:30:47.982 04:03:06 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:30:47.982 * Looking for test storage... 
00:30:47.982 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:47.982 04:03:06 -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:47.982 04:03:06 -- nvmf/common.sh@7 -- # uname -s 00:30:47.982 04:03:06 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:47.982 04:03:06 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:47.982 04:03:06 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:47.982 04:03:06 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:47.982 04:03:06 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:47.982 04:03:06 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:47.982 04:03:06 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:47.982 04:03:06 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:47.982 04:03:06 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:47.982 04:03:06 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:47.982 04:03:06 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:47.982 04:03:06 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:47.982 04:03:06 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:47.982 04:03:06 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:47.982 04:03:06 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:47.982 04:03:06 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:47.982 04:03:06 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:47.982 04:03:06 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:47.982 04:03:06 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:47.982 04:03:06 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:47.982 04:03:06 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:47.983 04:03:06 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:47.983 04:03:06 -- paths/export.sh@5 -- # export PATH 00:30:47.983 04:03:06 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:47.983 04:03:06 -- nvmf/common.sh@46 -- # : 0 00:30:47.983 04:03:06 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:30:47.983 04:03:06 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:30:47.983 04:03:06 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:30:47.983 04:03:06 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:47.983 04:03:06 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:47.983 04:03:06 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:30:47.983 04:03:06 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:30:47.983 04:03:06 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:30:47.983 04:03:06 -- target/dif.sh@15 -- # NULL_META=16 00:30:47.983 04:03:06 -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:30:47.983 04:03:06 -- target/dif.sh@15 -- # NULL_SIZE=64 00:30:47.983 04:03:06 -- target/dif.sh@15 -- # NULL_DIF=1 00:30:47.983 04:03:06 -- target/dif.sh@135 -- # nvmftestinit 00:30:47.983 04:03:06 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:30:47.983 04:03:06 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:47.983 04:03:06 -- nvmf/common.sh@436 -- # prepare_net_devs 00:30:47.983 04:03:06 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:30:47.983 04:03:06 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:30:47.983 04:03:06 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:47.983 04:03:06 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:30:47.983 04:03:06 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:47.983 04:03:06 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:30:47.983 04:03:06 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:30:47.983 04:03:06 -- nvmf/common.sh@284 -- # xtrace_disable 00:30:47.983 04:03:06 -- common/autotest_common.sh@10 -- # set +x 00:30:49.938 04:03:08 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:30:49.938 04:03:08 -- nvmf/common.sh@290 -- # pci_devs=() 00:30:49.938 04:03:08 -- nvmf/common.sh@290 -- # local -a pci_devs 00:30:49.938 04:03:08 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:30:49.938 04:03:08 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:30:49.938 04:03:08 -- nvmf/common.sh@292 -- # pci_drivers=() 00:30:49.938 04:03:08 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:30:49.938 04:03:08 -- nvmf/common.sh@294 -- # net_devs=() 00:30:49.938 04:03:08 -- nvmf/common.sh@294 -- # local -ga net_devs 00:30:49.938 04:03:08 -- nvmf/common.sh@295 -- # e810=() 00:30:49.938 04:03:08 -- nvmf/common.sh@295 -- # local -ga e810 00:30:49.938 04:03:08 -- nvmf/common.sh@296 -- # x722=() 00:30:49.938 04:03:08 -- nvmf/common.sh@296 -- # local -ga x722 00:30:49.938 04:03:08 -- nvmf/common.sh@297 -- # mlx=() 00:30:49.938 04:03:08 -- nvmf/common.sh@297 -- # local -ga mlx 00:30:49.938 04:03:08 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:49.938 04:03:08 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:49.938 04:03:08 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:49.938 04:03:08 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 
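The device-discovery trace above groups the host's NICs by PCI vendor/device ID (the e810 array collects Intel 0x1592/0x159b, x722 collects Intel 0x37d2, and mlx collects the Mellanox 0x15b3 device IDs) and then maps each matching function to its kernel netdev through sysfs. The script does this via its pci_bus_cache helper; a rough standalone equivalent that reads sysfs directly (not the script's actual implementation) would be:

  # List Intel E810 (8086:159b) PCI functions and the netdevs bound to them
  for pci in /sys/bus/pci/devices/*; do
      vendor=$(cat "$pci/vendor")    # e.g. 0x8086
      device=$(cat "$pci/device")    # e.g. 0x159b
      [[ $vendor == 0x8086 && $device == 0x159b ]] || continue
      for net in "$pci"/net/*; do
          [[ -e $net ]] || continue
          echo "Found net device under ${pci##*/}: ${net##*/}"
      done
  done

On this node that enumeration yields the two ports 0000:0a:00.0 and 0000:0a:00.1 with the cvl_0_0 and cvl_0_1 netdevs reported in the log.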
00:30:49.938 04:03:08 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:49.938 04:03:08 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:49.938 04:03:08 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:49.938 04:03:08 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:49.938 04:03:08 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:49.938 04:03:08 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:49.938 04:03:08 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:49.938 04:03:08 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:30:49.938 04:03:08 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:30:49.938 04:03:08 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:30:49.938 04:03:08 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:30:49.938 04:03:08 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:30:49.938 04:03:08 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:30:49.938 04:03:08 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:30:49.938 04:03:08 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:30:49.938 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:30:49.938 04:03:08 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:30:49.938 04:03:08 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:30:49.938 04:03:08 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:49.938 04:03:08 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:49.938 04:03:08 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:30:49.938 04:03:08 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:30:49.938 04:03:08 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:30:49.938 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:30:49.938 04:03:08 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:30:49.938 04:03:08 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:30:49.938 04:03:08 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:49.938 04:03:08 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:49.938 04:03:08 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:30:49.938 04:03:08 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:30:49.938 04:03:08 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:30:49.938 04:03:08 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:30:49.938 04:03:08 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:30:49.938 04:03:08 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:49.938 04:03:08 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:30:49.938 04:03:08 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:49.938 04:03:08 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:30:49.938 Found net devices under 0000:0a:00.0: cvl_0_0 00:30:49.938 04:03:08 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:30:49.938 04:03:08 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:30:49.938 04:03:08 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:49.938 04:03:08 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:30:49.938 04:03:08 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:49.938 04:03:08 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:30:49.938 Found net devices under 0000:0a:00.1: cvl_0_1 00:30:49.938 04:03:08 -- nvmf/common.sh@389 -- # 
net_devs+=("${pci_net_devs[@]}") 00:30:49.939 04:03:08 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:30:49.939 04:03:08 -- nvmf/common.sh@402 -- # is_hw=yes 00:30:49.939 04:03:08 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:30:49.939 04:03:08 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:30:49.939 04:03:08 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:30:49.939 04:03:08 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:49.939 04:03:08 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:49.939 04:03:08 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:49.939 04:03:08 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:30:49.939 04:03:08 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:49.939 04:03:08 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:49.939 04:03:08 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:30:49.939 04:03:08 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:49.939 04:03:08 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:49.939 04:03:08 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:30:49.939 04:03:08 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:30:49.939 04:03:08 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:30:49.939 04:03:08 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:49.939 04:03:08 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:49.939 04:03:08 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:49.939 04:03:08 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:30:49.939 04:03:08 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:49.939 04:03:08 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:49.939 04:03:08 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:49.939 04:03:08 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:30:49.939 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:49.939 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.157 ms 00:30:49.939 00:30:49.939 --- 10.0.0.2 ping statistics --- 00:30:49.939 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:49.939 rtt min/avg/max/mdev = 0.157/0.157/0.157/0.000 ms 00:30:49.939 04:03:08 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:49.939 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:49.939 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.172 ms 00:30:49.939 00:30:49.939 --- 10.0.0.1 ping statistics --- 00:30:49.939 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:49.939 rtt min/avg/max/mdev = 0.172/0.172/0.172/0.000 ms 00:30:49.939 04:03:08 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:49.939 04:03:08 -- nvmf/common.sh@410 -- # return 0 00:30:49.939 04:03:08 -- nvmf/common.sh@438 -- # '[' iso == iso ']' 00:30:49.939 04:03:08 -- nvmf/common.sh@439 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:30:50.873 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:30:50.873 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:30:50.873 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:30:50.873 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:30:50.873 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:30:50.873 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:30:50.873 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:30:50.873 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:30:50.873 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:30:50.873 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:30:50.873 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:30:50.873 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:30:50.873 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:30:50.873 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:30:50.873 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:30:50.873 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:30:50.873 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:30:50.873 04:03:09 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:50.873 04:03:09 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:30:50.873 04:03:09 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:30:50.873 04:03:09 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:50.873 04:03:09 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:30:50.873 04:03:09 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:30:50.873 04:03:09 -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:30:50.873 04:03:09 -- target/dif.sh@137 -- # nvmfappstart 00:30:50.873 04:03:09 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:30:50.873 04:03:09 -- common/autotest_common.sh@712 -- # xtrace_disable 00:30:50.873 04:03:09 -- common/autotest_common.sh@10 -- # set +x 00:30:50.873 04:03:09 -- nvmf/common.sh@469 -- # nvmfpid=2521269 00:30:50.873 04:03:09 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:30:50.873 04:03:09 -- nvmf/common.sh@470 -- # waitforlisten 2521269 00:30:50.874 04:03:09 -- common/autotest_common.sh@819 -- # '[' -z 2521269 ']' 00:30:50.874 04:03:09 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:50.874 04:03:09 -- common/autotest_common.sh@824 -- # local max_retries=100 00:30:50.874 04:03:09 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:50.874 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
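The nvmf_tcp_init sequence traced above (and earlier for the identify_passthru run) wires the two E810 ports back to back through a network namespace, so the target side (cvl_0_0, 10.0.0.2) and the initiator side (cvl_0_1, 10.0.0.1) get separate IP stacks on a single host. Condensed from the trace, the setup is:

  ip netns add cvl_0_0_ns_spdk                                   # namespace for the target port
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # move the target port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
  ping -c 1 10.0.0.2                                             # initiator -> target sanity check
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target -> initiator sanity check

Every nvmf_tgt instance in these tests is then launched inside that namespace (ip netns exec cvl_0_0_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF ...), which is why NVMF_APP is prefixed with NVMF_TARGET_NS_CMD in the trace.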
00:30:50.874 04:03:09 -- common/autotest_common.sh@828 -- # xtrace_disable 00:30:50.874 04:03:09 -- common/autotest_common.sh@10 -- # set +x 00:30:50.874 [2024-07-14 04:03:09.774655] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:30:50.874 [2024-07-14 04:03:09.774726] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:50.874 EAL: No free 2048 kB hugepages reported on node 1 00:30:51.132 [2024-07-14 04:03:09.839519] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:51.132 [2024-07-14 04:03:09.922038] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:30:51.132 [2024-07-14 04:03:09.922190] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:51.132 [2024-07-14 04:03:09.922207] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:51.132 [2024-07-14 04:03:09.922220] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:51.132 [2024-07-14 04:03:09.922247] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:52.071 04:03:10 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:30:52.071 04:03:10 -- common/autotest_common.sh@852 -- # return 0 00:30:52.071 04:03:10 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:30:52.071 04:03:10 -- common/autotest_common.sh@718 -- # xtrace_disable 00:30:52.071 04:03:10 -- common/autotest_common.sh@10 -- # set +x 00:30:52.071 04:03:10 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:52.071 04:03:10 -- target/dif.sh@139 -- # create_transport 00:30:52.071 04:03:10 -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:30:52.071 04:03:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:52.071 04:03:10 -- common/autotest_common.sh@10 -- # set +x 00:30:52.071 [2024-07-14 04:03:10.724569] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:52.071 04:03:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:52.071 04:03:10 -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:30:52.071 04:03:10 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:30:52.071 04:03:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:52.071 04:03:10 -- common/autotest_common.sh@10 -- # set +x 00:30:52.071 ************************************ 00:30:52.071 START TEST fio_dif_1_default 00:30:52.071 ************************************ 00:30:52.071 04:03:10 -- common/autotest_common.sh@1104 -- # fio_dif_1 00:30:52.071 04:03:10 -- target/dif.sh@86 -- # create_subsystems 0 00:30:52.071 04:03:10 -- target/dif.sh@28 -- # local sub 00:30:52.071 04:03:10 -- target/dif.sh@30 -- # for sub in "$@" 00:30:52.071 04:03:10 -- target/dif.sh@31 -- # create_subsystem 0 00:30:52.071 04:03:10 -- target/dif.sh@18 -- # local sub_id=0 00:30:52.071 04:03:10 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:30:52.071 04:03:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:52.071 04:03:10 -- common/autotest_common.sh@10 -- # set +x 00:30:52.071 bdev_null0 00:30:52.071 04:03:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:52.071 04:03:10 -- target/dif.sh@22 -- 
# rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:30:52.071 04:03:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:52.071 04:03:10 -- common/autotest_common.sh@10 -- # set +x 00:30:52.071 04:03:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:52.071 04:03:10 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:30:52.071 04:03:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:52.071 04:03:10 -- common/autotest_common.sh@10 -- # set +x 00:30:52.071 04:03:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:52.071 04:03:10 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:52.071 04:03:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:52.071 04:03:10 -- common/autotest_common.sh@10 -- # set +x 00:30:52.071 [2024-07-14 04:03:10.760813] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:52.071 04:03:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:52.071 04:03:10 -- target/dif.sh@87 -- # fio /dev/fd/62 00:30:52.071 04:03:10 -- target/dif.sh@87 -- # create_json_sub_conf 0 00:30:52.071 04:03:10 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:30:52.071 04:03:10 -- nvmf/common.sh@520 -- # config=() 00:30:52.071 04:03:10 -- nvmf/common.sh@520 -- # local subsystem config 00:30:52.071 04:03:10 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:30:52.071 04:03:10 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:52.071 04:03:10 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:30:52.071 { 00:30:52.071 "params": { 00:30:52.071 "name": "Nvme$subsystem", 00:30:52.071 "trtype": "$TEST_TRANSPORT", 00:30:52.071 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:52.071 "adrfam": "ipv4", 00:30:52.071 "trsvcid": "$NVMF_PORT", 00:30:52.071 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:52.071 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:52.071 "hdgst": ${hdgst:-false}, 00:30:52.071 "ddgst": ${ddgst:-false} 00:30:52.071 }, 00:30:52.071 "method": "bdev_nvme_attach_controller" 00:30:52.071 } 00:30:52.071 EOF 00:30:52.071 )") 00:30:52.071 04:03:10 -- common/autotest_common.sh@1335 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:52.071 04:03:10 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:30:52.071 04:03:10 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:52.071 04:03:10 -- common/autotest_common.sh@1318 -- # local sanitizers 00:30:52.071 04:03:10 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:52.071 04:03:10 -- target/dif.sh@82 -- # gen_fio_conf 00:30:52.071 04:03:10 -- common/autotest_common.sh@1320 -- # shift 00:30:52.071 04:03:10 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:30:52.071 04:03:10 -- target/dif.sh@54 -- # local file 00:30:52.071 04:03:10 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:30:52.071 04:03:10 -- target/dif.sh@56 -- # cat 00:30:52.071 04:03:10 -- nvmf/common.sh@542 -- # cat 00:30:52.071 04:03:10 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:52.071 04:03:10 -- 
common/autotest_common.sh@1324 -- # grep libasan 00:30:52.071 04:03:10 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:30:52.071 04:03:10 -- target/dif.sh@72 -- # (( file = 1 )) 00:30:52.071 04:03:10 -- target/dif.sh@72 -- # (( file <= files )) 00:30:52.071 04:03:10 -- nvmf/common.sh@544 -- # jq . 00:30:52.071 04:03:10 -- nvmf/common.sh@545 -- # IFS=, 00:30:52.071 04:03:10 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:30:52.071 "params": { 00:30:52.071 "name": "Nvme0", 00:30:52.071 "trtype": "tcp", 00:30:52.071 "traddr": "10.0.0.2", 00:30:52.071 "adrfam": "ipv4", 00:30:52.071 "trsvcid": "4420", 00:30:52.071 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:52.071 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:52.071 "hdgst": false, 00:30:52.071 "ddgst": false 00:30:52.071 }, 00:30:52.071 "method": "bdev_nvme_attach_controller" 00:30:52.071 }' 00:30:52.071 04:03:10 -- common/autotest_common.sh@1324 -- # asan_lib= 00:30:52.071 04:03:10 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:30:52.071 04:03:10 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:30:52.071 04:03:10 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:52.071 04:03:10 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:30:52.071 04:03:10 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:30:52.071 04:03:10 -- common/autotest_common.sh@1324 -- # asan_lib= 00:30:52.071 04:03:10 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:30:52.071 04:03:10 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:30:52.071 04:03:10 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:52.330 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:30:52.330 fio-3.35 00:30:52.330 Starting 1 thread 00:30:52.330 EAL: No free 2048 kB hugepages reported on node 1 00:30:52.896 [2024-07-14 04:03:11.597683] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
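For the fio_dif_1_default run starting above, fio drives the target through SPDK's bdev fio plugin rather than the kernel initiator: gen_nvmf_target_json prints the bdev_nvme_attach_controller parameters shown in the trace, and fio_bdev LD_PRELOADs build/fio/spdk_bdev and hands fio that JSON plus a generated job file over /dev/fd. A rough by-hand equivalent, with the inner params copied from the trace, the surrounding subsystems/bdev envelope assumed to be the usual SPDK JSON-config wrapper, and the /tmp file names purely hypothetical:

  # Contents of /tmp/nvmf_bdev.json (hypothetical path; the test streams this over /dev/fd/62):
  { "subsystems": [ { "subsystem": "bdev", "config": [ {
      "method": "bdev_nvme_attach_controller",
      "params": { "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2",
                  "adrfam": "ipv4", "trsvcid": "4420",
                  "subnqn": "nqn.2016-06.io.spdk:cnode0",
                  "hostnqn": "nqn.2016-06.io.spdk:host0",
                  "hdgst": false, "ddgst": false }
  } ] } ] }
  # fio invocation through the SPDK bdev plugin:
  LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev \
      /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/tmp/nvmf_bdev.json /tmp/dif_job.fio

The job file itself (built by gen_fio_conf, not shown in full in the log) points a single filename0 job at the attached bdev with bs=4k, rw=randread, iodepth=4, matching the fio banner above.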
00:30:52.897 [2024-07-14 04:03:11.597748] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:31:02.857 00:31:02.857 filename0: (groupid=0, jobs=1): err= 0: pid=2521635: Sun Jul 14 04:03:21 2024 00:31:02.857 read: IOPS=186, BW=744KiB/s (762kB/s)(7472KiB/10040msec) 00:31:02.857 slat (nsec): min=4553, max=75399, avg=10520.97, stdev=4950.65 00:31:02.857 clat (usec): min=860, max=48232, avg=21465.04, stdev=20474.45 00:31:02.857 lat (usec): min=868, max=48266, avg=21475.56, stdev=20474.05 00:31:02.857 clat percentiles (usec): 00:31:02.857 | 1.00th=[ 881], 5.00th=[ 906], 10.00th=[ 922], 20.00th=[ 947], 00:31:02.857 | 30.00th=[ 971], 40.00th=[ 988], 50.00th=[41157], 60.00th=[41681], 00:31:02.857 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:31:02.857 | 99.00th=[42206], 99.50th=[42206], 99.90th=[47973], 99.95th=[47973], 00:31:02.857 | 99.99th=[47973] 00:31:02.857 bw ( KiB/s): min= 704, max= 768, per=100.00%, avg=745.60, stdev=27.66, samples=20 00:31:02.857 iops : min= 176, max= 192, avg=186.40, stdev= 6.92, samples=20 00:31:02.857 lat (usec) : 1000=44.06% 00:31:02.857 lat (msec) : 2=5.84%, 50=50.11% 00:31:02.857 cpu : usr=90.40%, sys=9.33%, ctx=16, majf=0, minf=260 00:31:02.857 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:02.857 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:02.857 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:02.857 issued rwts: total=1868,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:02.857 latency : target=0, window=0, percentile=100.00%, depth=4 00:31:02.857 00:31:02.857 Run status group 0 (all jobs): 00:31:02.857 READ: bw=744KiB/s (762kB/s), 744KiB/s-744KiB/s (762kB/s-762kB/s), io=7472KiB (7651kB), run=10040-10040msec 00:31:03.115 04:03:21 -- target/dif.sh@88 -- # destroy_subsystems 0 00:31:03.115 04:03:21 -- target/dif.sh@43 -- # local sub 00:31:03.115 04:03:21 -- target/dif.sh@45 -- # for sub in "$@" 00:31:03.115 04:03:21 -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:03.115 04:03:21 -- target/dif.sh@36 -- # local sub_id=0 00:31:03.115 04:03:21 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:03.115 04:03:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:03.115 04:03:21 -- common/autotest_common.sh@10 -- # set +x 00:31:03.115 04:03:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:03.115 04:03:21 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:03.115 04:03:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:03.115 04:03:21 -- common/autotest_common.sh@10 -- # set +x 00:31:03.115 04:03:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:03.115 00:31:03.115 real 0m11.240s 00:31:03.115 user 0m10.352s 00:31:03.115 sys 0m1.230s 00:31:03.115 04:03:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:03.115 04:03:21 -- common/autotest_common.sh@10 -- # set +x 00:31:03.115 ************************************ 00:31:03.115 END TEST fio_dif_1_default 00:31:03.115 ************************************ 00:31:03.115 04:03:21 -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:31:03.115 04:03:21 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:31:03.115 04:03:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:31:03.115 04:03:21 -- common/autotest_common.sh@10 -- # set +x 00:31:03.115 ************************************ 00:31:03.115 START TEST 
fio_dif_1_multi_subsystems 00:31:03.115 ************************************ 00:31:03.115 04:03:22 -- common/autotest_common.sh@1104 -- # fio_dif_1_multi_subsystems 00:31:03.115 04:03:22 -- target/dif.sh@92 -- # local files=1 00:31:03.115 04:03:22 -- target/dif.sh@94 -- # create_subsystems 0 1 00:31:03.115 04:03:22 -- target/dif.sh@28 -- # local sub 00:31:03.115 04:03:22 -- target/dif.sh@30 -- # for sub in "$@" 00:31:03.115 04:03:22 -- target/dif.sh@31 -- # create_subsystem 0 00:31:03.115 04:03:22 -- target/dif.sh@18 -- # local sub_id=0 00:31:03.115 04:03:22 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:31:03.115 04:03:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:03.115 04:03:22 -- common/autotest_common.sh@10 -- # set +x 00:31:03.115 bdev_null0 00:31:03.115 04:03:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:03.115 04:03:22 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:03.115 04:03:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:03.115 04:03:22 -- common/autotest_common.sh@10 -- # set +x 00:31:03.115 04:03:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:03.115 04:03:22 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:03.115 04:03:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:03.115 04:03:22 -- common/autotest_common.sh@10 -- # set +x 00:31:03.115 04:03:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:03.115 04:03:22 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:03.115 04:03:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:03.115 04:03:22 -- common/autotest_common.sh@10 -- # set +x 00:31:03.115 [2024-07-14 04:03:22.033060] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:03.115 04:03:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:03.115 04:03:22 -- target/dif.sh@30 -- # for sub in "$@" 00:31:03.115 04:03:22 -- target/dif.sh@31 -- # create_subsystem 1 00:31:03.115 04:03:22 -- target/dif.sh@18 -- # local sub_id=1 00:31:03.115 04:03:22 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:31:03.115 04:03:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:03.115 04:03:22 -- common/autotest_common.sh@10 -- # set +x 00:31:03.115 bdev_null1 00:31:03.115 04:03:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:03.115 04:03:22 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:31:03.115 04:03:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:03.115 04:03:22 -- common/autotest_common.sh@10 -- # set +x 00:31:03.115 04:03:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:03.115 04:03:22 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:31:03.115 04:03:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:03.115 04:03:22 -- common/autotest_common.sh@10 -- # set +x 00:31:03.373 04:03:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:03.373 04:03:22 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:03.373 04:03:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:03.373 04:03:22 -- 
common/autotest_common.sh@10 -- # set +x 00:31:03.373 04:03:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:03.373 04:03:22 -- target/dif.sh@95 -- # fio /dev/fd/62 00:31:03.373 04:03:22 -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:31:03.373 04:03:22 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:31:03.373 04:03:22 -- nvmf/common.sh@520 -- # config=() 00:31:03.373 04:03:22 -- nvmf/common.sh@520 -- # local subsystem config 00:31:03.373 04:03:22 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:31:03.373 04:03:22 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:03.373 04:03:22 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:31:03.373 { 00:31:03.374 "params": { 00:31:03.374 "name": "Nvme$subsystem", 00:31:03.374 "trtype": "$TEST_TRANSPORT", 00:31:03.374 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:03.374 "adrfam": "ipv4", 00:31:03.374 "trsvcid": "$NVMF_PORT", 00:31:03.374 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:03.374 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:03.374 "hdgst": ${hdgst:-false}, 00:31:03.374 "ddgst": ${ddgst:-false} 00:31:03.374 }, 00:31:03.374 "method": "bdev_nvme_attach_controller" 00:31:03.374 } 00:31:03.374 EOF 00:31:03.374 )") 00:31:03.374 04:03:22 -- common/autotest_common.sh@1335 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:03.374 04:03:22 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:31:03.374 04:03:22 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:03.374 04:03:22 -- target/dif.sh@82 -- # gen_fio_conf 00:31:03.374 04:03:22 -- common/autotest_common.sh@1318 -- # local sanitizers 00:31:03.374 04:03:22 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:03.374 04:03:22 -- target/dif.sh@54 -- # local file 00:31:03.374 04:03:22 -- common/autotest_common.sh@1320 -- # shift 00:31:03.374 04:03:22 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:31:03.374 04:03:22 -- target/dif.sh@56 -- # cat 00:31:03.374 04:03:22 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:31:03.374 04:03:22 -- nvmf/common.sh@542 -- # cat 00:31:03.374 04:03:22 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:03.374 04:03:22 -- common/autotest_common.sh@1324 -- # grep libasan 00:31:03.374 04:03:22 -- target/dif.sh@72 -- # (( file = 1 )) 00:31:03.374 04:03:22 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:31:03.374 04:03:22 -- target/dif.sh@72 -- # (( file <= files )) 00:31:03.374 04:03:22 -- target/dif.sh@73 -- # cat 00:31:03.374 04:03:22 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:31:03.374 04:03:22 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:31:03.374 { 00:31:03.374 "params": { 00:31:03.374 "name": "Nvme$subsystem", 00:31:03.374 "trtype": "$TEST_TRANSPORT", 00:31:03.374 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:03.374 "adrfam": "ipv4", 00:31:03.374 "trsvcid": "$NVMF_PORT", 00:31:03.374 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:03.374 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:03.374 "hdgst": ${hdgst:-false}, 00:31:03.374 "ddgst": ${ddgst:-false} 00:31:03.374 }, 00:31:03.374 "method": "bdev_nvme_attach_controller" 00:31:03.374 } 00:31:03.374 EOF 00:31:03.374 )") 00:31:03.374 04:03:22 -- 
nvmf/common.sh@542 -- # cat 00:31:03.374 04:03:22 -- target/dif.sh@72 -- # (( file++ )) 00:31:03.374 04:03:22 -- target/dif.sh@72 -- # (( file <= files )) 00:31:03.374 04:03:22 -- nvmf/common.sh@544 -- # jq . 00:31:03.374 04:03:22 -- nvmf/common.sh@545 -- # IFS=, 00:31:03.374 04:03:22 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:31:03.374 "params": { 00:31:03.374 "name": "Nvme0", 00:31:03.374 "trtype": "tcp", 00:31:03.374 "traddr": "10.0.0.2", 00:31:03.374 "adrfam": "ipv4", 00:31:03.374 "trsvcid": "4420", 00:31:03.374 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:03.374 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:03.374 "hdgst": false, 00:31:03.374 "ddgst": false 00:31:03.374 }, 00:31:03.374 "method": "bdev_nvme_attach_controller" 00:31:03.374 },{ 00:31:03.374 "params": { 00:31:03.374 "name": "Nvme1", 00:31:03.374 "trtype": "tcp", 00:31:03.374 "traddr": "10.0.0.2", 00:31:03.374 "adrfam": "ipv4", 00:31:03.374 "trsvcid": "4420", 00:31:03.374 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:03.374 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:03.374 "hdgst": false, 00:31:03.374 "ddgst": false 00:31:03.374 }, 00:31:03.374 "method": "bdev_nvme_attach_controller" 00:31:03.374 }' 00:31:03.374 04:03:22 -- common/autotest_common.sh@1324 -- # asan_lib= 00:31:03.374 04:03:22 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:31:03.374 04:03:22 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:31:03.374 04:03:22 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:03.374 04:03:22 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:31:03.374 04:03:22 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:31:03.374 04:03:22 -- common/autotest_common.sh@1324 -- # asan_lib= 00:31:03.374 04:03:22 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:31:03.374 04:03:22 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:03.374 04:03:22 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:03.631 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:31:03.631 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:31:03.631 fio-3.35 00:31:03.631 Starting 2 threads 00:31:03.631 EAL: No free 2048 kB hugepages reported on node 1 00:31:04.196 [2024-07-14 04:03:22.961095] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
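The two DIF-protected subsystems exercised by this multi-subsystem test are created entirely through RPC against the running target; the rpc_cmd wrapper in the trace forwards the same arguments to scripts/rpc.py over /var/tmp/spdk.sock. Approximately, and assuming the workspace's rpc.py path:

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py    # assumed client path
  $RPC nvmf_create_transport -t tcp -o --dif-insert-or-strip              # TCP transport with DIF insert/strip
  for i in 0 1; do
      # 64 MB null bdev, 512-byte blocks + 16-byte metadata, DIF type 1
      $RPC bdev_null_create bdev_null$i 64 512 --md-size 16 --dif-type 1
      $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i --serial-number 53313233-$i --allow-any-host
      $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i bdev_null$i
      $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
  done

fio then attaches Nvme0 and Nvme1 to cnode0 and cnode1 using the two-entry bdev_nvme_attach_controller JSON printed just above and runs one randread job against each subsystem.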
00:31:04.196 [2024-07-14 04:03:22.961173] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:31:16.393 00:31:16.394 filename0: (groupid=0, jobs=1): err= 0: pid=2523074: Sun Jul 14 04:03:33 2024 00:31:16.394 read: IOPS=140, BW=563KiB/s (576kB/s)(5632KiB/10011msec) 00:31:16.394 slat (nsec): min=4080, max=35259, avg=13133.83, stdev=6687.92 00:31:16.394 clat (usec): min=887, max=46424, avg=28398.25, stdev=19196.28 00:31:16.394 lat (usec): min=897, max=46436, avg=28411.38, stdev=19194.08 00:31:16.394 clat percentiles (usec): 00:31:16.394 | 1.00th=[ 914], 5.00th=[ 938], 10.00th=[ 955], 20.00th=[ 996], 00:31:16.394 | 30.00th=[ 1287], 40.00th=[41157], 50.00th=[41681], 60.00th=[41681], 00:31:16.394 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:31:16.394 | 99.00th=[42206], 99.50th=[42206], 99.90th=[46400], 99.95th=[46400], 00:31:16.394 | 99.99th=[46400] 00:31:16.394 bw ( KiB/s): min= 352, max= 768, per=43.03%, avg=561.60, stdev=179.67, samples=20 00:31:16.394 iops : min= 88, max= 192, avg=140.40, stdev=44.92, samples=20 00:31:16.394 lat (usec) : 1000=20.81% 00:31:16.394 lat (msec) : 2=12.14%, 50=67.05% 00:31:16.394 cpu : usr=97.36%, sys=2.33%, ctx=17, majf=0, minf=153 00:31:16.394 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:16.394 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:16.394 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:16.394 issued rwts: total=1408,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:16.394 latency : target=0, window=0, percentile=100.00%, depth=4 00:31:16.394 filename1: (groupid=0, jobs=1): err= 0: pid=2523075: Sun Jul 14 04:03:33 2024 00:31:16.394 read: IOPS=185, BW=741KiB/s (759kB/s)(7424KiB/10015msec) 00:31:16.394 slat (nsec): min=4310, max=56232, avg=12563.60, stdev=7378.66 00:31:16.394 clat (usec): min=864, max=46131, avg=21543.48, stdev=20425.14 00:31:16.394 lat (usec): min=871, max=46157, avg=21556.04, stdev=20424.83 00:31:16.394 clat percentiles (usec): 00:31:16.394 | 1.00th=[ 889], 5.00th=[ 906], 10.00th=[ 922], 20.00th=[ 963], 00:31:16.394 | 30.00th=[ 996], 40.00th=[ 1037], 50.00th=[41157], 60.00th=[41681], 00:31:16.394 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:31:16.394 | 99.00th=[42206], 99.50th=[42206], 99.90th=[45876], 99.95th=[45876], 00:31:16.394 | 99.99th=[45876] 00:31:16.394 bw ( KiB/s): min= 672, max= 768, per=56.76%, avg=740.80, stdev=34.86, samples=20 00:31:16.394 iops : min= 168, max= 192, avg=185.20, stdev= 8.72, samples=20 00:31:16.394 lat (usec) : 1000=32.76% 00:31:16.394 lat (msec) : 2=17.03%, 50=50.22% 00:31:16.394 cpu : usr=96.80%, sys=2.91%, ctx=15, majf=0, minf=217 00:31:16.394 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:16.394 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:16.394 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:16.394 issued rwts: total=1856,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:16.394 latency : target=0, window=0, percentile=100.00%, depth=4 00:31:16.394 00:31:16.394 Run status group 0 (all jobs): 00:31:16.394 READ: bw=1304KiB/s (1335kB/s), 563KiB/s-741KiB/s (576kB/s-759kB/s), io=12.8MiB (13.4MB), run=10011-10015msec 00:31:16.394 04:03:33 -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:31:16.394 04:03:33 -- target/dif.sh@43 -- # local sub 00:31:16.394 04:03:33 -- target/dif.sh@45 -- # for sub in "$@" 00:31:16.394 04:03:33 -- 
target/dif.sh@46 -- # destroy_subsystem 0 00:31:16.394 04:03:33 -- target/dif.sh@36 -- # local sub_id=0 00:31:16.394 04:03:33 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:16.394 04:03:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:16.394 04:03:33 -- common/autotest_common.sh@10 -- # set +x 00:31:16.394 04:03:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:16.394 04:03:33 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:16.394 04:03:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:16.394 04:03:33 -- common/autotest_common.sh@10 -- # set +x 00:31:16.394 04:03:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:16.394 04:03:33 -- target/dif.sh@45 -- # for sub in "$@" 00:31:16.394 04:03:33 -- target/dif.sh@46 -- # destroy_subsystem 1 00:31:16.394 04:03:33 -- target/dif.sh@36 -- # local sub_id=1 00:31:16.394 04:03:33 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:16.394 04:03:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:16.394 04:03:33 -- common/autotest_common.sh@10 -- # set +x 00:31:16.394 04:03:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:16.394 04:03:33 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:31:16.394 04:03:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:16.394 04:03:33 -- common/autotest_common.sh@10 -- # set +x 00:31:16.394 04:03:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:16.394 00:31:16.394 real 0m11.345s 00:31:16.394 user 0m20.825s 00:31:16.394 sys 0m0.825s 00:31:16.394 04:03:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:16.394 04:03:33 -- common/autotest_common.sh@10 -- # set +x 00:31:16.394 ************************************ 00:31:16.394 END TEST fio_dif_1_multi_subsystems 00:31:16.394 ************************************ 00:31:16.394 04:03:33 -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:31:16.394 04:03:33 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:31:16.394 04:03:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:31:16.394 04:03:33 -- common/autotest_common.sh@10 -- # set +x 00:31:16.394 ************************************ 00:31:16.394 START TEST fio_dif_rand_params 00:31:16.394 ************************************ 00:31:16.394 04:03:33 -- common/autotest_common.sh@1104 -- # fio_dif_rand_params 00:31:16.394 04:03:33 -- target/dif.sh@100 -- # local NULL_DIF 00:31:16.394 04:03:33 -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:31:16.394 04:03:33 -- target/dif.sh@103 -- # NULL_DIF=3 00:31:16.394 04:03:33 -- target/dif.sh@103 -- # bs=128k 00:31:16.394 04:03:33 -- target/dif.sh@103 -- # numjobs=3 00:31:16.394 04:03:33 -- target/dif.sh@103 -- # iodepth=3 00:31:16.394 04:03:33 -- target/dif.sh@103 -- # runtime=5 00:31:16.394 04:03:33 -- target/dif.sh@105 -- # create_subsystems 0 00:31:16.394 04:03:33 -- target/dif.sh@28 -- # local sub 00:31:16.394 04:03:33 -- target/dif.sh@30 -- # for sub in "$@" 00:31:16.394 04:03:33 -- target/dif.sh@31 -- # create_subsystem 0 00:31:16.394 04:03:33 -- target/dif.sh@18 -- # local sub_id=0 00:31:16.394 04:03:33 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:31:16.394 04:03:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:16.394 04:03:33 -- common/autotest_common.sh@10 -- # set +x 00:31:16.394 bdev_null0 00:31:16.394 04:03:33 -- common/autotest_common.sh@579 -- # [[ 0 
== 0 ]] 00:31:16.394 04:03:33 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:16.394 04:03:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:16.394 04:03:33 -- common/autotest_common.sh@10 -- # set +x 00:31:16.394 04:03:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:16.394 04:03:33 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:16.394 04:03:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:16.394 04:03:33 -- common/autotest_common.sh@10 -- # set +x 00:31:16.394 04:03:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:16.394 04:03:33 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:16.394 04:03:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:16.394 04:03:33 -- common/autotest_common.sh@10 -- # set +x 00:31:16.394 [2024-07-14 04:03:33.411749] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:16.394 04:03:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:16.394 04:03:33 -- target/dif.sh@106 -- # fio /dev/fd/62 00:31:16.394 04:03:33 -- target/dif.sh@106 -- # create_json_sub_conf 0 00:31:16.394 04:03:33 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:31:16.394 04:03:33 -- nvmf/common.sh@520 -- # config=() 00:31:16.394 04:03:33 -- nvmf/common.sh@520 -- # local subsystem config 00:31:16.394 04:03:33 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:31:16.394 04:03:33 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:16.394 04:03:33 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:31:16.394 { 00:31:16.394 "params": { 00:31:16.394 "name": "Nvme$subsystem", 00:31:16.394 "trtype": "$TEST_TRANSPORT", 00:31:16.394 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:16.394 "adrfam": "ipv4", 00:31:16.394 "trsvcid": "$NVMF_PORT", 00:31:16.394 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:16.394 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:16.394 "hdgst": ${hdgst:-false}, 00:31:16.394 "ddgst": ${ddgst:-false} 00:31:16.394 }, 00:31:16.394 "method": "bdev_nvme_attach_controller" 00:31:16.394 } 00:31:16.394 EOF 00:31:16.394 )") 00:31:16.394 04:03:33 -- common/autotest_common.sh@1335 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:16.394 04:03:33 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:31:16.394 04:03:33 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:16.394 04:03:33 -- common/autotest_common.sh@1318 -- # local sanitizers 00:31:16.394 04:03:33 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:16.394 04:03:33 -- common/autotest_common.sh@1320 -- # shift 00:31:16.394 04:03:33 -- target/dif.sh@82 -- # gen_fio_conf 00:31:16.394 04:03:33 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:31:16.394 04:03:33 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:31:16.394 04:03:33 -- target/dif.sh@54 -- # local file 00:31:16.394 04:03:33 -- target/dif.sh@56 -- # cat 00:31:16.394 04:03:33 -- nvmf/common.sh@542 -- # cat 00:31:16.394 04:03:33 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 
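The create_subsystem helper traced above reduces to four RPCs per subsystem (null bdev, subsystem, namespace, listener), mirrored by two teardown RPCs at the end of each test. A condensed sketch using scripts/rpc.py, which is what the rpc_cmd wrapper is assumed to forward to; the socket path is an assumption, the RPC names and arguments are taken from the trace:

# Equivalent of "create_subsystem 0" for the NULL_DIF=3 case, issued by hand.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
RPC="$SPDK/scripts/rpc.py -s /var/tmp/spdk.sock"
$RPC bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3   # 64 MiB null bdev, 512 B blocks, DIF type 3 in 16 B metadata
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

# Teardown (destroy_subsystem 0), as seen after each fio run in this log:
$RPC nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
$RPC bdev_null_delete bdev_null0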
00:31:16.394 04:03:33 -- common/autotest_common.sh@1324 -- # grep libasan 00:31:16.394 04:03:33 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:31:16.394 04:03:33 -- target/dif.sh@72 -- # (( file = 1 )) 00:31:16.394 04:03:33 -- target/dif.sh@72 -- # (( file <= files )) 00:31:16.394 04:03:33 -- nvmf/common.sh@544 -- # jq . 00:31:16.394 04:03:33 -- nvmf/common.sh@545 -- # IFS=, 00:31:16.394 04:03:33 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:31:16.394 "params": { 00:31:16.394 "name": "Nvme0", 00:31:16.394 "trtype": "tcp", 00:31:16.394 "traddr": "10.0.0.2", 00:31:16.394 "adrfam": "ipv4", 00:31:16.394 "trsvcid": "4420", 00:31:16.394 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:16.394 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:16.395 "hdgst": false, 00:31:16.395 "ddgst": false 00:31:16.395 }, 00:31:16.395 "method": "bdev_nvme_attach_controller" 00:31:16.395 }' 00:31:16.395 04:03:33 -- common/autotest_common.sh@1324 -- # asan_lib= 00:31:16.395 04:03:33 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:31:16.395 04:03:33 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:31:16.395 04:03:33 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:16.395 04:03:33 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:31:16.395 04:03:33 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:31:16.395 04:03:33 -- common/autotest_common.sh@1324 -- # asan_lib= 00:31:16.395 04:03:33 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:31:16.395 04:03:33 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:16.395 04:03:33 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:16.395 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:31:16.395 ... 00:31:16.395 fio-3.35 00:31:16.395 Starting 3 threads 00:31:16.395 EAL: No free 2048 kB hugepages reported on node 1 00:31:16.395 [2024-07-14 04:03:34.245268] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
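The asan_lib / LD_PRELOAD lines above are the fio_plugin sanitizer probe: it runs ldd on the fio plugin, looks for an ASan runtime, and preloads that runtime ahead of the plugin so the sanitizer initializes first. A minimal sketch of that logic, with variable names following the trace (the surrounding helper lives in autotest_common.sh):

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
plugin=$SPDK/build/fio/spdk_bdev
preload=""
for sanitizer in libasan libclang_rt.asan; do
  # third ldd column is the resolved library path; empty if not linked
  asan_lib=$(ldd "$plugin" | grep "$sanitizer" | awk '{print $3}')
  if [[ -n "$asan_lib" ]]; then
    preload="$asan_lib"
    break
  fi
done
# In this log the build is not ASan-instrumented, so asan_lib stays empty and
# only the plugin itself ends up in LD_PRELOAD.
LD_PRELOAD="$preload $plugin" /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61   # fds supplied by the caller, as above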
00:31:16.395 [2024-07-14 04:03:34.245339] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:31:20.586 00:31:20.586 filename0: (groupid=0, jobs=1): err= 0: pid=2524522: Sun Jul 14 04:03:39 2024 00:31:20.586 read: IOPS=170, BW=21.3MiB/s (22.3MB/s)(107MiB/5044msec) 00:31:20.586 slat (nsec): min=4433, max=29564, avg=12781.34, stdev=2391.39 00:31:20.587 clat (usec): min=7298, max=55962, avg=17567.35, stdev=15591.95 00:31:20.587 lat (usec): min=7311, max=55976, avg=17580.13, stdev=15591.84 00:31:20.587 clat percentiles (usec): 00:31:20.587 | 1.00th=[ 7439], 5.00th=[ 8029], 10.00th=[ 8356], 20.00th=[ 8979], 00:31:20.587 | 30.00th=[ 9503], 40.00th=[ 9896], 50.00th=[10552], 60.00th=[11338], 00:31:20.587 | 70.00th=[12518], 80.00th=[15270], 90.00th=[50594], 95.00th=[51643], 00:31:20.587 | 99.00th=[53740], 99.50th=[53740], 99.90th=[55837], 99.95th=[55837], 00:31:20.587 | 99.99th=[55837] 00:31:20.587 bw ( KiB/s): min=16128, max=36352, per=33.87%, avg=21913.60, stdev=6235.99, samples=10 00:31:20.587 iops : min= 126, max= 284, avg=171.20, stdev=48.72, samples=10 00:31:20.587 lat (msec) : 10=41.49%, 20=40.79%, 50=6.29%, 100=11.42% 00:31:20.587 cpu : usr=93.48%, sys=6.07%, ctx=9, majf=0, minf=59 00:31:20.587 IO depths : 1=1.5%, 2=98.5%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:20.587 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:20.587 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:20.587 issued rwts: total=858,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:20.587 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:20.587 filename0: (groupid=0, jobs=1): err= 0: pid=2524523: Sun Jul 14 04:03:39 2024 00:31:20.587 read: IOPS=137, BW=17.2MiB/s (18.0MB/s)(86.8MiB/5045msec) 00:31:20.587 slat (nsec): min=4733, max=28153, avg=13039.51, stdev=1757.64 00:31:20.587 clat (usec): min=5218, max=97266, avg=21726.98, stdev=21835.53 00:31:20.587 lat (usec): min=5230, max=97279, avg=21740.02, stdev=21835.63 00:31:20.587 clat percentiles (usec): 00:31:20.587 | 1.00th=[ 5604], 5.00th=[ 6128], 10.00th=[ 6456], 20.00th=[ 7832], 00:31:20.587 | 30.00th=[ 8848], 40.00th=[ 9634], 50.00th=[11469], 60.00th=[13042], 00:31:20.587 | 70.00th=[14484], 80.00th=[50070], 90.00th=[54264], 95.00th=[56361], 00:31:20.587 | 99.00th=[94897], 99.50th=[95945], 99.90th=[96994], 99.95th=[96994], 00:31:20.587 | 99.99th=[96994] 00:31:20.587 bw ( KiB/s): min=13851, max=25856, per=27.31%, avg=17666.70, stdev=4203.78, samples=10 00:31:20.587 iops : min= 108, max= 202, avg=138.00, stdev=32.86, samples=10 00:31:20.587 lat (msec) : 10=41.64%, 20=34.01%, 50=3.89%, 100=20.46% 00:31:20.587 cpu : usr=89.93%, sys=7.61%, ctx=337, majf=0, minf=120 00:31:20.587 IO depths : 1=0.4%, 2=99.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:20.587 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:20.587 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:20.587 issued rwts: total=694,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:20.587 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:20.587 filename0: (groupid=0, jobs=1): err= 0: pid=2524524: Sun Jul 14 04:03:39 2024 00:31:20.587 read: IOPS=198, BW=24.8MiB/s (26.0MB/s)(125MiB/5034msec) 00:31:20.587 slat (nsec): min=4818, max=66964, avg=12558.52, stdev=3123.41 00:31:20.587 clat (usec): min=4835, max=93884, avg=15113.21, stdev=15383.70 00:31:20.587 lat (usec): min=4848, max=93897, avg=15125.77, stdev=15383.65 00:31:20.587 clat 
percentiles (usec): 00:31:20.587 | 1.00th=[ 5866], 5.00th=[ 6521], 10.00th=[ 6849], 20.00th=[ 7504], 00:31:20.587 | 30.00th=[ 8291], 40.00th=[ 8848], 50.00th=[ 9372], 60.00th=[10028], 00:31:20.587 | 70.00th=[11207], 80.00th=[12518], 90.00th=[49546], 95.00th=[51643], 00:31:20.587 | 99.00th=[54264], 99.50th=[90702], 99.90th=[93848], 99.95th=[93848], 00:31:20.587 | 99.99th=[93848] 00:31:20.587 bw ( KiB/s): min=17664, max=36096, per=39.38%, avg=25479.10, stdev=6483.36, samples=10 00:31:20.587 iops : min= 138, max= 282, avg=199.00, stdev=50.55, samples=10 00:31:20.587 lat (msec) : 10=59.82%, 20=26.75%, 50=4.41%, 100=9.02% 00:31:20.587 cpu : usr=92.35%, sys=6.72%, ctx=25, majf=0, minf=140 00:31:20.587 IO depths : 1=2.3%, 2=97.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:20.587 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:20.587 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:20.587 issued rwts: total=998,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:20.587 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:20.587 00:31:20.587 Run status group 0 (all jobs): 00:31:20.587 READ: bw=63.2MiB/s (66.2MB/s), 17.2MiB/s-24.8MiB/s (18.0MB/s-26.0MB/s), io=319MiB (334MB), run=5034-5045msec 00:31:20.846 04:03:39 -- target/dif.sh@107 -- # destroy_subsystems 0 00:31:20.846 04:03:39 -- target/dif.sh@43 -- # local sub 00:31:20.846 04:03:39 -- target/dif.sh@45 -- # for sub in "$@" 00:31:20.846 04:03:39 -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:20.846 04:03:39 -- target/dif.sh@36 -- # local sub_id=0 00:31:20.846 04:03:39 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:20.846 04:03:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:20.846 04:03:39 -- common/autotest_common.sh@10 -- # set +x 00:31:20.846 04:03:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:20.846 04:03:39 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:20.846 04:03:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:20.846 04:03:39 -- common/autotest_common.sh@10 -- # set +x 00:31:20.846 04:03:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:20.846 04:03:39 -- target/dif.sh@109 -- # NULL_DIF=2 00:31:20.846 04:03:39 -- target/dif.sh@109 -- # bs=4k 00:31:20.846 04:03:39 -- target/dif.sh@109 -- # numjobs=8 00:31:20.846 04:03:39 -- target/dif.sh@109 -- # iodepth=16 00:31:20.846 04:03:39 -- target/dif.sh@109 -- # runtime= 00:31:20.846 04:03:39 -- target/dif.sh@109 -- # files=2 00:31:20.846 04:03:39 -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:31:20.846 04:03:39 -- target/dif.sh@28 -- # local sub 00:31:20.846 04:03:39 -- target/dif.sh@30 -- # for sub in "$@" 00:31:20.846 04:03:39 -- target/dif.sh@31 -- # create_subsystem 0 00:31:20.846 04:03:39 -- target/dif.sh@18 -- # local sub_id=0 00:31:20.846 04:03:39 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:31:20.846 04:03:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:20.846 04:03:39 -- common/autotest_common.sh@10 -- # set +x 00:31:20.846 bdev_null0 00:31:20.846 04:03:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:20.846 04:03:39 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:20.846 04:03:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:20.846 04:03:39 -- common/autotest_common.sh@10 -- # set +x 00:31:20.846 04:03:39 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:20.846 04:03:39 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:20.846 04:03:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:20.846 04:03:39 -- common/autotest_common.sh@10 -- # set +x 00:31:20.846 04:03:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:20.846 04:03:39 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:20.846 04:03:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:20.846 04:03:39 -- common/autotest_common.sh@10 -- # set +x 00:31:20.846 [2024-07-14 04:03:39.655812] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:20.846 04:03:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:20.846 04:03:39 -- target/dif.sh@30 -- # for sub in "$@" 00:31:20.846 04:03:39 -- target/dif.sh@31 -- # create_subsystem 1 00:31:20.846 04:03:39 -- target/dif.sh@18 -- # local sub_id=1 00:31:20.846 04:03:39 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:31:20.846 04:03:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:20.846 04:03:39 -- common/autotest_common.sh@10 -- # set +x 00:31:20.846 bdev_null1 00:31:20.846 04:03:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:20.846 04:03:39 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:31:20.846 04:03:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:20.846 04:03:39 -- common/autotest_common.sh@10 -- # set +x 00:31:20.846 04:03:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:20.846 04:03:39 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:31:20.846 04:03:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:20.846 04:03:39 -- common/autotest_common.sh@10 -- # set +x 00:31:20.846 04:03:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:20.846 04:03:39 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:20.846 04:03:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:20.846 04:03:39 -- common/autotest_common.sh@10 -- # set +x 00:31:20.846 04:03:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:20.846 04:03:39 -- target/dif.sh@30 -- # for sub in "$@" 00:31:20.846 04:03:39 -- target/dif.sh@31 -- # create_subsystem 2 00:31:20.846 04:03:39 -- target/dif.sh@18 -- # local sub_id=2 00:31:20.846 04:03:39 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:31:20.846 04:03:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:20.846 04:03:39 -- common/autotest_common.sh@10 -- # set +x 00:31:20.846 bdev_null2 00:31:20.846 04:03:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:20.846 04:03:39 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:31:20.846 04:03:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:20.846 04:03:39 -- common/autotest_common.sh@10 -- # set +x 00:31:20.846 04:03:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:20.846 04:03:39 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:31:20.846 04:03:39 -- common/autotest_common.sh@551 -- # xtrace_disable 
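For this NULL_DIF=2 pass the parameters set above (bs=4k, numjobs=8, iodepth=16, files=2) translate into a three-section fio job file, one section per null-bdev subsystem. The following is only a sketch of what gen_fio_conf is assumed to write to /dev/fd/61, not its literal output:

[global]
thread=1
ioengine=spdk_bdev
rw=randread
bs=4k
numjobs=8
iodepth=16

[filename0]
filename=Nvme0n1

[filename1]
filename=Nvme1n1

[filename2]
filename=Nvme2n1

This is why fio reports "Starting 24 threads" in the run that follows: 3 filename sections times 8 jobs each.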
00:31:20.846 04:03:39 -- common/autotest_common.sh@10 -- # set +x 00:31:20.846 04:03:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:20.846 04:03:39 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:31:20.846 04:03:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:20.846 04:03:39 -- common/autotest_common.sh@10 -- # set +x 00:31:20.846 04:03:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:20.846 04:03:39 -- target/dif.sh@112 -- # fio /dev/fd/62 00:31:20.846 04:03:39 -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:31:20.846 04:03:39 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:20.846 04:03:39 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:31:20.846 04:03:39 -- common/autotest_common.sh@1335 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:20.846 04:03:39 -- target/dif.sh@82 -- # gen_fio_conf 00:31:20.846 04:03:39 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:31:20.846 04:03:39 -- nvmf/common.sh@520 -- # config=() 00:31:20.846 04:03:39 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:20.846 04:03:39 -- nvmf/common.sh@520 -- # local subsystem config 00:31:20.846 04:03:39 -- target/dif.sh@54 -- # local file 00:31:20.846 04:03:39 -- common/autotest_common.sh@1318 -- # local sanitizers 00:31:20.846 04:03:39 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:31:20.846 04:03:39 -- target/dif.sh@56 -- # cat 00:31:20.846 04:03:39 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:20.846 04:03:39 -- common/autotest_common.sh@1320 -- # shift 00:31:20.846 04:03:39 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:31:20.846 { 00:31:20.846 "params": { 00:31:20.846 "name": "Nvme$subsystem", 00:31:20.846 "trtype": "$TEST_TRANSPORT", 00:31:20.846 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:20.847 "adrfam": "ipv4", 00:31:20.847 "trsvcid": "$NVMF_PORT", 00:31:20.847 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:20.847 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:20.847 "hdgst": ${hdgst:-false}, 00:31:20.847 "ddgst": ${ddgst:-false} 00:31:20.847 }, 00:31:20.847 "method": "bdev_nvme_attach_controller" 00:31:20.847 } 00:31:20.847 EOF 00:31:20.847 )") 00:31:20.847 04:03:39 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:31:20.847 04:03:39 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:31:20.847 04:03:39 -- nvmf/common.sh@542 -- # cat 00:31:20.847 04:03:39 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:20.847 04:03:39 -- common/autotest_common.sh@1324 -- # grep libasan 00:31:20.847 04:03:39 -- target/dif.sh@72 -- # (( file = 1 )) 00:31:20.847 04:03:39 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:31:20.847 04:03:39 -- target/dif.sh@72 -- # (( file <= files )) 00:31:20.847 04:03:39 -- target/dif.sh@73 -- # cat 00:31:20.847 04:03:39 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:31:20.847 04:03:39 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:31:20.847 { 00:31:20.847 "params": { 00:31:20.847 "name": "Nvme$subsystem", 00:31:20.847 "trtype": "$TEST_TRANSPORT", 00:31:20.847 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:20.847 "adrfam": "ipv4", 
00:31:20.847 "trsvcid": "$NVMF_PORT", 00:31:20.847 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:20.847 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:20.847 "hdgst": ${hdgst:-false}, 00:31:20.847 "ddgst": ${ddgst:-false} 00:31:20.847 }, 00:31:20.847 "method": "bdev_nvme_attach_controller" 00:31:20.847 } 00:31:20.847 EOF 00:31:20.847 )") 00:31:20.847 04:03:39 -- target/dif.sh@72 -- # (( file++ )) 00:31:20.847 04:03:39 -- target/dif.sh@72 -- # (( file <= files )) 00:31:20.847 04:03:39 -- target/dif.sh@73 -- # cat 00:31:20.847 04:03:39 -- nvmf/common.sh@542 -- # cat 00:31:20.847 04:03:39 -- target/dif.sh@72 -- # (( file++ )) 00:31:20.847 04:03:39 -- target/dif.sh@72 -- # (( file <= files )) 00:31:20.847 04:03:39 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:31:20.847 04:03:39 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:31:20.847 { 00:31:20.847 "params": { 00:31:20.847 "name": "Nvme$subsystem", 00:31:20.847 "trtype": "$TEST_TRANSPORT", 00:31:20.847 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:20.847 "adrfam": "ipv4", 00:31:20.847 "trsvcid": "$NVMF_PORT", 00:31:20.847 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:20.847 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:20.847 "hdgst": ${hdgst:-false}, 00:31:20.847 "ddgst": ${ddgst:-false} 00:31:20.847 }, 00:31:20.847 "method": "bdev_nvme_attach_controller" 00:31:20.847 } 00:31:20.847 EOF 00:31:20.847 )") 00:31:20.847 04:03:39 -- nvmf/common.sh@542 -- # cat 00:31:20.847 04:03:39 -- nvmf/common.sh@544 -- # jq . 00:31:20.847 04:03:39 -- nvmf/common.sh@545 -- # IFS=, 00:31:20.847 04:03:39 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:31:20.847 "params": { 00:31:20.847 "name": "Nvme0", 00:31:20.847 "trtype": "tcp", 00:31:20.847 "traddr": "10.0.0.2", 00:31:20.847 "adrfam": "ipv4", 00:31:20.847 "trsvcid": "4420", 00:31:20.847 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:20.847 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:20.847 "hdgst": false, 00:31:20.847 "ddgst": false 00:31:20.847 }, 00:31:20.847 "method": "bdev_nvme_attach_controller" 00:31:20.847 },{ 00:31:20.847 "params": { 00:31:20.847 "name": "Nvme1", 00:31:20.847 "trtype": "tcp", 00:31:20.847 "traddr": "10.0.0.2", 00:31:20.847 "adrfam": "ipv4", 00:31:20.847 "trsvcid": "4420", 00:31:20.847 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:20.847 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:20.847 "hdgst": false, 00:31:20.847 "ddgst": false 00:31:20.847 }, 00:31:20.847 "method": "bdev_nvme_attach_controller" 00:31:20.847 },{ 00:31:20.847 "params": { 00:31:20.847 "name": "Nvme2", 00:31:20.847 "trtype": "tcp", 00:31:20.847 "traddr": "10.0.0.2", 00:31:20.847 "adrfam": "ipv4", 00:31:20.847 "trsvcid": "4420", 00:31:20.847 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:31:20.847 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:31:20.847 "hdgst": false, 00:31:20.847 "ddgst": false 00:31:20.847 }, 00:31:20.847 "method": "bdev_nvme_attach_controller" 00:31:20.847 }' 00:31:20.847 04:03:39 -- common/autotest_common.sh@1324 -- # asan_lib= 00:31:20.847 04:03:39 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:31:20.847 04:03:39 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:31:20.847 04:03:39 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:20.847 04:03:39 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:31:20.847 04:03:39 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:31:20.847 04:03:39 -- common/autotest_common.sh@1324 -- # asan_lib= 
00:31:20.847 04:03:39 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:31:20.847 04:03:39 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:20.847 04:03:39 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:21.105 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:31:21.105 ... 00:31:21.105 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:31:21.105 ... 00:31:21.105 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:31:21.105 ... 00:31:21.105 fio-3.35 00:31:21.105 Starting 24 threads 00:31:21.105 EAL: No free 2048 kB hugepages reported on node 1 00:31:22.094 [2024-07-14 04:03:40.956437] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:31:22.094 [2024-07-14 04:03:40.956517] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:31:34.298 00:31:34.298 filename0: (groupid=0, jobs=1): err= 0: pid=2525414: Sun Jul 14 04:03:51 2024 00:31:34.298 read: IOPS=494, BW=1977KiB/s (2024kB/s)(19.3MiB/10003msec) 00:31:34.298 slat (usec): min=8, max=173, avg=42.57, stdev=21.72 00:31:34.298 clat (usec): min=14425, max=48802, avg=31971.44, stdev=2038.42 00:31:34.298 lat (usec): min=14459, max=48836, avg=32014.01, stdev=2037.80 00:31:34.298 clat percentiles (usec): 00:31:34.298 | 1.00th=[29492], 5.00th=[30540], 10.00th=[31065], 20.00th=[31327], 00:31:34.298 | 30.00th=[31327], 40.00th=[31589], 50.00th=[31589], 60.00th=[31851], 00:31:34.298 | 70.00th=[32113], 80.00th=[32113], 90.00th=[32900], 95.00th=[34341], 00:31:34.298 | 99.00th=[40109], 99.50th=[41157], 99.90th=[48497], 99.95th=[48497], 00:31:34.298 | 99.99th=[49021] 00:31:34.298 bw ( KiB/s): min= 1792, max= 2048, per=4.17%, avg=1973.89, stdev=77.69, samples=19 00:31:34.298 iops : min= 448, max= 512, avg=493.47, stdev=19.42, samples=19 00:31:34.298 lat (msec) : 20=0.32%, 50=99.68% 00:31:34.298 cpu : usr=98.13%, sys=1.30%, ctx=148, majf=0, minf=26 00:31:34.298 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:31:34.298 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:34.298 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:34.298 issued rwts: total=4944,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:34.298 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:34.298 filename0: (groupid=0, jobs=1): err= 0: pid=2525415: Sun Jul 14 04:03:51 2024 00:31:34.298 read: IOPS=493, BW=1976KiB/s (2023kB/s)(19.3MiB/10009msec) 00:31:34.298 slat (usec): min=8, max=158, avg=51.20, stdev=29.83 00:31:34.298 clat (usec): min=20667, max=53568, avg=31948.74, stdev=1733.29 00:31:34.298 lat (usec): min=20737, max=53610, avg=31999.94, stdev=1729.42 00:31:34.298 clat percentiles (usec): 00:31:34.298 | 1.00th=[29492], 5.00th=[30540], 10.00th=[30802], 20.00th=[31327], 00:31:34.298 | 30.00th=[31327], 40.00th=[31589], 50.00th=[31589], 60.00th=[31851], 00:31:34.298 | 70.00th=[31851], 80.00th=[32113], 90.00th=[32900], 95.00th=[34866], 00:31:34.298 | 99.00th=[40109], 99.50th=[40633], 99.90th=[41157], 99.95th=[41157], 00:31:34.298 | 99.99th=[53740] 00:31:34.298 bw ( KiB/s): min= 1792, max= 2048, per=4.17%, avg=1973.89, stdev=77.69, samples=19 
00:31:34.298 iops : min= 448, max= 512, avg=493.47, stdev=19.42, samples=19 00:31:34.298 lat (msec) : 50=99.96%, 100=0.04% 00:31:34.298 cpu : usr=98.42%, sys=1.15%, ctx=19, majf=0, minf=26 00:31:34.298 IO depths : 1=6.1%, 2=12.3%, 4=24.9%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:31:34.298 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:34.298 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:34.298 issued rwts: total=4944,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:34.299 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:34.299 filename0: (groupid=0, jobs=1): err= 0: pid=2525416: Sun Jul 14 04:03:51 2024 00:31:34.299 read: IOPS=493, BW=1975KiB/s (2023kB/s)(19.3MiB/10008msec) 00:31:34.299 slat (usec): min=8, max=157, avg=66.99, stdev=33.05 00:31:34.299 clat (usec): min=12260, max=52470, avg=31835.96, stdev=2123.06 00:31:34.299 lat (usec): min=12360, max=52500, avg=31902.95, stdev=2118.11 00:31:34.299 clat percentiles (usec): 00:31:34.299 | 1.00th=[28181], 5.00th=[30016], 10.00th=[30540], 20.00th=[30802], 00:31:34.299 | 30.00th=[31065], 40.00th=[31327], 50.00th=[31589], 60.00th=[31851], 00:31:34.299 | 70.00th=[32113], 80.00th=[32375], 90.00th=[32900], 95.00th=[34866], 00:31:34.299 | 99.00th=[40633], 99.50th=[41681], 99.90th=[49021], 99.95th=[50594], 00:31:34.299 | 99.99th=[52691] 00:31:34.299 bw ( KiB/s): min= 1792, max= 2048, per=4.17%, avg=1973.05, stdev=76.93, samples=19 00:31:34.299 iops : min= 448, max= 512, avg=493.26, stdev=19.23, samples=19 00:31:34.299 lat (msec) : 20=0.20%, 50=99.72%, 100=0.08% 00:31:34.299 cpu : usr=98.58%, sys=0.97%, ctx=14, majf=0, minf=22 00:31:34.299 IO depths : 1=4.2%, 2=10.4%, 4=24.6%, 8=52.5%, 16=8.2%, 32=0.0%, >=64=0.0% 00:31:34.299 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:34.299 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:34.299 issued rwts: total=4942,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:34.299 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:34.299 filename0: (groupid=0, jobs=1): err= 0: pid=2525417: Sun Jul 14 04:03:51 2024 00:31:34.299 read: IOPS=499, BW=1996KiB/s (2044kB/s)(19.6MiB/10029msec) 00:31:34.299 slat (usec): min=5, max=162, avg=25.18, stdev=11.69 00:31:34.299 clat (usec): min=5126, max=51288, avg=31860.89, stdev=3549.14 00:31:34.299 lat (usec): min=5144, max=51310, avg=31886.06, stdev=3550.33 00:31:34.299 clat percentiles (usec): 00:31:34.299 | 1.00th=[16909], 5.00th=[29754], 10.00th=[31065], 20.00th=[31327], 00:31:34.299 | 30.00th=[31589], 40.00th=[31851], 50.00th=[31851], 60.00th=[31851], 00:31:34.299 | 70.00th=[32113], 80.00th=[32375], 90.00th=[33817], 95.00th=[35390], 00:31:34.299 | 99.00th=[41157], 99.50th=[44827], 99.90th=[51119], 99.95th=[51119], 00:31:34.299 | 99.99th=[51119] 00:31:34.299 bw ( KiB/s): min= 1792, max= 2176, per=4.22%, avg=1994.55, stdev=85.91, samples=20 00:31:34.299 iops : min= 448, max= 544, avg=498.60, stdev=21.46, samples=20 00:31:34.299 lat (msec) : 10=0.94%, 20=0.62%, 50=98.16%, 100=0.28% 00:31:34.299 cpu : usr=92.71%, sys=3.62%, ctx=182, majf=0, minf=45 00:31:34.299 IO depths : 1=3.5%, 2=9.3%, 4=23.6%, 8=54.6%, 16=9.1%, 32=0.0%, >=64=0.0% 00:31:34.299 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:34.299 complete : 0=0.0%, 4=93.9%, 8=0.4%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:34.299 issued rwts: total=5005,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:34.299 latency : target=0, window=0, percentile=100.00%, depth=16 
00:31:34.299 filename0: (groupid=0, jobs=1): err= 0: pid=2525418: Sun Jul 14 04:03:51 2024 00:31:34.299 read: IOPS=484, BW=1937KiB/s (1984kB/s)(18.9MiB/10002msec) 00:31:34.299 slat (nsec): min=8198, max=87298, avg=24925.83, stdev=13398.32 00:31:34.299 clat (usec): min=10711, max=69491, avg=32865.18, stdev=5744.78 00:31:34.299 lat (usec): min=10767, max=69513, avg=32890.10, stdev=5745.18 00:31:34.299 clat percentiles (usec): 00:31:34.299 | 1.00th=[16909], 5.00th=[23987], 10.00th=[30278], 20.00th=[31327], 00:31:34.299 | 30.00th=[31589], 40.00th=[31851], 50.00th=[31851], 60.00th=[32113], 00:31:34.299 | 70.00th=[32375], 80.00th=[33817], 90.00th=[39584], 95.00th=[44303], 00:31:34.299 | 99.00th=[53740], 99.50th=[60031], 99.90th=[63701], 99.95th=[69731], 00:31:34.299 | 99.99th=[69731] 00:31:34.299 bw ( KiB/s): min= 1760, max= 2048, per=4.08%, avg=1931.79, stdev=76.89, samples=19 00:31:34.299 iops : min= 440, max= 512, avg=482.95, stdev=19.22, samples=19 00:31:34.299 lat (msec) : 20=2.58%, 50=95.54%, 100=1.88% 00:31:34.299 cpu : usr=93.41%, sys=3.20%, ctx=107, majf=0, minf=25 00:31:34.299 IO depths : 1=2.3%, 2=4.7%, 4=12.9%, 8=69.3%, 16=10.8%, 32=0.0%, >=64=0.0% 00:31:34.299 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:34.299 complete : 0=0.0%, 4=90.9%, 8=4.1%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:34.299 issued rwts: total=4844,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:34.299 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:34.299 filename0: (groupid=0, jobs=1): err= 0: pid=2525419: Sun Jul 14 04:03:51 2024 00:31:34.299 read: IOPS=493, BW=1972KiB/s (2020kB/s)(19.3MiB/10006msec) 00:31:34.299 slat (nsec): min=4680, max=93381, avg=30554.51, stdev=14430.27 00:31:34.299 clat (usec): min=14501, max=58140, avg=32220.10, stdev=2622.64 00:31:34.299 lat (usec): min=14527, max=58163, avg=32250.65, stdev=2622.95 00:31:34.299 clat percentiles (usec): 00:31:34.299 | 1.00th=[27657], 5.00th=[30540], 10.00th=[31065], 20.00th=[31327], 00:31:34.299 | 30.00th=[31589], 40.00th=[31851], 50.00th=[31851], 60.00th=[31851], 00:31:34.299 | 70.00th=[32113], 80.00th=[32375], 90.00th=[33424], 95.00th=[35914], 00:31:34.299 | 99.00th=[41157], 99.50th=[51643], 99.90th=[52691], 99.95th=[52691], 00:31:34.299 | 99.99th=[57934] 00:31:34.299 bw ( KiB/s): min= 1792, max= 2048, per=4.15%, avg=1962.95, stdev=88.61, samples=19 00:31:34.299 iops : min= 448, max= 512, avg=490.74, stdev=22.15, samples=19 00:31:34.299 lat (msec) : 20=0.36%, 50=99.11%, 100=0.53% 00:31:34.299 cpu : usr=98.47%, sys=1.12%, ctx=16, majf=0, minf=24 00:31:34.299 IO depths : 1=1.8%, 2=6.6%, 4=19.6%, 8=60.1%, 16=11.9%, 32=0.0%, >=64=0.0% 00:31:34.299 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:34.299 complete : 0=0.0%, 4=93.2%, 8=2.2%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:34.299 issued rwts: total=4934,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:34.299 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:34.299 filename0: (groupid=0, jobs=1): err= 0: pid=2525420: Sun Jul 14 04:03:51 2024 00:31:34.299 read: IOPS=493, BW=1976KiB/s (2023kB/s)(19.3MiB/10009msec) 00:31:34.299 slat (usec): min=8, max=182, avg=38.59, stdev=16.52 00:31:34.299 clat (usec): min=15025, max=59749, avg=32070.89, stdev=2256.69 00:31:34.299 lat (usec): min=15034, max=59787, avg=32109.48, stdev=2258.25 00:31:34.299 clat percentiles (usec): 00:31:34.299 | 1.00th=[28181], 5.00th=[30540], 10.00th=[31065], 20.00th=[31327], 00:31:34.299 | 30.00th=[31589], 40.00th=[31589], 50.00th=[31851], 
60.00th=[31851], 00:31:34.299 | 70.00th=[32113], 80.00th=[32375], 90.00th=[33162], 95.00th=[34866], 00:31:34.299 | 99.00th=[41157], 99.50th=[42730], 99.90th=[50594], 99.95th=[52691], 00:31:34.299 | 99.99th=[59507] 00:31:34.299 bw ( KiB/s): min= 1792, max= 2064, per=4.17%, avg=1973.89, stdev=76.59, samples=19 00:31:34.299 iops : min= 448, max= 516, avg=493.47, stdev=19.15, samples=19 00:31:34.299 lat (msec) : 20=0.36%, 50=99.51%, 100=0.12% 00:31:34.299 cpu : usr=91.11%, sys=4.16%, ctx=236, majf=0, minf=22 00:31:34.299 IO depths : 1=4.2%, 2=10.4%, 4=24.7%, 8=52.4%, 16=8.3%, 32=0.0%, >=64=0.0% 00:31:34.299 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:34.299 complete : 0=0.0%, 4=94.2%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:34.299 issued rwts: total=4944,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:34.299 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:34.299 filename0: (groupid=0, jobs=1): err= 0: pid=2525421: Sun Jul 14 04:03:51 2024 00:31:34.299 read: IOPS=493, BW=1976KiB/s (2023kB/s)(19.3MiB/10009msec) 00:31:34.299 slat (usec): min=8, max=124, avg=35.09, stdev=15.59 00:31:34.299 clat (usec): min=15825, max=53873, avg=32108.64, stdev=1940.99 00:31:34.299 lat (usec): min=15833, max=53910, avg=32143.73, stdev=1941.55 00:31:34.299 clat percentiles (usec): 00:31:34.299 | 1.00th=[29230], 5.00th=[30802], 10.00th=[31065], 20.00th=[31327], 00:31:34.299 | 30.00th=[31589], 40.00th=[31589], 50.00th=[31851], 60.00th=[31851], 00:31:34.299 | 70.00th=[32113], 80.00th=[32375], 90.00th=[33424], 95.00th=[34866], 00:31:34.299 | 99.00th=[40109], 99.50th=[41157], 99.90th=[46400], 99.95th=[50070], 00:31:34.299 | 99.99th=[53740] 00:31:34.299 bw ( KiB/s): min= 1776, max= 2064, per=4.17%, avg=1973.89, stdev=76.96, samples=19 00:31:34.299 iops : min= 444, max= 516, avg=493.47, stdev=19.24, samples=19 00:31:34.299 lat (msec) : 20=0.16%, 50=99.80%, 100=0.04% 00:31:34.299 cpu : usr=98.36%, sys=1.21%, ctx=14, majf=0, minf=20 00:31:34.299 IO depths : 1=3.3%, 2=9.5%, 4=24.9%, 8=53.1%, 16=9.2%, 32=0.0%, >=64=0.0% 00:31:34.299 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:34.299 complete : 0=0.0%, 4=94.3%, 8=0.1%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:34.299 issued rwts: total=4944,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:34.299 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:34.299 filename1: (groupid=0, jobs=1): err= 0: pid=2525422: Sun Jul 14 04:03:51 2024 00:31:34.299 read: IOPS=491, BW=1966KiB/s (2013kB/s)(19.2MiB/10002msec) 00:31:34.299 slat (usec): min=8, max=136, avg=32.05, stdev=24.28 00:31:34.299 clat (usec): min=7413, max=68372, avg=32355.14, stdev=4345.62 00:31:34.299 lat (usec): min=7428, max=68401, avg=32387.20, stdev=4345.00 00:31:34.299 clat percentiles (usec): 00:31:34.299 | 1.00th=[17433], 5.00th=[30278], 10.00th=[31065], 20.00th=[31327], 00:31:34.299 | 30.00th=[31589], 40.00th=[31851], 50.00th=[31851], 60.00th=[32113], 00:31:34.299 | 70.00th=[32113], 80.00th=[32637], 90.00th=[34866], 95.00th=[39584], 00:31:34.299 | 99.00th=[51643], 99.50th=[54264], 99.90th=[62653], 99.95th=[62653], 00:31:34.299 | 99.99th=[68682] 00:31:34.299 bw ( KiB/s): min= 1792, max= 2080, per=4.14%, avg=1955.37, stdev=74.97, samples=19 00:31:34.299 iops : min= 448, max= 520, avg=488.84, stdev=18.74, samples=19 00:31:34.299 lat (msec) : 10=0.24%, 20=1.99%, 50=96.66%, 100=1.10% 00:31:34.299 cpu : usr=98.59%, sys=1.01%, ctx=16, majf=0, minf=29 00:31:34.299 IO depths : 1=0.5%, 2=3.3%, 4=12.8%, 8=68.6%, 16=14.8%, 32=0.0%, 
>=64=0.0% 00:31:34.299 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:34.299 complete : 0=0.0%, 4=91.9%, 8=5.0%, 16=3.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:34.299 issued rwts: total=4916,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:34.299 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:34.299 filename1: (groupid=0, jobs=1): err= 0: pid=2525423: Sun Jul 14 04:03:51 2024 00:31:34.299 read: IOPS=497, BW=1990KiB/s (2038kB/s)(19.4MiB/10002msec) 00:31:34.299 slat (usec): min=5, max=157, avg=34.20, stdev=15.29 00:31:34.299 clat (usec): min=6698, max=41112, avg=31871.67, stdev=2819.70 00:31:34.299 lat (usec): min=6709, max=41180, avg=31905.87, stdev=2820.40 00:31:34.299 clat percentiles (usec): 00:31:34.299 | 1.00th=[24249], 5.00th=[30802], 10.00th=[31065], 20.00th=[31327], 00:31:34.299 | 30.00th=[31589], 40.00th=[31589], 50.00th=[31851], 60.00th=[31851], 00:31:34.299 | 70.00th=[32113], 80.00th=[32375], 90.00th=[33162], 95.00th=[34866], 00:31:34.299 | 99.00th=[40109], 99.50th=[40633], 99.90th=[41157], 99.95th=[41157], 00:31:34.299 | 99.99th=[41157] 00:31:34.299 bw ( KiB/s): min= 1792, max= 2176, per=4.20%, avg=1987.37, stdev=89.18, samples=19 00:31:34.299 iops : min= 448, max= 544, avg=496.84, stdev=22.29, samples=19 00:31:34.299 lat (msec) : 10=0.64%, 20=0.32%, 50=99.04% 00:31:34.299 cpu : usr=94.15%, sys=2.77%, ctx=160, majf=0, minf=30 00:31:34.299 IO depths : 1=5.8%, 2=11.9%, 4=24.5%, 8=51.1%, 16=6.7%, 32=0.0%, >=64=0.0% 00:31:34.299 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:34.299 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:34.299 issued rwts: total=4976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:34.300 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:34.300 filename1: (groupid=0, jobs=1): err= 0: pid=2525424: Sun Jul 14 04:03:51 2024 00:31:34.300 read: IOPS=480, BW=1921KiB/s (1968kB/s)(18.8MiB/10001msec) 00:31:34.300 slat (usec): min=8, max=122, avg=20.75, stdev=10.16 00:31:34.300 clat (usec): min=7518, max=61799, avg=33193.34, stdev=7223.87 00:31:34.300 lat (usec): min=7528, max=61836, avg=33214.09, stdev=7223.28 00:31:34.300 clat percentiles (usec): 00:31:34.300 | 1.00th=[14877], 5.00th=[19792], 10.00th=[27395], 20.00th=[31327], 00:31:34.300 | 30.00th=[31589], 40.00th=[31851], 50.00th=[32113], 60.00th=[32113], 00:31:34.300 | 70.00th=[32900], 80.00th=[35914], 90.00th=[43779], 95.00th=[47449], 00:31:34.300 | 99.00th=[54789], 99.50th=[55837], 99.90th=[57410], 99.95th=[61604], 00:31:34.300 | 99.99th=[61604] 00:31:34.300 bw ( KiB/s): min= 1440, max= 2080, per=4.06%, avg=1920.16, stdev=148.13, samples=19 00:31:34.300 iops : min= 360, max= 520, avg=480.00, stdev=37.05, samples=19 00:31:34.300 lat (msec) : 10=0.71%, 20=5.52%, 50=91.17%, 100=2.60% 00:31:34.300 cpu : usr=92.40%, sys=3.57%, ctx=189, majf=0, minf=29 00:31:34.300 IO depths : 1=0.1%, 2=1.8%, 4=10.5%, 8=72.6%, 16=15.0%, 32=0.0%, >=64=0.0% 00:31:34.300 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:34.300 complete : 0=0.0%, 4=91.3%, 8=5.5%, 16=3.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:34.300 issued rwts: total=4804,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:34.300 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:34.300 filename1: (groupid=0, jobs=1): err= 0: pid=2525425: Sun Jul 14 04:03:51 2024 00:31:34.300 read: IOPS=489, BW=1957KiB/s (2004kB/s)(19.1MiB/10009msec) 00:31:34.300 slat (usec): min=8, max=161, avg=48.54, stdev=28.44 00:31:34.300 clat (usec): min=14698, 
max=58830, avg=32362.56, stdev=3295.86 00:31:34.300 lat (usec): min=14711, max=58886, avg=32411.10, stdev=3298.11 00:31:34.300 clat percentiles (usec): 00:31:34.300 | 1.00th=[23462], 5.00th=[30278], 10.00th=[30802], 20.00th=[31327], 00:31:34.300 | 30.00th=[31589], 40.00th=[31589], 50.00th=[31851], 60.00th=[31851], 00:31:34.300 | 70.00th=[32113], 80.00th=[32375], 90.00th=[34341], 95.00th=[38536], 00:31:34.300 | 99.00th=[45876], 99.50th=[51643], 99.90th=[57410], 99.95th=[58983], 00:31:34.300 | 99.99th=[58983] 00:31:34.300 bw ( KiB/s): min= 1792, max= 2048, per=4.13%, avg=1954.53, stdev=76.04, samples=19 00:31:34.300 iops : min= 448, max= 512, avg=488.63, stdev=19.01, samples=19 00:31:34.300 lat (msec) : 20=0.61%, 50=98.73%, 100=0.65% 00:31:34.300 cpu : usr=98.40%, sys=1.16%, ctx=12, majf=0, minf=17 00:31:34.300 IO depths : 1=2.3%, 2=7.2%, 4=20.2%, 8=59.2%, 16=11.0%, 32=0.0%, >=64=0.0% 00:31:34.300 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:34.300 complete : 0=0.0%, 4=93.2%, 8=1.9%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:34.300 issued rwts: total=4898,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:34.300 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:34.300 filename1: (groupid=0, jobs=1): err= 0: pid=2525426: Sun Jul 14 04:03:51 2024 00:31:34.300 read: IOPS=490, BW=1963KiB/s (2010kB/s)(19.2MiB/10009msec) 00:31:34.300 slat (usec): min=8, max=153, avg=70.70, stdev=31.04 00:31:34.300 clat (usec): min=17075, max=54622, avg=32026.11, stdev=2952.31 00:31:34.300 lat (usec): min=17156, max=54653, avg=32096.81, stdev=2950.22 00:31:34.300 clat percentiles (usec): 00:31:34.300 | 1.00th=[23200], 5.00th=[30278], 10.00th=[30540], 20.00th=[30802], 00:31:34.300 | 30.00th=[31065], 40.00th=[31327], 50.00th=[31589], 60.00th=[31851], 00:31:34.300 | 70.00th=[32113], 80.00th=[32375], 90.00th=[33817], 95.00th=[36439], 00:31:34.300 | 99.00th=[43779], 99.50th=[52691], 99.90th=[54264], 99.95th=[54789], 00:31:34.300 | 99.99th=[54789] 00:31:34.300 bw ( KiB/s): min= 1792, max= 2048, per=4.15%, avg=1960.42, stdev=76.85, samples=19 00:31:34.300 iops : min= 448, max= 512, avg=490.11, stdev=19.21, samples=19 00:31:34.300 lat (msec) : 20=0.59%, 50=98.80%, 100=0.61% 00:31:34.300 cpu : usr=98.51%, sys=1.02%, ctx=12, majf=0, minf=19 00:31:34.300 IO depths : 1=4.6%, 2=10.0%, 4=23.4%, 8=53.9%, 16=8.0%, 32=0.0%, >=64=0.0% 00:31:34.300 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:34.300 complete : 0=0.0%, 4=94.0%, 8=0.3%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:34.300 issued rwts: total=4912,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:34.300 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:34.300 filename1: (groupid=0, jobs=1): err= 0: pid=2525427: Sun Jul 14 04:03:51 2024 00:31:34.300 read: IOPS=494, BW=1976KiB/s (2024kB/s)(19.3MiB/10006msec) 00:31:34.300 slat (usec): min=8, max=431, avg=39.11, stdev=20.28 00:31:34.300 clat (usec): min=14543, max=52120, avg=32024.25, stdev=2348.43 00:31:34.300 lat (usec): min=14591, max=52139, avg=32063.36, stdev=2345.99 00:31:34.300 clat percentiles (usec): 00:31:34.300 | 1.00th=[29230], 5.00th=[30540], 10.00th=[30802], 20.00th=[31327], 00:31:34.300 | 30.00th=[31589], 40.00th=[31589], 50.00th=[31851], 60.00th=[31851], 00:31:34.300 | 70.00th=[32113], 80.00th=[32375], 90.00th=[32900], 95.00th=[34341], 00:31:34.300 | 99.00th=[40633], 99.50th=[44303], 99.90th=[52167], 99.95th=[52167], 00:31:34.300 | 99.99th=[52167] 00:31:34.300 bw ( KiB/s): min= 1792, max= 2048, per=4.16%, avg=1967.16, stdev=87.55, 
samples=19 00:31:34.300 iops : min= 448, max= 512, avg=491.79, stdev=21.89, samples=19 00:31:34.300 lat (msec) : 20=0.53%, 50=99.15%, 100=0.32% 00:31:34.300 cpu : usr=90.79%, sys=4.66%, ctx=1074, majf=0, minf=27 00:31:34.300 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.3%, 32=0.0%, >=64=0.0% 00:31:34.300 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:34.300 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:34.300 issued rwts: total=4944,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:34.300 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:34.300 filename1: (groupid=0, jobs=1): err= 0: pid=2525428: Sun Jul 14 04:03:51 2024 00:31:34.300 read: IOPS=498, BW=1996KiB/s (2044kB/s)(19.5MiB/10009msec) 00:31:34.300 slat (usec): min=8, max=162, avg=16.41, stdev=12.49 00:31:34.300 clat (usec): min=7686, max=57269, avg=31934.33, stdev=5249.66 00:31:34.300 lat (usec): min=7695, max=57279, avg=31950.74, stdev=5249.89 00:31:34.300 clat percentiles (usec): 00:31:34.300 | 1.00th=[16319], 5.00th=[20055], 10.00th=[30016], 20.00th=[31327], 00:31:34.300 | 30.00th=[31589], 40.00th=[31851], 50.00th=[31851], 60.00th=[32113], 00:31:34.300 | 70.00th=[32113], 80.00th=[32637], 90.00th=[35390], 95.00th=[43254], 00:31:34.300 | 99.00th=[47449], 99.50th=[50070], 99.90th=[57410], 99.95th=[57410], 00:31:34.300 | 99.99th=[57410] 00:31:34.300 bw ( KiB/s): min= 1792, max= 2400, per=4.22%, avg=1994.95, stdev=120.60, samples=19 00:31:34.300 iops : min= 448, max= 600, avg=498.74, stdev=30.15, samples=19 00:31:34.300 lat (msec) : 10=0.12%, 20=5.25%, 50=94.05%, 100=0.58% 00:31:34.300 cpu : usr=98.55%, sys=1.07%, ctx=14, majf=0, minf=36 00:31:34.300 IO depths : 1=4.0%, 2=9.2%, 4=20.8%, 8=57.4%, 16=8.6%, 32=0.0%, >=64=0.0% 00:31:34.300 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:34.300 complete : 0=0.0%, 4=93.2%, 8=1.1%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:34.300 issued rwts: total=4994,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:34.300 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:34.300 filename1: (groupid=0, jobs=1): err= 0: pid=2525429: Sun Jul 14 04:03:51 2024 00:31:34.300 read: IOPS=492, BW=1971KiB/s (2018kB/s)(19.3MiB/10015msec) 00:31:34.300 slat (nsec): min=8113, max=90195, avg=28785.61, stdev=13454.65 00:31:34.300 clat (usec): min=14731, max=61051, avg=32240.20, stdev=3727.93 00:31:34.300 lat (usec): min=14760, max=61089, avg=32268.99, stdev=3727.16 00:31:34.300 clat percentiles (usec): 00:31:34.300 | 1.00th=[18220], 5.00th=[30540], 10.00th=[31065], 20.00th=[31327], 00:31:34.300 | 30.00th=[31589], 40.00th=[31851], 50.00th=[31851], 60.00th=[32113], 00:31:34.300 | 70.00th=[32113], 80.00th=[32375], 90.00th=[34341], 95.00th=[36963], 00:31:34.300 | 99.00th=[46924], 99.50th=[53216], 99.90th=[61080], 99.95th=[61080], 00:31:34.300 | 99.99th=[61080] 00:31:34.300 bw ( KiB/s): min= 1792, max= 2112, per=4.15%, avg=1962.95, stdev=90.36, samples=19 00:31:34.300 iops : min= 448, max= 528, avg=490.74, stdev=22.59, samples=19 00:31:34.300 lat (msec) : 20=1.72%, 50=97.75%, 100=0.53% 00:31:34.300 cpu : usr=98.65%, sys=0.96%, ctx=13, majf=0, minf=18 00:31:34.300 IO depths : 1=3.9%, 2=9.6%, 4=23.2%, 8=54.5%, 16=8.8%, 32=0.0%, >=64=0.0% 00:31:34.300 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:34.300 complete : 0=0.0%, 4=93.8%, 8=0.6%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:34.300 issued rwts: total=4934,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:34.300 latency : target=0, 
window=0, percentile=100.00%, depth=16 00:31:34.300 filename2: (groupid=0, jobs=1): err= 0: pid=2525430: Sun Jul 14 04:03:51 2024 00:31:34.300 read: IOPS=494, BW=1980KiB/s (2027kB/s)(19.3MiB/10002msec) 00:31:34.300 slat (nsec): min=8218, max=96730, avg=34426.23, stdev=14126.46 00:31:34.300 clat (usec): min=10293, max=55887, avg=32067.70, stdev=2839.92 00:31:34.300 lat (usec): min=10302, max=55899, avg=32102.13, stdev=2840.57 00:31:34.300 clat percentiles (usec): 00:31:34.300 | 1.00th=[21890], 5.00th=[30540], 10.00th=[31065], 20.00th=[31327], 00:31:34.300 | 30.00th=[31589], 40.00th=[31589], 50.00th=[31851], 60.00th=[31851], 00:31:34.300 | 70.00th=[32113], 80.00th=[32375], 90.00th=[33817], 95.00th=[35390], 00:31:34.300 | 99.00th=[42206], 99.50th=[47449], 99.90th=[52167], 99.95th=[52167], 00:31:34.300 | 99.99th=[55837] 00:31:34.300 bw ( KiB/s): min= 1795, max= 2048, per=4.18%, avg=1975.74, stdev=71.80, samples=19 00:31:34.300 iops : min= 448, max= 512, avg=493.89, stdev=18.06, samples=19 00:31:34.300 lat (msec) : 20=0.85%, 50=98.99%, 100=0.16% 00:31:34.300 cpu : usr=97.06%, sys=1.85%, ctx=301, majf=0, minf=25 00:31:34.300 IO depths : 1=0.4%, 2=6.0%, 4=23.3%, 8=57.9%, 16=12.5%, 32=0.0%, >=64=0.0% 00:31:34.300 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:34.300 complete : 0=0.0%, 4=94.1%, 8=0.4%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:34.300 issued rwts: total=4950,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:34.300 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:34.300 filename2: (groupid=0, jobs=1): err= 0: pid=2525431: Sun Jul 14 04:03:51 2024 00:31:34.300 read: IOPS=493, BW=1976KiB/s (2023kB/s)(19.3MiB/10010msec) 00:31:34.300 slat (usec): min=8, max=146, avg=38.36, stdev=17.57 00:31:34.300 clat (usec): min=14895, max=56498, avg=32091.79, stdev=2354.21 00:31:34.300 lat (usec): min=14928, max=56534, avg=32130.15, stdev=2355.22 00:31:34.300 clat percentiles (usec): 00:31:34.300 | 1.00th=[28181], 5.00th=[30802], 10.00th=[31065], 20.00th=[31327], 00:31:34.300 | 30.00th=[31589], 40.00th=[31589], 50.00th=[31851], 60.00th=[31851], 00:31:34.300 | 70.00th=[32113], 80.00th=[32375], 90.00th=[32900], 95.00th=[34866], 00:31:34.300 | 99.00th=[40109], 99.50th=[41157], 99.90th=[56361], 99.95th=[56361], 00:31:34.300 | 99.99th=[56361] 00:31:34.300 bw ( KiB/s): min= 1792, max= 2048, per=4.16%, avg=1967.16, stdev=86.41, samples=19 00:31:34.300 iops : min= 448, max= 512, avg=491.79, stdev=21.60, samples=19 00:31:34.300 lat (msec) : 20=0.36%, 50=99.31%, 100=0.32% 00:31:34.300 cpu : usr=98.66%, sys=0.94%, ctx=14, majf=0, minf=27 00:31:34.300 IO depths : 1=1.7%, 2=7.9%, 4=24.9%, 8=54.7%, 16=10.9%, 32=0.0%, >=64=0.0% 00:31:34.300 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:34.300 complete : 0=0.0%, 4=94.3%, 8=0.1%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:34.300 issued rwts: total=4944,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:34.300 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:34.301 filename2: (groupid=0, jobs=1): err= 0: pid=2525432: Sun Jul 14 04:03:51 2024 00:31:34.301 read: IOPS=493, BW=1976KiB/s (2023kB/s)(19.3MiB/10009msec) 00:31:34.301 slat (usec): min=8, max=117, avg=34.21, stdev=15.31 00:31:34.301 clat (usec): min=23570, max=41119, avg=32096.85, stdev=1859.80 00:31:34.301 lat (usec): min=23582, max=41128, avg=32131.05, stdev=1860.64 00:31:34.301 clat percentiles (usec): 00:31:34.301 | 1.00th=[28443], 5.00th=[30802], 10.00th=[31065], 20.00th=[31327], 00:31:34.301 | 30.00th=[31589], 40.00th=[31589], 
50.00th=[31851], 60.00th=[31851], 00:31:34.301 | 70.00th=[32113], 80.00th=[32375], 90.00th=[33817], 95.00th=[35390], 00:31:34.301 | 99.00th=[40109], 99.50th=[40633], 99.90th=[41157], 99.95th=[41157], 00:31:34.301 | 99.99th=[41157] 00:31:34.301 bw ( KiB/s): min= 1792, max= 2048, per=4.17%, avg=1973.89, stdev=77.69, samples=19 00:31:34.301 iops : min= 448, max= 512, avg=493.47, stdev=19.42, samples=19 00:31:34.301 lat (msec) : 50=100.00% 00:31:34.301 cpu : usr=98.61%, sys=0.96%, ctx=28, majf=0, minf=23 00:31:34.301 IO depths : 1=5.0%, 2=10.9%, 4=23.5%, 8=53.1%, 16=7.5%, 32=0.0%, >=64=0.0% 00:31:34.301 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:34.301 complete : 0=0.0%, 4=93.8%, 8=0.3%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:34.301 issued rwts: total=4944,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:34.301 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:34.301 filename2: (groupid=0, jobs=1): err= 0: pid=2525433: Sun Jul 14 04:03:51 2024 00:31:34.301 read: IOPS=493, BW=1976KiB/s (2023kB/s)(19.3MiB/10009msec) 00:31:34.301 slat (usec): min=8, max=164, avg=35.57, stdev=15.23 00:31:34.301 clat (usec): min=18973, max=46391, avg=32083.59, stdev=1865.89 00:31:34.301 lat (usec): min=18995, max=46411, avg=32119.17, stdev=1867.74 00:31:34.301 clat percentiles (usec): 00:31:34.301 | 1.00th=[27919], 5.00th=[30802], 10.00th=[31065], 20.00th=[31327], 00:31:34.301 | 30.00th=[31589], 40.00th=[31589], 50.00th=[31851], 60.00th=[31851], 00:31:34.301 | 70.00th=[32113], 80.00th=[32375], 90.00th=[33162], 95.00th=[35390], 00:31:34.301 | 99.00th=[40109], 99.50th=[40633], 99.90th=[45351], 99.95th=[45351], 00:31:34.301 | 99.99th=[46400] 00:31:34.301 bw ( KiB/s): min= 1792, max= 2048, per=4.17%, avg=1973.89, stdev=77.69, samples=19 00:31:34.301 iops : min= 448, max= 512, avg=493.47, stdev=19.42, samples=19 00:31:34.301 lat (msec) : 20=0.12%, 50=99.88% 00:31:34.301 cpu : usr=98.39%, sys=1.21%, ctx=18, majf=0, minf=23 00:31:34.301 IO depths : 1=4.7%, 2=10.6%, 4=23.9%, 8=52.9%, 16=7.9%, 32=0.0%, >=64=0.0% 00:31:34.301 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:34.301 complete : 0=0.0%, 4=93.9%, 8=0.4%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:34.301 issued rwts: total=4944,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:34.301 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:34.301 filename2: (groupid=0, jobs=1): err= 0: pid=2525434: Sun Jul 14 04:03:51 2024 00:31:34.301 read: IOPS=491, BW=1967KiB/s (2014kB/s)(19.2MiB/10003msec) 00:31:34.301 slat (usec): min=8, max=736, avg=42.11, stdev=29.42 00:31:34.301 clat (usec): min=9791, max=62684, avg=32263.32, stdev=3606.00 00:31:34.301 lat (usec): min=9800, max=62703, avg=32305.43, stdev=3604.03 00:31:34.301 clat percentiles (usec): 00:31:34.301 | 1.00th=[20579], 5.00th=[30016], 10.00th=[30802], 20.00th=[31327], 00:31:34.301 | 30.00th=[31589], 40.00th=[31589], 50.00th=[31851], 60.00th=[31851], 00:31:34.301 | 70.00th=[32113], 80.00th=[32375], 90.00th=[34341], 95.00th=[39584], 00:31:34.301 | 99.00th=[47973], 99.50th=[51119], 99.90th=[56361], 99.95th=[62653], 00:31:34.301 | 99.99th=[62653] 00:31:34.301 bw ( KiB/s): min= 1792, max= 2048, per=4.15%, avg=1962.11, stdev=83.86, samples=19 00:31:34.301 iops : min= 448, max= 512, avg=490.53, stdev=20.96, samples=19 00:31:34.301 lat (msec) : 10=0.12%, 20=0.77%, 50=98.58%, 100=0.53% 00:31:34.301 cpu : usr=95.16%, sys=2.54%, ctx=153, majf=0, minf=14 00:31:34.301 IO depths : 1=0.3%, 2=4.8%, 4=19.1%, 8=62.4%, 16=13.4%, 32=0.0%, >=64=0.0% 
00:31:34.301 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:34.301 complete : 0=0.0%, 4=93.2%, 8=2.2%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:34.301 issued rwts: total=4918,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:34.301 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:34.301 filename2: (groupid=0, jobs=1): err= 0: pid=2525435: Sun Jul 14 04:03:51 2024 00:31:34.301 read: IOPS=494, BW=1979KiB/s (2026kB/s)(19.3MiB/10002msec) 00:31:34.301 slat (usec): min=8, max=128, avg=25.45, stdev=20.10 00:31:34.301 clat (usec): min=14200, max=56840, avg=32196.89, stdev=3146.74 00:31:34.301 lat (usec): min=14244, max=56875, avg=32222.34, stdev=3146.23 00:31:34.301 clat percentiles (usec): 00:31:34.301 | 1.00th=[21365], 5.00th=[30278], 10.00th=[31065], 20.00th=[31327], 00:31:34.301 | 30.00th=[31589], 40.00th=[31851], 50.00th=[31851], 60.00th=[32113], 00:31:34.301 | 70.00th=[32113], 80.00th=[32637], 90.00th=[33817], 95.00th=[36439], 00:31:34.301 | 99.00th=[44827], 99.50th=[47449], 99.90th=[53216], 99.95th=[56361], 00:31:34.301 | 99.99th=[56886] 00:31:34.301 bw ( KiB/s): min= 1795, max= 2112, per=4.17%, avg=1971.53, stdev=69.64, samples=19 00:31:34.301 iops : min= 448, max= 528, avg=492.84, stdev=17.52, samples=19 00:31:34.301 lat (msec) : 20=0.77%, 50=99.09%, 100=0.14% 00:31:34.301 cpu : usr=98.07%, sys=1.40%, ctx=33, majf=0, minf=25 00:31:34.301 IO depths : 1=0.2%, 2=2.4%, 4=9.6%, 8=71.8%, 16=15.9%, 32=0.0%, >=64=0.0% 00:31:34.301 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:34.301 complete : 0=0.0%, 4=91.2%, 8=6.6%, 16=2.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:34.301 issued rwts: total=4948,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:34.301 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:34.301 filename2: (groupid=0, jobs=1): err= 0: pid=2525436: Sun Jul 14 04:03:51 2024 00:31:34.301 read: IOPS=502, BW=2008KiB/s (2057kB/s)(19.6MiB/10002msec) 00:31:34.301 slat (usec): min=7, max=520, avg=28.00, stdev=20.16 00:31:34.301 clat (usec): min=5294, max=54781, avg=31632.57, stdev=3922.33 00:31:34.301 lat (usec): min=5303, max=54806, avg=31660.57, stdev=3922.53 00:31:34.301 clat percentiles (usec): 00:31:34.301 | 1.00th=[ 8717], 5.00th=[30016], 10.00th=[30802], 20.00th=[31327], 00:31:34.301 | 30.00th=[31589], 40.00th=[31589], 50.00th=[31851], 60.00th=[31851], 00:31:34.301 | 70.00th=[32113], 80.00th=[32375], 90.00th=[33162], 95.00th=[34866], 00:31:34.301 | 99.00th=[41157], 99.50th=[45351], 99.90th=[49546], 99.95th=[49546], 00:31:34.301 | 99.99th=[54789] 00:31:34.301 bw ( KiB/s): min= 1848, max= 2360, per=4.24%, avg=2006.74, stdev=108.92, samples=19 00:31:34.301 iops : min= 462, max= 590, avg=501.68, stdev=27.23, samples=19 00:31:34.301 lat (msec) : 10=1.33%, 20=1.51%, 50=97.11%, 100=0.04% 00:31:34.301 cpu : usr=90.65%, sys=4.49%, ctx=154, majf=0, minf=47 00:31:34.301 IO depths : 1=3.9%, 2=8.5%, 4=21.8%, 8=57.1%, 16=8.6%, 32=0.0%, >=64=0.0% 00:31:34.301 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:34.301 complete : 0=0.0%, 4=93.6%, 8=0.7%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:34.301 issued rwts: total=5022,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:34.301 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:34.301 filename2: (groupid=0, jobs=1): err= 0: pid=2525437: Sun Jul 14 04:03:51 2024 00:31:34.301 read: IOPS=496, BW=1986KiB/s (2033kB/s)(19.4MiB/10024msec) 00:31:34.301 slat (usec): min=5, max=180, avg=39.64, stdev=20.08 00:31:34.301 clat (usec): min=6032, max=49575, 
avg=31924.52, stdev=2628.07 00:31:34.301 lat (usec): min=6050, max=49627, avg=31964.16, stdev=2630.35 00:31:34.301 clat percentiles (usec): 00:31:34.301 | 1.00th=[25822], 5.00th=[30802], 10.00th=[31065], 20.00th=[31327], 00:31:34.301 | 30.00th=[31589], 40.00th=[31589], 50.00th=[31851], 60.00th=[31851], 00:31:34.301 | 70.00th=[32113], 80.00th=[32375], 90.00th=[33162], 95.00th=[34866], 00:31:34.301 | 99.00th=[40109], 99.50th=[40633], 99.90th=[41157], 99.95th=[45876], 00:31:34.301 | 99.99th=[49546] 00:31:34.301 bw ( KiB/s): min= 1792, max= 2064, per=4.20%, avg=1984.00, stdev=75.40, samples=20 00:31:34.301 iops : min= 448, max= 516, avg=496.00, stdev=18.85, samples=20 00:31:34.301 lat (msec) : 10=0.32%, 20=0.36%, 50=99.32% 00:31:34.301 cpu : usr=98.48%, sys=1.10%, ctx=13, majf=0, minf=19 00:31:34.301 IO depths : 1=3.7%, 2=9.8%, 4=24.7%, 8=52.9%, 16=8.8%, 32=0.0%, >=64=0.0% 00:31:34.301 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:34.301 complete : 0=0.0%, 4=94.2%, 8=0.1%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:34.301 issued rwts: total=4976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:34.301 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:34.301 00:31:34.301 Run status group 0 (all jobs): 00:31:34.301 READ: bw=46.2MiB/s (48.4MB/s), 1921KiB/s-2008KiB/s (1968kB/s-2057kB/s), io=463MiB (485MB), run=10001-10029msec 00:31:34.301 04:03:51 -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:31:34.301 04:03:51 -- target/dif.sh@43 -- # local sub 00:31:34.301 04:03:51 -- target/dif.sh@45 -- # for sub in "$@" 00:31:34.301 04:03:51 -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:34.301 04:03:51 -- target/dif.sh@36 -- # local sub_id=0 00:31:34.301 04:03:51 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:34.301 04:03:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:34.301 04:03:51 -- common/autotest_common.sh@10 -- # set +x 00:31:34.301 04:03:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:34.301 04:03:51 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:34.301 04:03:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:34.301 04:03:51 -- common/autotest_common.sh@10 -- # set +x 00:31:34.301 04:03:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:34.301 04:03:51 -- target/dif.sh@45 -- # for sub in "$@" 00:31:34.301 04:03:51 -- target/dif.sh@46 -- # destroy_subsystem 1 00:31:34.301 04:03:51 -- target/dif.sh@36 -- # local sub_id=1 00:31:34.301 04:03:51 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:34.301 04:03:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:34.301 04:03:51 -- common/autotest_common.sh@10 -- # set +x 00:31:34.301 04:03:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:34.301 04:03:51 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:31:34.301 04:03:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:34.301 04:03:51 -- common/autotest_common.sh@10 -- # set +x 00:31:34.301 04:03:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:34.301 04:03:51 -- target/dif.sh@45 -- # for sub in "$@" 00:31:34.301 04:03:51 -- target/dif.sh@46 -- # destroy_subsystem 2 00:31:34.301 04:03:51 -- target/dif.sh@36 -- # local sub_id=2 00:31:34.301 04:03:51 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:31:34.301 04:03:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:34.301 04:03:51 -- common/autotest_common.sh@10 -- # 
set +x 00:31:34.301 04:03:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:34.301 04:03:51 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:31:34.301 04:03:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:34.301 04:03:51 -- common/autotest_common.sh@10 -- # set +x 00:31:34.301 04:03:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:34.301 04:03:51 -- target/dif.sh@115 -- # NULL_DIF=1 00:31:34.301 04:03:51 -- target/dif.sh@115 -- # bs=8k,16k,128k 00:31:34.301 04:03:51 -- target/dif.sh@115 -- # numjobs=2 00:31:34.301 04:03:51 -- target/dif.sh@115 -- # iodepth=8 00:31:34.301 04:03:51 -- target/dif.sh@115 -- # runtime=5 00:31:34.301 04:03:51 -- target/dif.sh@115 -- # files=1 00:31:34.302 04:03:51 -- target/dif.sh@117 -- # create_subsystems 0 1 00:31:34.302 04:03:51 -- target/dif.sh@28 -- # local sub 00:31:34.302 04:03:51 -- target/dif.sh@30 -- # for sub in "$@" 00:31:34.302 04:03:51 -- target/dif.sh@31 -- # create_subsystem 0 00:31:34.302 04:03:51 -- target/dif.sh@18 -- # local sub_id=0 00:31:34.302 04:03:51 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:31:34.302 04:03:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:34.302 04:03:51 -- common/autotest_common.sh@10 -- # set +x 00:31:34.302 bdev_null0 00:31:34.302 04:03:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:34.302 04:03:51 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:34.302 04:03:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:34.302 04:03:51 -- common/autotest_common.sh@10 -- # set +x 00:31:34.302 04:03:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:34.302 04:03:51 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:34.302 04:03:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:34.302 04:03:51 -- common/autotest_common.sh@10 -- # set +x 00:31:34.302 04:03:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:34.302 04:03:51 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:34.302 04:03:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:34.302 04:03:51 -- common/autotest_common.sh@10 -- # set +x 00:31:34.302 [2024-07-14 04:03:51.523049] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:34.302 04:03:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:34.302 04:03:51 -- target/dif.sh@30 -- # for sub in "$@" 00:31:34.302 04:03:51 -- target/dif.sh@31 -- # create_subsystem 1 00:31:34.302 04:03:51 -- target/dif.sh@18 -- # local sub_id=1 00:31:34.302 04:03:51 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:31:34.302 04:03:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:34.302 04:03:51 -- common/autotest_common.sh@10 -- # set +x 00:31:34.302 bdev_null1 00:31:34.302 04:03:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:34.302 04:03:51 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:31:34.302 04:03:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:34.302 04:03:51 -- common/autotest_common.sh@10 -- # set +x 00:31:34.302 04:03:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:34.302 04:03:51 -- target/dif.sh@23 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:31:34.302 04:03:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:34.302 04:03:51 -- common/autotest_common.sh@10 -- # set +x 00:31:34.302 04:03:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:34.302 04:03:51 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:34.302 04:03:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:34.302 04:03:51 -- common/autotest_common.sh@10 -- # set +x 00:31:34.302 04:03:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:34.302 04:03:51 -- target/dif.sh@118 -- # fio /dev/fd/62 00:31:34.302 04:03:51 -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:31:34.302 04:03:51 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:31:34.302 04:03:51 -- nvmf/common.sh@520 -- # config=() 00:31:34.302 04:03:51 -- nvmf/common.sh@520 -- # local subsystem config 00:31:34.302 04:03:51 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:31:34.302 04:03:51 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:34.302 04:03:51 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:31:34.302 { 00:31:34.302 "params": { 00:31:34.302 "name": "Nvme$subsystem", 00:31:34.302 "trtype": "$TEST_TRANSPORT", 00:31:34.302 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:34.302 "adrfam": "ipv4", 00:31:34.302 "trsvcid": "$NVMF_PORT", 00:31:34.302 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:34.302 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:34.302 "hdgst": ${hdgst:-false}, 00:31:34.302 "ddgst": ${ddgst:-false} 00:31:34.302 }, 00:31:34.302 "method": "bdev_nvme_attach_controller" 00:31:34.302 } 00:31:34.302 EOF 00:31:34.302 )") 00:31:34.302 04:03:51 -- common/autotest_common.sh@1335 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:34.302 04:03:51 -- target/dif.sh@82 -- # gen_fio_conf 00:31:34.302 04:03:51 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:31:34.302 04:03:51 -- target/dif.sh@54 -- # local file 00:31:34.302 04:03:51 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:34.302 04:03:51 -- target/dif.sh@56 -- # cat 00:31:34.302 04:03:51 -- common/autotest_common.sh@1318 -- # local sanitizers 00:31:34.302 04:03:51 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:34.302 04:03:51 -- common/autotest_common.sh@1320 -- # shift 00:31:34.302 04:03:51 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:31:34.302 04:03:51 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:31:34.302 04:03:51 -- nvmf/common.sh@542 -- # cat 00:31:34.302 04:03:51 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:34.302 04:03:51 -- common/autotest_common.sh@1324 -- # grep libasan 00:31:34.302 04:03:51 -- target/dif.sh@72 -- # (( file = 1 )) 00:31:34.302 04:03:51 -- target/dif.sh@72 -- # (( file <= files )) 00:31:34.302 04:03:51 -- target/dif.sh@73 -- # cat 00:31:34.302 04:03:51 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:31:34.302 04:03:51 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:31:34.302 04:03:51 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:31:34.302 { 00:31:34.302 "params": { 00:31:34.302 "name": 
"Nvme$subsystem", 00:31:34.302 "trtype": "$TEST_TRANSPORT", 00:31:34.302 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:34.302 "adrfam": "ipv4", 00:31:34.302 "trsvcid": "$NVMF_PORT", 00:31:34.302 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:34.302 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:34.302 "hdgst": ${hdgst:-false}, 00:31:34.302 "ddgst": ${ddgst:-false} 00:31:34.302 }, 00:31:34.302 "method": "bdev_nvme_attach_controller" 00:31:34.302 } 00:31:34.302 EOF 00:31:34.302 )") 00:31:34.302 04:03:51 -- nvmf/common.sh@542 -- # cat 00:31:34.302 04:03:51 -- target/dif.sh@72 -- # (( file++ )) 00:31:34.302 04:03:51 -- target/dif.sh@72 -- # (( file <= files )) 00:31:34.302 04:03:51 -- nvmf/common.sh@544 -- # jq . 00:31:34.302 04:03:51 -- nvmf/common.sh@545 -- # IFS=, 00:31:34.302 04:03:51 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:31:34.302 "params": { 00:31:34.302 "name": "Nvme0", 00:31:34.302 "trtype": "tcp", 00:31:34.302 "traddr": "10.0.0.2", 00:31:34.302 "adrfam": "ipv4", 00:31:34.302 "trsvcid": "4420", 00:31:34.302 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:34.302 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:34.302 "hdgst": false, 00:31:34.302 "ddgst": false 00:31:34.302 }, 00:31:34.302 "method": "bdev_nvme_attach_controller" 00:31:34.302 },{ 00:31:34.302 "params": { 00:31:34.302 "name": "Nvme1", 00:31:34.302 "trtype": "tcp", 00:31:34.302 "traddr": "10.0.0.2", 00:31:34.302 "adrfam": "ipv4", 00:31:34.302 "trsvcid": "4420", 00:31:34.302 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:34.302 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:34.302 "hdgst": false, 00:31:34.302 "ddgst": false 00:31:34.302 }, 00:31:34.302 "method": "bdev_nvme_attach_controller" 00:31:34.302 }' 00:31:34.302 04:03:51 -- common/autotest_common.sh@1324 -- # asan_lib= 00:31:34.302 04:03:51 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:31:34.302 04:03:51 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:31:34.302 04:03:51 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:34.302 04:03:51 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:31:34.302 04:03:51 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:31:34.302 04:03:51 -- common/autotest_common.sh@1324 -- # asan_lib= 00:31:34.302 04:03:51 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:31:34.302 04:03:51 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:34.302 04:03:51 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:34.302 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:31:34.302 ... 00:31:34.302 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:31:34.302 ... 00:31:34.302 fio-3.35 00:31:34.302 Starting 4 threads 00:31:34.302 EAL: No free 2048 kB hugepages reported on node 1 00:31:34.302 [2024-07-14 04:03:52.532039] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:31:34.302 [2024-07-14 04:03:52.532091] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:31:39.560 00:31:39.560 filename0: (groupid=0, jobs=1): err= 0: pid=2526855: Sun Jul 14 04:03:57 2024 00:31:39.560 read: IOPS=1885, BW=14.7MiB/s (15.4MB/s)(73.7MiB/5003msec) 00:31:39.560 slat (nsec): min=7187, max=54273, avg=12420.83, stdev=5554.28 00:31:39.560 clat (usec): min=1961, max=44843, avg=4203.59, stdev=1295.94 00:31:39.560 lat (usec): min=1975, max=44854, avg=4216.01, stdev=1295.87 00:31:39.560 clat percentiles (usec): 00:31:39.560 | 1.00th=[ 3130], 5.00th=[ 3556], 10.00th=[ 3720], 20.00th=[ 3884], 00:31:39.560 | 30.00th=[ 3982], 40.00th=[ 4047], 50.00th=[ 4080], 60.00th=[ 4146], 00:31:39.560 | 70.00th=[ 4228], 80.00th=[ 4293], 90.00th=[ 4752], 95.00th=[ 5276], 00:31:39.560 | 99.00th=[ 6325], 99.50th=[ 6652], 99.90th=[ 7504], 99.95th=[44827], 00:31:39.560 | 99.99th=[44827] 00:31:39.560 bw ( KiB/s): min=13867, max=15872, per=24.88%, avg=15085.90, stdev=565.90, samples=10 00:31:39.560 iops : min= 1733, max= 1984, avg=1885.70, stdev=70.83, samples=10 00:31:39.560 lat (msec) : 2=0.01%, 4=33.44%, 10=66.47%, 50=0.08% 00:31:39.560 cpu : usr=93.52%, sys=5.98%, ctx=7, majf=0, minf=0 00:31:39.560 IO depths : 1=0.2%, 2=1.8%, 4=71.3%, 8=26.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:39.560 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:39.560 complete : 0=0.0%, 4=91.7%, 8=8.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:39.560 issued rwts: total=9435,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:39.560 latency : target=0, window=0, percentile=100.00%, depth=8 00:31:39.560 filename0: (groupid=0, jobs=1): err= 0: pid=2526856: Sun Jul 14 04:03:57 2024 00:31:39.560 read: IOPS=1895, BW=14.8MiB/s (15.5MB/s)(74.1MiB/5002msec) 00:31:39.560 slat (nsec): min=7109, max=61329, avg=12012.30, stdev=5825.64 00:31:39.560 clat (usec): min=1771, max=8435, avg=4184.02, stdev=626.88 00:31:39.560 lat (usec): min=1779, max=8443, avg=4196.03, stdev=626.38 00:31:39.560 clat percentiles (usec): 00:31:39.560 | 1.00th=[ 2999], 5.00th=[ 3490], 10.00th=[ 3687], 20.00th=[ 3851], 00:31:39.560 | 30.00th=[ 3949], 40.00th=[ 4015], 50.00th=[ 4080], 60.00th=[ 4146], 00:31:39.560 | 70.00th=[ 4228], 80.00th=[ 4293], 90.00th=[ 4817], 95.00th=[ 5735], 00:31:39.560 | 99.00th=[ 6456], 99.50th=[ 6718], 99.90th=[ 7635], 99.95th=[ 7963], 00:31:39.560 | 99.99th=[ 8455] 00:31:39.560 bw ( KiB/s): min=14464, max=15904, per=25.00%, avg=15155.20, stdev=514.98, samples=10 00:31:39.560 iops : min= 1808, max= 1988, avg=1894.40, stdev=64.37, samples=10 00:31:39.560 lat (msec) : 2=0.06%, 4=35.20%, 10=64.74% 00:31:39.560 cpu : usr=93.68%, sys=5.84%, ctx=10, majf=0, minf=9 00:31:39.560 IO depths : 1=0.2%, 2=1.7%, 4=71.7%, 8=26.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:39.560 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:39.560 complete : 0=0.0%, 4=91.7%, 8=8.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:39.560 issued rwts: total=9480,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:39.560 latency : target=0, window=0, percentile=100.00%, depth=8 00:31:39.560 filename1: (groupid=0, jobs=1): err= 0: pid=2526857: Sun Jul 14 04:03:57 2024 00:31:39.560 read: IOPS=1915, BW=15.0MiB/s (15.7MB/s)(74.8MiB/5001msec) 00:31:39.560 slat (nsec): min=7403, max=67432, avg=15099.40, stdev=7815.46 00:31:39.560 clat (usec): min=1610, max=7737, avg=4127.10, stdev=480.99 00:31:39.560 lat (usec): min=1624, max=7751, avg=4142.20, stdev=481.56 00:31:39.560 clat percentiles (usec): 00:31:39.560 | 
1.00th=[ 2933], 5.00th=[ 3490], 10.00th=[ 3720], 20.00th=[ 3851], 00:31:39.560 | 30.00th=[ 3949], 40.00th=[ 4015], 50.00th=[ 4080], 60.00th=[ 4146], 00:31:39.560 | 70.00th=[ 4228], 80.00th=[ 4293], 90.00th=[ 4621], 95.00th=[ 5014], 00:31:39.560 | 99.00th=[ 5932], 99.50th=[ 6194], 99.90th=[ 7111], 99.95th=[ 7570], 00:31:39.560 | 99.99th=[ 7767] 00:31:39.560 bw ( KiB/s): min=14544, max=15776, per=25.27%, avg=15318.30, stdev=363.36, samples=10 00:31:39.560 iops : min= 1818, max= 1972, avg=1914.70, stdev=45.32, samples=10 00:31:39.560 lat (msec) : 2=0.07%, 4=35.53%, 10=64.40% 00:31:39.560 cpu : usr=94.48%, sys=4.96%, ctx=9, majf=0, minf=9 00:31:39.560 IO depths : 1=1.2%, 2=5.6%, 4=67.0%, 8=26.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:39.560 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:39.560 complete : 0=0.0%, 4=92.0%, 8=8.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:39.560 issued rwts: total=9578,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:39.560 latency : target=0, window=0, percentile=100.00%, depth=8 00:31:39.560 filename1: (groupid=0, jobs=1): err= 0: pid=2526858: Sun Jul 14 04:03:57 2024 00:31:39.560 read: IOPS=1883, BW=14.7MiB/s (15.4MB/s)(73.6MiB/5004msec) 00:31:39.560 slat (usec): min=3, max=286, avg=11.85, stdev= 6.05 00:31:39.560 clat (usec): min=1101, max=7256, avg=4209.69, stdev=662.77 00:31:39.560 lat (usec): min=1128, max=7264, avg=4221.54, stdev=662.41 00:31:39.560 clat percentiles (usec): 00:31:39.560 | 1.00th=[ 2966], 5.00th=[ 3425], 10.00th=[ 3654], 20.00th=[ 3884], 00:31:39.560 | 30.00th=[ 3949], 40.00th=[ 4015], 50.00th=[ 4080], 60.00th=[ 4146], 00:31:39.560 | 70.00th=[ 4228], 80.00th=[ 4359], 90.00th=[ 5080], 95.00th=[ 5604], 00:31:39.560 | 99.00th=[ 6718], 99.50th=[ 6915], 99.90th=[ 7111], 99.95th=[ 7111], 00:31:39.560 | 99.99th=[ 7242] 00:31:39.560 bw ( KiB/s): min=14320, max=15600, per=24.86%, avg=15072.00, stdev=400.46, samples=10 00:31:39.560 iops : min= 1790, max= 1950, avg=1884.00, stdev=50.06, samples=10 00:31:39.560 lat (msec) : 2=0.08%, 4=36.37%, 10=63.55% 00:31:39.560 cpu : usr=94.22%, sys=5.28%, ctx=7, majf=0, minf=2 00:31:39.561 IO depths : 1=0.2%, 2=2.8%, 4=69.5%, 8=27.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:39.561 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:39.561 complete : 0=0.0%, 4=92.6%, 8=7.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:39.561 issued rwts: total=9426,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:39.561 latency : target=0, window=0, percentile=100.00%, depth=8 00:31:39.561 00:31:39.561 Run status group 0 (all jobs): 00:31:39.561 READ: bw=59.2MiB/s (62.1MB/s), 14.7MiB/s-15.0MiB/s (15.4MB/s-15.7MB/s), io=296MiB (311MB), run=5001-5004msec 00:31:39.561 04:03:57 -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:31:39.561 04:03:57 -- target/dif.sh@43 -- # local sub 00:31:39.561 04:03:57 -- target/dif.sh@45 -- # for sub in "$@" 00:31:39.561 04:03:57 -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:39.561 04:03:57 -- target/dif.sh@36 -- # local sub_id=0 00:31:39.561 04:03:57 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:39.561 04:03:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:39.561 04:03:57 -- common/autotest_common.sh@10 -- # set +x 00:31:39.561 04:03:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:39.561 04:03:57 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:39.561 04:03:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:39.561 04:03:57 -- common/autotest_common.sh@10 -- # set +x 
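The create_subsystems/destroy_subsystems pairs that bracket every fio pass all go through rpc_cmd, which in this harness is a thin wrapper over SPDK's RPC client. Assuming a target started as nvmf_tgt with a TCP transport already created (the transport setup happens earlier in the suite, outside this excerpt), the lifecycle seen above and continued just below maps roughly onto the following scripts/rpc.py calls, with the sizes, serial numbers, NQNs and addresses taken from the log:

# Create: a 64 MB null bdev with 512-byte blocks, 16 bytes of metadata and DIF
# type 1, exposed as a namespace of an NVMe/TCP subsystem on 10.0.0.2:4420.
./scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
    --serial-number 53313233-0 --allow-any-host
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
    -t tcp -a 10.0.0.2 -s 4420

# Destroy: subsystem first, then the backing bdev, mirroring destroy_subsystem.
./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
./scripts/rpc.py bdev_null_delete bdev_null0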
00:31:39.561 04:03:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:39.561 04:03:57 -- target/dif.sh@45 -- # for sub in "$@" 00:31:39.561 04:03:57 -- target/dif.sh@46 -- # destroy_subsystem 1 00:31:39.561 04:03:57 -- target/dif.sh@36 -- # local sub_id=1 00:31:39.561 04:03:57 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:39.561 04:03:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:39.561 04:03:57 -- common/autotest_common.sh@10 -- # set +x 00:31:39.561 04:03:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:39.561 04:03:57 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:31:39.561 04:03:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:39.561 04:03:57 -- common/autotest_common.sh@10 -- # set +x 00:31:39.561 04:03:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:39.561 00:31:39.561 real 0m24.591s 00:31:39.561 user 4m29.308s 00:31:39.561 sys 0m7.858s 00:31:39.561 04:03:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:39.561 04:03:57 -- common/autotest_common.sh@10 -- # set +x 00:31:39.561 ************************************ 00:31:39.561 END TEST fio_dif_rand_params 00:31:39.561 ************************************ 00:31:39.561 04:03:57 -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:31:39.561 04:03:57 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:31:39.561 04:03:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:31:39.561 04:03:57 -- common/autotest_common.sh@10 -- # set +x 00:31:39.561 ************************************ 00:31:39.561 START TEST fio_dif_digest 00:31:39.561 ************************************ 00:31:39.561 04:03:57 -- common/autotest_common.sh@1104 -- # fio_dif_digest 00:31:39.561 04:03:57 -- target/dif.sh@123 -- # local NULL_DIF 00:31:39.561 04:03:57 -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:31:39.561 04:03:57 -- target/dif.sh@125 -- # local hdgst ddgst 00:31:39.561 04:03:57 -- target/dif.sh@127 -- # NULL_DIF=3 00:31:39.561 04:03:57 -- target/dif.sh@127 -- # bs=128k,128k,128k 00:31:39.561 04:03:57 -- target/dif.sh@127 -- # numjobs=3 00:31:39.561 04:03:57 -- target/dif.sh@127 -- # iodepth=3 00:31:39.561 04:03:57 -- target/dif.sh@127 -- # runtime=10 00:31:39.561 04:03:57 -- target/dif.sh@128 -- # hdgst=true 00:31:39.561 04:03:57 -- target/dif.sh@128 -- # ddgst=true 00:31:39.561 04:03:57 -- target/dif.sh@130 -- # create_subsystems 0 00:31:39.561 04:03:57 -- target/dif.sh@28 -- # local sub 00:31:39.561 04:03:57 -- target/dif.sh@30 -- # for sub in "$@" 00:31:39.561 04:03:57 -- target/dif.sh@31 -- # create_subsystem 0 00:31:39.561 04:03:57 -- target/dif.sh@18 -- # local sub_id=0 00:31:39.561 04:03:57 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:31:39.561 04:03:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:39.561 04:03:57 -- common/autotest_common.sh@10 -- # set +x 00:31:39.561 bdev_null0 00:31:39.561 04:03:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:39.561 04:03:58 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:39.561 04:03:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:39.561 04:03:58 -- common/autotest_common.sh@10 -- # set +x 00:31:39.561 04:03:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:39.561 04:03:58 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 
bdev_null0 00:31:39.561 04:03:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:39.561 04:03:58 -- common/autotest_common.sh@10 -- # set +x 00:31:39.561 04:03:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:39.561 04:03:58 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:39.561 04:03:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:39.561 04:03:58 -- common/autotest_common.sh@10 -- # set +x 00:31:39.561 [2024-07-14 04:03:58.026703] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:39.561 04:03:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:39.561 04:03:58 -- target/dif.sh@131 -- # fio /dev/fd/62 00:31:39.561 04:03:58 -- target/dif.sh@131 -- # create_json_sub_conf 0 00:31:39.561 04:03:58 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:31:39.561 04:03:58 -- nvmf/common.sh@520 -- # config=() 00:31:39.561 04:03:58 -- nvmf/common.sh@520 -- # local subsystem config 00:31:39.561 04:03:58 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:39.561 04:03:58 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:31:39.561 04:03:58 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:31:39.561 { 00:31:39.561 "params": { 00:31:39.561 "name": "Nvme$subsystem", 00:31:39.561 "trtype": "$TEST_TRANSPORT", 00:31:39.561 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:39.561 "adrfam": "ipv4", 00:31:39.561 "trsvcid": "$NVMF_PORT", 00:31:39.561 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:39.561 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:39.561 "hdgst": ${hdgst:-false}, 00:31:39.561 "ddgst": ${ddgst:-false} 00:31:39.561 }, 00:31:39.561 "method": "bdev_nvme_attach_controller" 00:31:39.561 } 00:31:39.561 EOF 00:31:39.561 )") 00:31:39.561 04:03:58 -- common/autotest_common.sh@1335 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:39.561 04:03:58 -- target/dif.sh@82 -- # gen_fio_conf 00:31:39.561 04:03:58 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:31:39.561 04:03:58 -- target/dif.sh@54 -- # local file 00:31:39.561 04:03:58 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:39.561 04:03:58 -- common/autotest_common.sh@1318 -- # local sanitizers 00:31:39.561 04:03:58 -- target/dif.sh@56 -- # cat 00:31:39.561 04:03:58 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:39.561 04:03:58 -- common/autotest_common.sh@1320 -- # shift 00:31:39.561 04:03:58 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:31:39.561 04:03:58 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:31:39.561 04:03:58 -- nvmf/common.sh@542 -- # cat 00:31:39.561 04:03:58 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:39.561 04:03:58 -- target/dif.sh@72 -- # (( file = 1 )) 00:31:39.561 04:03:58 -- target/dif.sh@72 -- # (( file <= files )) 00:31:39.561 04:03:58 -- common/autotest_common.sh@1324 -- # grep libasan 00:31:39.561 04:03:58 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:31:39.561 04:03:58 -- nvmf/common.sh@544 -- # jq . 
00:31:39.561 04:03:58 -- nvmf/common.sh@545 -- # IFS=, 00:31:39.561 04:03:58 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:31:39.561 "params": { 00:31:39.561 "name": "Nvme0", 00:31:39.561 "trtype": "tcp", 00:31:39.561 "traddr": "10.0.0.2", 00:31:39.561 "adrfam": "ipv4", 00:31:39.561 "trsvcid": "4420", 00:31:39.561 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:39.561 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:39.561 "hdgst": true, 00:31:39.561 "ddgst": true 00:31:39.561 }, 00:31:39.561 "method": "bdev_nvme_attach_controller" 00:31:39.561 }' 00:31:39.561 04:03:58 -- common/autotest_common.sh@1324 -- # asan_lib= 00:31:39.561 04:03:58 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:31:39.561 04:03:58 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:31:39.561 04:03:58 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:39.561 04:03:58 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:31:39.561 04:03:58 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:31:39.561 04:03:58 -- common/autotest_common.sh@1324 -- # asan_lib= 00:31:39.561 04:03:58 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:31:39.561 04:03:58 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:39.561 04:03:58 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:39.561 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:31:39.561 ... 00:31:39.561 fio-3.35 00:31:39.561 Starting 3 threads 00:31:39.561 EAL: No free 2048 kB hugepages reported on node 1 00:31:39.819 [2024-07-14 04:03:58.697010] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
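For this digest pass the target-side bdev was created with --dif-type 3 (protection information carried in the 16-byte metadata), and the only initiator-side change is visible in the config just printed: "hdgst": true and "ddgst": true on bdev_nvme_attach_controller, which enable NVMe/TCP header and data digests for the connection; they are connection properties, not fio options. The fio side reduces to the parameters set at dif.sh@127 (bs=128k, iodepth=3, numjobs=3, runtime=10). A hand-written job file equivalent might look like the sketch below; the digest.fio name and the Nvme0n1 filename follow SPDK's usual controller-to-bdev naming and are assumptions, as is time_based, which the roughly 10-second run times suggest:

# Job file matching the digest-test parameters; it is fed to the same
# LD_PRELOAD / --spdk_json_conf fio invocation shown earlier in the log.
cat > digest.fio <<'EOF'
[global]
ioengine=spdk_bdev
thread=1
rw=randread
bs=128k
iodepth=3
numjobs=3
time_based=1
runtime=10

[filename0]
filename=Nvme0n1
EOF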
00:31:39.819 [2024-07-14 04:03:58.697099] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:31:52.024 00:31:52.024 filename0: (groupid=0, jobs=1): err= 0: pid=2527626: Sun Jul 14 04:04:08 2024 00:31:52.024 read: IOPS=201, BW=25.1MiB/s (26.4MB/s)(252MiB/10007msec) 00:31:52.024 slat (nsec): min=4613, max=52742, avg=15457.73, stdev=5697.14 00:31:52.024 clat (usec): min=8332, max=58605, avg=14891.87, stdev=6657.94 00:31:52.024 lat (usec): min=8347, max=58617, avg=14907.33, stdev=6657.75 00:31:52.024 clat percentiles (usec): 00:31:52.024 | 1.00th=[ 9503], 5.00th=[10290], 10.00th=[11076], 20.00th=[12780], 00:31:52.024 | 30.00th=[13435], 40.00th=[13829], 50.00th=[14091], 60.00th=[14484], 00:31:52.024 | 70.00th=[14877], 80.00th=[15139], 90.00th=[15795], 95.00th=[16581], 00:31:52.024 | 99.00th=[55313], 99.50th=[55837], 99.90th=[57410], 99.95th=[57410], 00:31:52.024 | 99.99th=[58459] 00:31:52.024 bw ( KiB/s): min=22528, max=29184, per=33.42%, avg=25740.80, stdev=2043.74, samples=20 00:31:52.024 iops : min= 176, max= 228, avg=201.10, stdev=15.97, samples=20 00:31:52.024 lat (msec) : 10=3.03%, 20=94.29%, 50=0.20%, 100=2.48% 00:31:52.024 cpu : usr=91.01%, sys=8.44%, ctx=24, majf=0, minf=131 00:31:52.024 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:52.024 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:52.024 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:52.024 issued rwts: total=2013,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:52.024 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:52.024 filename0: (groupid=0, jobs=1): err= 0: pid=2527627: Sun Jul 14 04:04:08 2024 00:31:52.024 read: IOPS=191, BW=23.9MiB/s (25.1MB/s)(240MiB/10005msec) 00:31:52.024 slat (nsec): min=4896, max=58406, avg=13723.22, stdev=3872.01 00:31:52.024 clat (usec): min=7487, max=59497, avg=15647.40, stdev=7464.65 00:31:52.024 lat (usec): min=7499, max=59510, avg=15661.12, stdev=7464.90 00:31:52.024 clat percentiles (usec): 00:31:52.024 | 1.00th=[ 9241], 5.00th=[10290], 10.00th=[11863], 20.00th=[13304], 00:31:52.024 | 30.00th=[13829], 40.00th=[14353], 50.00th=[14615], 60.00th=[15008], 00:31:52.024 | 70.00th=[15401], 80.00th=[15926], 90.00th=[16712], 95.00th=[17695], 00:31:52.024 | 99.00th=[56886], 99.50th=[57410], 99.90th=[58983], 99.95th=[59507], 00:31:52.024 | 99.99th=[59507] 00:31:52.024 bw ( KiB/s): min=19200, max=30208, per=31.79%, avg=24488.70, stdev=2861.90, samples=20 00:31:52.024 iops : min= 150, max= 236, avg=191.30, stdev=22.37, samples=20 00:31:52.024 lat (msec) : 10=3.34%, 20=93.48%, 50=0.16%, 100=3.03% 00:31:52.024 cpu : usr=91.05%, sys=8.45%, ctx=17, majf=0, minf=159 00:31:52.024 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:52.024 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:52.024 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:52.024 issued rwts: total=1916,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:52.024 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:52.024 filename0: (groupid=0, jobs=1): err= 0: pid=2527628: Sun Jul 14 04:04:08 2024 00:31:52.024 read: IOPS=210, BW=26.3MiB/s (27.6MB/s)(264MiB/10044msec) 00:31:52.024 slat (nsec): min=4638, max=56447, avg=14085.32, stdev=3911.66 00:31:52.024 clat (usec): min=6757, max=58757, avg=14208.94, stdev=4174.91 00:31:52.024 lat (usec): min=6770, max=58770, avg=14223.03, stdev=4174.75 00:31:52.024 clat percentiles 
(usec): 00:31:52.024 | 1.00th=[ 8979], 5.00th=[ 9896], 10.00th=[10552], 20.00th=[12387], 00:31:52.024 | 30.00th=[13435], 40.00th=[13960], 50.00th=[14353], 60.00th=[14746], 00:31:52.024 | 70.00th=[15139], 80.00th=[15533], 90.00th=[16057], 95.00th=[16450], 00:31:52.024 | 99.00th=[18220], 99.50th=[53216], 99.90th=[57934], 99.95th=[57934], 00:31:52.024 | 99.99th=[58983] 00:31:52.024 bw ( KiB/s): min=23296, max=29952, per=35.11%, avg=27046.40, stdev=1919.82, samples=20 00:31:52.024 iops : min= 182, max= 234, avg=211.30, stdev=15.00, samples=20 00:31:52.024 lat (msec) : 10=5.82%, 20=93.24%, 50=0.19%, 100=0.76% 00:31:52.024 cpu : usr=90.40%, sys=9.07%, ctx=23, majf=0, minf=126 00:31:52.024 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:52.024 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:52.024 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:52.024 issued rwts: total=2115,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:52.024 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:52.024 00:31:52.024 Run status group 0 (all jobs): 00:31:52.024 READ: bw=75.2MiB/s (78.9MB/s), 23.9MiB/s-26.3MiB/s (25.1MB/s-27.6MB/s), io=756MiB (792MB), run=10005-10044msec 00:31:52.024 04:04:09 -- target/dif.sh@132 -- # destroy_subsystems 0 00:31:52.024 04:04:09 -- target/dif.sh@43 -- # local sub 00:31:52.024 04:04:09 -- target/dif.sh@45 -- # for sub in "$@" 00:31:52.024 04:04:09 -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:52.024 04:04:09 -- target/dif.sh@36 -- # local sub_id=0 00:31:52.024 04:04:09 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:52.025 04:04:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:52.025 04:04:09 -- common/autotest_common.sh@10 -- # set +x 00:31:52.025 04:04:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:52.025 04:04:09 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:52.025 04:04:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:52.025 04:04:09 -- common/autotest_common.sh@10 -- # set +x 00:31:52.025 04:04:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:52.025 00:31:52.025 real 0m11.159s 00:31:52.025 user 0m28.435s 00:31:52.025 sys 0m2.894s 00:31:52.025 04:04:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:52.025 04:04:09 -- common/autotest_common.sh@10 -- # set +x 00:31:52.025 ************************************ 00:31:52.025 END TEST fio_dif_digest 00:31:52.025 ************************************ 00:31:52.025 04:04:09 -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:31:52.025 04:04:09 -- target/dif.sh@147 -- # nvmftestfini 00:31:52.025 04:04:09 -- nvmf/common.sh@476 -- # nvmfcleanup 00:31:52.025 04:04:09 -- nvmf/common.sh@116 -- # sync 00:31:52.025 04:04:09 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:31:52.025 04:04:09 -- nvmf/common.sh@119 -- # set +e 00:31:52.025 04:04:09 -- nvmf/common.sh@120 -- # for i in {1..20} 00:31:52.025 04:04:09 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:31:52.025 rmmod nvme_tcp 00:31:52.025 rmmod nvme_fabrics 00:31:52.025 rmmod nvme_keyring 00:31:52.025 04:04:09 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:31:52.025 04:04:09 -- nvmf/common.sh@123 -- # set -e 00:31:52.025 04:04:09 -- nvmf/common.sh@124 -- # return 0 00:31:52.025 04:04:09 -- nvmf/common.sh@477 -- # '[' -n 2521269 ']' 00:31:52.025 04:04:09 -- nvmf/common.sh@478 -- # killprocess 2521269 00:31:52.025 04:04:09 -- 
common/autotest_common.sh@926 -- # '[' -z 2521269 ']' 00:31:52.025 04:04:09 -- common/autotest_common.sh@930 -- # kill -0 2521269 00:31:52.025 04:04:09 -- common/autotest_common.sh@931 -- # uname 00:31:52.025 04:04:09 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:31:52.025 04:04:09 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2521269 00:31:52.025 04:04:09 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:31:52.025 04:04:09 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:31:52.025 04:04:09 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2521269' 00:31:52.025 killing process with pid 2521269 00:31:52.025 04:04:09 -- common/autotest_common.sh@945 -- # kill 2521269 00:31:52.025 04:04:09 -- common/autotest_common.sh@950 -- # wait 2521269 00:31:52.025 04:04:09 -- nvmf/common.sh@480 -- # '[' iso == iso ']' 00:31:52.025 04:04:09 -- nvmf/common.sh@481 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:31:52.025 Waiting for block devices as requested 00:31:52.025 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:31:52.025 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:31:52.025 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:31:52.025 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:31:52.283 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:31:52.283 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:31:52.283 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:31:52.283 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:31:52.541 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:31:52.541 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:31:52.541 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:31:52.541 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:31:52.800 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:31:52.800 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:31:52.800 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:31:53.057 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:31:53.057 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:31:53.057 04:04:11 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:31:53.057 04:04:11 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:31:53.057 04:04:11 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:53.057 04:04:11 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:31:53.057 04:04:11 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:53.057 04:04:11 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:31:53.057 04:04:11 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:55.586 04:04:13 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:31:55.586 00:31:55.586 real 1m7.391s 00:31:55.586 user 6m26.666s 00:31:55.586 sys 0m19.511s 00:31:55.586 04:04:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:55.586 04:04:13 -- common/autotest_common.sh@10 -- # set +x 00:31:55.586 ************************************ 00:31:55.586 END TEST nvmf_dif 00:31:55.586 ************************************ 00:31:55.586 04:04:13 -- spdk/autotest.sh@301 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:31:55.586 04:04:13 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:31:55.586 04:04:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:31:55.586 04:04:13 -- common/autotest_common.sh@10 -- # set +x 00:31:55.586 ************************************ 00:31:55.586 START TEST nvmf_abort_qd_sizes 
00:31:55.586 ************************************ 00:31:55.586 04:04:13 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:31:55.586 * Looking for test storage... 00:31:55.586 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:55.586 04:04:14 -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:55.586 04:04:14 -- nvmf/common.sh@7 -- # uname -s 00:31:55.586 04:04:14 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:55.586 04:04:14 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:55.586 04:04:14 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:55.586 04:04:14 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:55.586 04:04:14 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:55.586 04:04:14 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:55.586 04:04:14 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:55.586 04:04:14 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:55.586 04:04:14 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:55.586 04:04:14 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:55.586 04:04:14 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:31:55.586 04:04:14 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:31:55.586 04:04:14 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:55.586 04:04:14 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:55.586 04:04:14 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:55.586 04:04:14 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:55.586 04:04:14 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:55.586 04:04:14 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:55.586 04:04:14 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:55.586 04:04:14 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:55.586 04:04:14 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:55.586 04:04:14 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:55.586 04:04:14 -- paths/export.sh@5 -- # export PATH 00:31:55.586 04:04:14 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:55.586 04:04:14 -- nvmf/common.sh@46 -- # : 0 00:31:55.586 04:04:14 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:31:55.586 04:04:14 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:31:55.586 04:04:14 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:31:55.586 04:04:14 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:55.586 04:04:14 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:55.586 04:04:14 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:31:55.586 04:04:14 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:31:55.586 04:04:14 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:31:55.586 04:04:14 -- target/abort_qd_sizes.sh@73 -- # nvmftestinit 00:31:55.586 04:04:14 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:31:55.586 04:04:14 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:55.586 04:04:14 -- nvmf/common.sh@436 -- # prepare_net_devs 00:31:55.586 04:04:14 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:31:55.586 04:04:14 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:31:55.586 04:04:14 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:55.586 04:04:14 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:31:55.586 04:04:14 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:55.586 04:04:14 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:31:55.586 04:04:14 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:31:55.586 04:04:14 -- nvmf/common.sh@284 -- # xtrace_disable 00:31:55.586 04:04:14 -- common/autotest_common.sh@10 -- # set +x 00:31:56.962 04:04:15 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:31:56.962 04:04:15 -- nvmf/common.sh@290 -- # pci_devs=() 00:31:56.962 04:04:15 -- nvmf/common.sh@290 -- # local -a pci_devs 00:31:56.962 04:04:15 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:31:56.962 04:04:15 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:31:56.962 04:04:15 -- nvmf/common.sh@292 -- # pci_drivers=() 00:31:56.962 04:04:15 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:31:56.962 04:04:15 -- nvmf/common.sh@294 -- # net_devs=() 00:31:56.962 04:04:15 -- nvmf/common.sh@294 -- # local -ga net_devs 00:31:56.962 04:04:15 -- nvmf/common.sh@295 -- # e810=() 00:31:56.962 04:04:15 -- nvmf/common.sh@295 -- # local -ga e810 00:31:56.962 04:04:15 -- nvmf/common.sh@296 -- # x722=() 00:31:56.962 04:04:15 -- nvmf/common.sh@296 -- # local -ga x722 00:31:56.962 04:04:15 -- nvmf/common.sh@297 -- # mlx=() 00:31:56.962 04:04:15 -- nvmf/common.sh@297 -- # local -ga mlx 00:31:56.962 04:04:15 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:56.962 04:04:15 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:56.962 04:04:15 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:56.962 04:04:15 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:56.962 04:04:15 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:56.962 04:04:15 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:56.962 04:04:15 -- nvmf/common.sh@311 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:56.962 04:04:15 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:56.962 04:04:15 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:56.962 04:04:15 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:56.962 04:04:15 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:56.962 04:04:15 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:31:56.962 04:04:15 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:31:56.962 04:04:15 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:31:56.962 04:04:15 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:31:56.962 04:04:15 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:31:56.962 04:04:15 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:31:56.962 04:04:15 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:31:56.962 04:04:15 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:31:56.962 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:31:56.962 04:04:15 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:31:56.962 04:04:15 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:31:56.962 04:04:15 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:56.962 04:04:15 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:56.962 04:04:15 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:31:56.962 04:04:15 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:31:56.962 04:04:15 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:31:56.962 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:31:56.962 04:04:15 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:31:56.962 04:04:15 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:31:56.962 04:04:15 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:56.962 04:04:15 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:56.962 04:04:15 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:31:56.962 04:04:15 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:31:56.962 04:04:15 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:31:56.962 04:04:15 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:31:56.962 04:04:15 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:31:56.962 04:04:15 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:56.962 04:04:15 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:31:56.962 04:04:15 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:56.962 04:04:15 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:31:56.962 Found net devices under 0000:0a:00.0: cvl_0_0 00:31:56.962 04:04:15 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:31:56.962 04:04:15 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:31:56.962 04:04:15 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:56.962 04:04:15 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:31:56.962 04:04:15 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:56.962 04:04:15 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:31:56.962 Found net devices under 0000:0a:00.1: cvl_0_1 00:31:56.962 04:04:15 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:31:56.962 04:04:15 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:31:56.962 04:04:15 -- nvmf/common.sh@402 -- # is_hw=yes 00:31:56.962 04:04:15 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:31:56.962 04:04:15 -- 
nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:31:56.962 04:04:15 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:31:56.962 04:04:15 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:56.962 04:04:15 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:56.962 04:04:15 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:56.962 04:04:15 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:31:56.962 04:04:15 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:56.962 04:04:15 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:56.962 04:04:15 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:31:56.962 04:04:15 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:56.962 04:04:15 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:56.962 04:04:15 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:31:56.962 04:04:15 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:31:56.962 04:04:15 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:31:56.962 04:04:15 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:57.254 04:04:15 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:57.254 04:04:15 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:57.254 04:04:15 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:31:57.254 04:04:15 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:57.254 04:04:15 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:57.254 04:04:15 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:57.254 04:04:15 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:31:57.254 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:57.254 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.174 ms 00:31:57.255 00:31:57.255 --- 10.0.0.2 ping statistics --- 00:31:57.255 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:57.255 rtt min/avg/max/mdev = 0.174/0.174/0.174/0.000 ms 00:31:57.255 04:04:15 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:57.255 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:57.255 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.204 ms 00:31:57.255 00:31:57.255 --- 10.0.0.1 ping statistics --- 00:31:57.255 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:57.255 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:31:57.255 04:04:15 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:57.255 04:04:15 -- nvmf/common.sh@410 -- # return 0 00:31:57.255 04:04:15 -- nvmf/common.sh@438 -- # '[' iso == iso ']' 00:31:57.255 04:04:15 -- nvmf/common.sh@439 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:31:58.203 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:31:58.203 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:31:58.203 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:31:58.203 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:31:58.203 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:31:58.203 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:31:58.203 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:31:58.203 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:31:58.203 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:31:58.203 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:31:58.462 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:31:58.462 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:31:58.462 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:31:58.462 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:31:58.462 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:31:58.462 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:31:59.401 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:31:59.401 04:04:18 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:59.401 04:04:18 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:31:59.401 04:04:18 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:31:59.401 04:04:18 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:59.401 04:04:18 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:31:59.401 04:04:18 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:31:59.401 04:04:18 -- target/abort_qd_sizes.sh@74 -- # nvmfappstart -m 0xf 00:31:59.401 04:04:18 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:31:59.401 04:04:18 -- common/autotest_common.sh@712 -- # xtrace_disable 00:31:59.401 04:04:18 -- common/autotest_common.sh@10 -- # set +x 00:31:59.401 04:04:18 -- nvmf/common.sh@469 -- # nvmfpid=2532522 00:31:59.401 04:04:18 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:31:59.401 04:04:18 -- nvmf/common.sh@470 -- # waitforlisten 2532522 00:31:59.401 04:04:18 -- common/autotest_common.sh@819 -- # '[' -z 2532522 ']' 00:31:59.401 04:04:18 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:59.401 04:04:18 -- common/autotest_common.sh@824 -- # local max_retries=100 00:31:59.401 04:04:18 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:59.401 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:59.401 04:04:18 -- common/autotest_common.sh@828 -- # xtrace_disable 00:31:59.401 04:04:18 -- common/autotest_common.sh@10 -- # set +x 00:31:59.401 [2024-07-14 04:04:18.312450] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
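Note on the setup traced above: the namespace plumbing and the target launch reduce to a handful of iproute2 and SPDK commands. A minimal sketch, assuming the two ice ports (cvl_0_0, cvl_0_1) are physically looped back, that $rootdir stands for the SPDK checkout, and that waitforlisten simply polls the RPC socket (the framework's actual helper does a little more bookkeeping):

# Move the target-side port into its own namespace and address both ends.
NS=cvl_0_0_ns_spdk
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"
ip addr add 10.0.0.1/24 dev cvl_0_1                      # initiator side, default namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0  # target side, inside the namespace
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # accept inbound NVMe/TCP (port 4420) on cvl_0_1

# Start nvmf_tgt inside the namespace and wait until its RPC socket answers.
ip netns exec "$NS" "$rootdir/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xf &
nvmfpid=$!
until "$rootdir/scripts/rpc.py" rpc_get_methods &> /dev/null; do
    kill -0 "$nvmfpid" || exit 1   # give up if the target died during startup
    sleep 0.5
done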
00:31:59.401 [2024-07-14 04:04:18.312534] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:59.659 EAL: No free 2048 kB hugepages reported on node 1 00:31:59.659 [2024-07-14 04:04:18.382875] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:59.659 [2024-07-14 04:04:18.474572] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:31:59.659 [2024-07-14 04:04:18.474732] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:59.659 [2024-07-14 04:04:18.474748] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:59.659 [2024-07-14 04:04:18.474760] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:59.659 [2024-07-14 04:04:18.476890] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:59.659 [2024-07-14 04:04:18.476940] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:31:59.659 [2024-07-14 04:04:18.477025] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:31:59.659 [2024-07-14 04:04:18.477029] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:00.598 04:04:19 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:32:00.598 04:04:19 -- common/autotest_common.sh@852 -- # return 0 00:32:00.598 04:04:19 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:32:00.598 04:04:19 -- common/autotest_common.sh@718 -- # xtrace_disable 00:32:00.598 04:04:19 -- common/autotest_common.sh@10 -- # set +x 00:32:00.598 04:04:19 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:00.598 04:04:19 -- target/abort_qd_sizes.sh@76 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:32:00.598 04:04:19 -- target/abort_qd_sizes.sh@78 -- # mapfile -t nvmes 00:32:00.598 04:04:19 -- target/abort_qd_sizes.sh@78 -- # nvme_in_userspace 00:32:00.598 04:04:19 -- scripts/common.sh@311 -- # local bdf bdfs 00:32:00.598 04:04:19 -- scripts/common.sh@312 -- # local nvmes 00:32:00.598 04:04:19 -- scripts/common.sh@314 -- # [[ -n 0000:88:00.0 ]] 00:32:00.598 04:04:19 -- scripts/common.sh@315 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:32:00.598 04:04:19 -- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}" 00:32:00.598 04:04:19 -- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:88:00.0 ]] 00:32:00.598 04:04:19 -- scripts/common.sh@322 -- # uname -s 00:32:00.598 04:04:19 -- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]] 00:32:00.598 04:04:19 -- scripts/common.sh@325 -- # bdfs+=("$bdf") 00:32:00.598 04:04:19 -- scripts/common.sh@327 -- # (( 1 )) 00:32:00.598 04:04:19 -- scripts/common.sh@328 -- # printf '%s\n' 0000:88:00.0 00:32:00.598 04:04:19 -- target/abort_qd_sizes.sh@79 -- # (( 1 > 0 )) 00:32:00.598 04:04:19 -- target/abort_qd_sizes.sh@81 -- # nvme=0000:88:00.0 00:32:00.598 04:04:19 -- target/abort_qd_sizes.sh@83 -- # run_test spdk_target_abort spdk_target 00:32:00.598 04:04:19 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:32:00.598 04:04:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:32:00.598 04:04:19 -- common/autotest_common.sh@10 -- # set +x 00:32:00.598 ************************************ 00:32:00.598 START TEST 
spdk_target_abort 00:32:00.598 ************************************ 00:32:00.598 04:04:19 -- common/autotest_common.sh@1104 -- # spdk_target 00:32:00.598 04:04:19 -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:32:00.598 04:04:19 -- target/abort_qd_sizes.sh@44 -- # local subnqn=nqn.2016-06.io.spdk:spdk_target 00:32:00.598 04:04:19 -- target/abort_qd_sizes.sh@46 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:88:00.0 -b spdk_target 00:32:00.598 04:04:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:00.598 04:04:19 -- common/autotest_common.sh@10 -- # set +x 00:32:03.888 spdk_targetn1 00:32:03.889 04:04:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:03.889 04:04:22 -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:03.889 04:04:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:03.889 04:04:22 -- common/autotest_common.sh@10 -- # set +x 00:32:03.889 [2024-07-14 04:04:22.155748] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:03.889 04:04:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:03.889 04:04:22 -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:spdk_target -a -s SPDKISFASTANDAWESOME 00:32:03.889 04:04:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:03.889 04:04:22 -- common/autotest_common.sh@10 -- # set +x 00:32:03.889 04:04:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:03.889 04:04:22 -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:spdk_target spdk_targetn1 00:32:03.889 04:04:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:03.889 04:04:22 -- common/autotest_common.sh@10 -- # set +x 00:32:03.889 04:04:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:03.889 04:04:22 -- target/abort_qd_sizes.sh@51 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:spdk_target -t tcp -a 10.0.0.2 -s 4420 00:32:03.889 04:04:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:03.889 04:04:22 -- common/autotest_common.sh@10 -- # set +x 00:32:03.889 [2024-07-14 04:04:22.188024] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:03.889 04:04:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:03.889 04:04:22 -- target/abort_qd_sizes.sh@53 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:spdk_target 00:32:03.889 04:04:22 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:32:03.889 04:04:22 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:32:03.889 04:04:22 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:32:03.889 04:04:22 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:32:03.889 04:04:22 -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:spdk_target 00:32:03.889 04:04:22 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:32:03.889 04:04:22 -- target/abort_qd_sizes.sh@24 -- # local target r 00:32:03.889 04:04:22 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:32:03.889 04:04:22 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:03.889 04:04:22 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:32:03.889 04:04:22 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:03.889 04:04:22 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:32:03.889 04:04:22 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid 
subnqn 00:32:03.889 04:04:22 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:32:03.889 04:04:22 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:03.889 04:04:22 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:32:03.889 04:04:22 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:03.889 04:04:22 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:32:03.889 04:04:22 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:03.889 04:04:22 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:32:03.889 EAL: No free 2048 kB hugepages reported on node 1 00:32:06.426 Initializing NVMe Controllers 00:32:06.426 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:32:06.426 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:32:06.426 Initialization complete. Launching workers. 00:32:06.426 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 10015, failed: 0 00:32:06.426 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 1236, failed to submit 8779 00:32:06.426 success 849, unsuccess 387, failed 0 00:32:06.426 04:04:25 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:06.426 04:04:25 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:32:06.684 EAL: No free 2048 kB hugepages reported on node 1 00:32:09.967 Initializing NVMe Controllers 00:32:09.967 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:32:09.967 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:32:09.967 Initialization complete. Launching workers. 00:32:09.967 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 8626, failed: 0 00:32:09.967 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 1244, failed to submit 7382 00:32:09.967 success 314, unsuccess 930, failed 0 00:32:09.967 04:04:28 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:09.967 04:04:28 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:32:09.967 EAL: No free 2048 kB hugepages reported on node 1 00:32:13.256 Initializing NVMe Controllers 00:32:13.256 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:32:13.256 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:32:13.256 Initialization complete. Launching workers. 
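Everything spdk_target_abort does on the target side goes through the JSON-RPC interface, and the load itself comes from the bundled abort example; the three runs above only vary the queue depth. A condensed sketch using the same flags as the trace, where rpc is a stand-in for the framework's rpc_cmd (scripts/rpc.py against the running target) and $rootdir for the SPDK checkout:

rpc() { "$rootdir/scripts/rpc.py" "$@"; }

# Expose the local NVMe drive at 0000:88:00.0 as a TCP subsystem on 10.0.0.2:4420.
rpc bdev_nvme_attach_controller -t pcie -a 0000:88:00.0 -b spdk_target    # creates bdev spdk_targetn1
rpc nvmf_create_transport -t tcp -o -u 8192
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:spdk_target -a -s SPDKISFASTANDAWESOME
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:spdk_target spdk_targetn1
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:spdk_target -t tcp -a 10.0.0.2 -s 4420

# Mixed I/O (50% reads, 4 KiB blocks) with aborts, at each queue depth exercised above.
target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target'
for qd in 4 24 64; do
    "$rootdir/build/examples/abort" -q "$qd" -w rw -M 50 -o 4096 -r "$target"
done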
00:32:13.256 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 31793, failed: 0 00:32:13.256 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 2569, failed to submit 29224 00:32:13.256 success 553, unsuccess 2016, failed 0 00:32:13.256 04:04:31 -- target/abort_qd_sizes.sh@55 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:spdk_target 00:32:13.256 04:04:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:13.256 04:04:31 -- common/autotest_common.sh@10 -- # set +x 00:32:13.256 04:04:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:13.256 04:04:31 -- target/abort_qd_sizes.sh@56 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:32:13.256 04:04:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:13.256 04:04:31 -- common/autotest_common.sh@10 -- # set +x 00:32:14.634 04:04:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:14.634 04:04:33 -- target/abort_qd_sizes.sh@62 -- # killprocess 2532522 00:32:14.634 04:04:33 -- common/autotest_common.sh@926 -- # '[' -z 2532522 ']' 00:32:14.634 04:04:33 -- common/autotest_common.sh@930 -- # kill -0 2532522 00:32:14.634 04:04:33 -- common/autotest_common.sh@931 -- # uname 00:32:14.634 04:04:33 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:32:14.634 04:04:33 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2532522 00:32:14.634 04:04:33 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:32:14.634 04:04:33 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:32:14.634 04:04:33 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2532522' 00:32:14.634 killing process with pid 2532522 00:32:14.634 04:04:33 -- common/autotest_common.sh@945 -- # kill 2532522 00:32:14.634 04:04:33 -- common/autotest_common.sh@950 -- # wait 2532522 00:32:14.634 00:32:14.634 real 0m14.238s 00:32:14.634 user 0m55.999s 00:32:14.635 sys 0m2.911s 00:32:14.635 04:04:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:14.635 04:04:33 -- common/autotest_common.sh@10 -- # set +x 00:32:14.635 ************************************ 00:32:14.635 END TEST spdk_target_abort 00:32:14.635 ************************************ 00:32:14.892 04:04:33 -- target/abort_qd_sizes.sh@84 -- # run_test kernel_target_abort kernel_target 00:32:14.892 04:04:33 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:32:14.892 04:04:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:32:14.892 04:04:33 -- common/autotest_common.sh@10 -- # set +x 00:32:14.892 ************************************ 00:32:14.892 START TEST kernel_target_abort 00:32:14.892 ************************************ 00:32:14.892 04:04:33 -- common/autotest_common.sh@1104 -- # kernel_target 00:32:14.892 04:04:33 -- target/abort_qd_sizes.sh@66 -- # local name=kernel_target 00:32:14.892 04:04:33 -- target/abort_qd_sizes.sh@68 -- # configure_kernel_target kernel_target 00:32:14.892 04:04:33 -- nvmf/common.sh@621 -- # kernel_name=kernel_target 00:32:14.892 04:04:33 -- nvmf/common.sh@622 -- # nvmet=/sys/kernel/config/nvmet 00:32:14.892 04:04:33 -- nvmf/common.sh@623 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/kernel_target 00:32:14.892 04:04:33 -- nvmf/common.sh@624 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:32:14.892 04:04:33 -- nvmf/common.sh@625 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:32:14.892 04:04:33 -- nvmf/common.sh@627 -- # local block nvme 00:32:14.892 04:04:33 
-- nvmf/common.sh@629 -- # [[ ! -e /sys/module/nvmet ]] 00:32:14.892 04:04:33 -- nvmf/common.sh@630 -- # modprobe nvmet 00:32:14.892 04:04:33 -- nvmf/common.sh@633 -- # [[ -e /sys/kernel/config/nvmet ]] 00:32:14.892 04:04:33 -- nvmf/common.sh@635 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:32:15.827 Waiting for block devices as requested 00:32:15.827 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:32:16.087 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:32:16.087 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:32:16.087 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:32:16.346 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:32:16.346 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:32:16.346 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:32:16.346 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:32:16.603 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:32:16.603 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:32:16.603 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:32:16.604 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:32:16.861 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:32:16.862 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:32:16.862 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:32:16.862 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:32:16.862 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:32:17.121 04:04:35 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:32:17.121 04:04:35 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme0n1 ]] 00:32:17.121 04:04:35 -- nvmf/common.sh@640 -- # block_in_use nvme0n1 00:32:17.121 04:04:35 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:32:17.121 04:04:35 -- scripts/common.sh@389 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:32:17.121 No valid GPT data, bailing 00:32:17.121 04:04:35 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:32:17.121 04:04:35 -- scripts/common.sh@393 -- # pt= 00:32:17.121 04:04:35 -- scripts/common.sh@394 -- # return 1 00:32:17.121 04:04:35 -- nvmf/common.sh@640 -- # nvme=/dev/nvme0n1 00:32:17.121 04:04:35 -- nvmf/common.sh@643 -- # [[ -b /dev/nvme0n1 ]] 00:32:17.121 04:04:35 -- nvmf/common.sh@645 -- # mkdir /sys/kernel/config/nvmet/subsystems/kernel_target 00:32:17.121 04:04:35 -- nvmf/common.sh@646 -- # mkdir /sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:32:17.121 04:04:35 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:32:17.121 04:04:35 -- nvmf/common.sh@652 -- # echo SPDK-kernel_target 00:32:17.121 04:04:35 -- nvmf/common.sh@654 -- # echo 1 00:32:17.121 04:04:35 -- nvmf/common.sh@655 -- # echo /dev/nvme0n1 00:32:17.121 04:04:35 -- nvmf/common.sh@656 -- # echo 1 00:32:17.121 04:04:35 -- nvmf/common.sh@662 -- # echo 10.0.0.1 00:32:17.121 04:04:35 -- nvmf/common.sh@663 -- # echo tcp 00:32:17.121 04:04:35 -- nvmf/common.sh@664 -- # echo 4420 00:32:17.121 04:04:35 -- nvmf/common.sh@665 -- # echo ipv4 00:32:17.121 04:04:35 -- nvmf/common.sh@668 -- # ln -s /sys/kernel/config/nvmet/subsystems/kernel_target /sys/kernel/config/nvmet/ports/1/subsystems/ 00:32:17.121 04:04:35 -- nvmf/common.sh@671 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:32:17.121 00:32:17.121 Discovery Log Number of Records 2, Generation counter 2 00:32:17.121 =====Discovery Log Entry 0====== 00:32:17.121 trtype: tcp 00:32:17.121 adrfam: ipv4 00:32:17.121 
subtype: current discovery subsystem 00:32:17.121 treq: not specified, sq flow control disable supported 00:32:17.121 portid: 1 00:32:17.121 trsvcid: 4420 00:32:17.121 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:32:17.121 traddr: 10.0.0.1 00:32:17.121 eflags: none 00:32:17.121 sectype: none 00:32:17.121 =====Discovery Log Entry 1====== 00:32:17.121 trtype: tcp 00:32:17.121 adrfam: ipv4 00:32:17.121 subtype: nvme subsystem 00:32:17.121 treq: not specified, sq flow control disable supported 00:32:17.121 portid: 1 00:32:17.121 trsvcid: 4420 00:32:17.121 subnqn: kernel_target 00:32:17.121 traddr: 10.0.0.1 00:32:17.121 eflags: none 00:32:17.121 sectype: none 00:32:17.121 04:04:36 -- target/abort_qd_sizes.sh@69 -- # rabort tcp IPv4 10.0.0.1 4420 kernel_target 00:32:17.121 04:04:36 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:32:17.121 04:04:36 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:32:17.121 04:04:36 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:32:17.121 04:04:36 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:32:17.121 04:04:36 -- target/abort_qd_sizes.sh@21 -- # local subnqn=kernel_target 00:32:17.121 04:04:36 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:32:17.121 04:04:36 -- target/abort_qd_sizes.sh@24 -- # local target r 00:32:17.121 04:04:36 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:32:17.121 04:04:36 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:17.121 04:04:36 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:32:17.121 04:04:36 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:17.121 04:04:36 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:32:17.122 04:04:36 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:17.122 04:04:36 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:32:17.122 04:04:36 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:17.122 04:04:36 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:32:17.122 04:04:36 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:17.122 04:04:36 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:32:17.122 04:04:36 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:17.122 04:04:36 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:32:17.122 EAL: No free 2048 kB hugepages reported on node 1 00:32:20.430 Initializing NVMe Controllers 00:32:20.430 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:32:20.430 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:32:20.430 Initialization complete. Launching workers. 
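The kernel_target_abort path needs no SPDK target at all: the stock Linux nvmet driver exports /dev/nvme0n1 over TCP on 10.0.0.1:4420 (the address assigned to cvl_0_1 earlier), configured entirely through configfs. The trace only shows the echo half of each redirection, so the attribute files below follow the standard nvmet configfs layout rather than being a verbatim copy of nvmf/common.sh:

modprobe nvmet
sub=/sys/kernel/config/nvmet/subsystems/kernel_target
port=/sys/kernel/config/nvmet/ports/1
mkdir -p "$sub/namespaces/1" "$port"
echo SPDK-kernel_target > "$sub/attr_serial"          # cosmetic identifier (could equally be attr_model)
echo 1 > "$sub/attr_allow_any_host"
echo /dev/nvme0n1 > "$sub/namespaces/1/device_path"   # back the namespace with the raw NVMe drive
echo 1 > "$sub/namespaces/1/enable"
echo 10.0.0.1 > "$port/addr_traddr"
echo tcp > "$port/addr_trtype"
echo 4420 > "$port/addr_trsvcid"
echo ipv4 > "$port/addr_adrfam"
ln -s "$sub" "$port/subsystems/"                      # linking the subsystem activates the listener
nvme discover -t tcp -a 10.0.0.1 -s 4420              # should report kernel_target, as it does above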
00:32:20.430 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 27684, failed: 0 00:32:20.430 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 27684, failed to submit 0 00:32:20.430 success 0, unsuccess 27684, failed 0 00:32:20.431 04:04:39 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:20.431 04:04:39 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:32:20.431 EAL: No free 2048 kB hugepages reported on node 1 00:32:23.719 Initializing NVMe Controllers 00:32:23.719 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:32:23.719 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:32:23.719 Initialization complete. Launching workers. 00:32:23.719 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 57095, failed: 0 00:32:23.719 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 14354, failed to submit 42741 00:32:23.719 success 0, unsuccess 14354, failed 0 00:32:23.719 04:04:42 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:23.719 04:04:42 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:32:23.719 EAL: No free 2048 kB hugepages reported on node 1 00:32:27.005 Initializing NVMe Controllers 00:32:27.005 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:32:27.005 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:32:27.005 Initialization complete. Launching workers. 
00:32:27.005 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 55712, failed: 0 00:32:27.005 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 13886, failed to submit 41826 00:32:27.005 success 0, unsuccess 13886, failed 0 00:32:27.005 04:04:45 -- target/abort_qd_sizes.sh@70 -- # clean_kernel_target 00:32:27.005 04:04:45 -- nvmf/common.sh@675 -- # [[ -e /sys/kernel/config/nvmet/subsystems/kernel_target ]] 00:32:27.005 04:04:45 -- nvmf/common.sh@677 -- # echo 0 00:32:27.005 04:04:45 -- nvmf/common.sh@679 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/kernel_target 00:32:27.005 04:04:45 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:32:27.005 04:04:45 -- nvmf/common.sh@681 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:32:27.005 04:04:45 -- nvmf/common.sh@682 -- # rmdir /sys/kernel/config/nvmet/subsystems/kernel_target 00:32:27.005 04:04:45 -- nvmf/common.sh@684 -- # modules=(/sys/module/nvmet/holders/*) 00:32:27.005 04:04:45 -- nvmf/common.sh@686 -- # modprobe -r nvmet_tcp nvmet 00:32:27.005 00:32:27.005 real 0m11.826s 00:32:27.005 user 0m4.040s 00:32:27.005 sys 0m2.490s 00:32:27.005 04:04:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:27.005 04:04:45 -- common/autotest_common.sh@10 -- # set +x 00:32:27.005 ************************************ 00:32:27.005 END TEST kernel_target_abort 00:32:27.005 ************************************ 00:32:27.005 04:04:45 -- target/abort_qd_sizes.sh@86 -- # trap - SIGINT SIGTERM EXIT 00:32:27.005 04:04:45 -- target/abort_qd_sizes.sh@87 -- # nvmftestfini 00:32:27.005 04:04:45 -- nvmf/common.sh@476 -- # nvmfcleanup 00:32:27.005 04:04:45 -- nvmf/common.sh@116 -- # sync 00:32:27.005 04:04:45 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:32:27.005 04:04:45 -- nvmf/common.sh@119 -- # set +e 00:32:27.005 04:04:45 -- nvmf/common.sh@120 -- # for i in {1..20} 00:32:27.005 04:04:45 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:32:27.005 rmmod nvme_tcp 00:32:27.005 rmmod nvme_fabrics 00:32:27.005 rmmod nvme_keyring 00:32:27.005 04:04:45 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:32:27.005 04:04:45 -- nvmf/common.sh@123 -- # set -e 00:32:27.005 04:04:45 -- nvmf/common.sh@124 -- # return 0 00:32:27.005 04:04:45 -- nvmf/common.sh@477 -- # '[' -n 2532522 ']' 00:32:27.005 04:04:45 -- nvmf/common.sh@478 -- # killprocess 2532522 00:32:27.005 04:04:45 -- common/autotest_common.sh@926 -- # '[' -z 2532522 ']' 00:32:27.005 04:04:45 -- common/autotest_common.sh@930 -- # kill -0 2532522 00:32:27.006 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (2532522) - No such process 00:32:27.006 04:04:45 -- common/autotest_common.sh@953 -- # echo 'Process with pid 2532522 is not found' 00:32:27.006 Process with pid 2532522 is not found 00:32:27.006 04:04:45 -- nvmf/common.sh@480 -- # '[' iso == iso ']' 00:32:27.006 04:04:45 -- nvmf/common.sh@481 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:32:27.942 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:32:27.942 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:32:27.942 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:32:27.942 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:32:27.942 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:32:27.942 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:32:27.942 0000:00:04.2 (8086 0e22): Already using the ioatdma 
driver 00:32:27.942 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:32:27.942 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:32:27.942 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:32:27.942 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:32:27.942 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:32:27.942 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:32:27.942 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:32:27.942 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:32:27.942 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:32:27.942 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:32:28.201 04:04:46 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:32:28.201 04:04:46 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:32:28.201 04:04:46 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:28.201 04:04:46 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:32:28.201 04:04:46 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:28.201 04:04:46 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:32:28.201 04:04:46 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:30.104 04:04:48 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:32:30.104 00:32:30.104 real 0m35.009s 00:32:30.104 user 1m2.351s 00:32:30.104 sys 0m8.597s 00:32:30.104 04:04:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:30.104 04:04:48 -- common/autotest_common.sh@10 -- # set +x 00:32:30.104 ************************************ 00:32:30.104 END TEST nvmf_abort_qd_sizes 00:32:30.104 ************************************ 00:32:30.104 04:04:49 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:32:30.104 04:04:49 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:32:30.104 04:04:49 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:32:30.104 04:04:49 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:32:30.104 04:04:49 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:32:30.104 04:04:49 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:32:30.104 04:04:49 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:32:30.105 04:04:49 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:32:30.105 04:04:49 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:32:30.105 04:04:49 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:32:30.105 04:04:49 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:32:30.105 04:04:49 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:32:30.105 04:04:49 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:32:30.105 04:04:49 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:32:30.105 04:04:49 -- spdk/autotest.sh@378 -- # [[ 0 -eq 1 ]] 00:32:30.105 04:04:49 -- spdk/autotest.sh@383 -- # trap - SIGINT SIGTERM EXIT 00:32:30.105 04:04:49 -- spdk/autotest.sh@385 -- # timing_enter post_cleanup 00:32:30.105 04:04:49 -- common/autotest_common.sh@712 -- # xtrace_disable 00:32:30.105 04:04:49 -- common/autotest_common.sh@10 -- # set +x 00:32:30.105 04:04:49 -- spdk/autotest.sh@386 -- # autotest_cleanup 00:32:30.105 04:04:49 -- common/autotest_common.sh@1371 -- # local autotest_es=0 00:32:30.105 04:04:49 -- common/autotest_common.sh@1372 -- # xtrace_disable 00:32:30.105 04:04:49 -- common/autotest_common.sh@10 -- # set +x 00:32:32.006 INFO: APP EXITING 00:32:32.006 INFO: killing all VMs 00:32:32.006 INFO: killing vhost app 00:32:32.006 INFO: EXIT DONE 00:32:32.939 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:32:32.939 0000:00:04.7 (8086 0e27): 
Already using the ioatdma driver 00:32:33.197 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:32:33.197 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:32:33.197 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:32:33.197 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:32:33.197 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:32:33.198 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:32:33.198 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:32:33.198 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:32:33.198 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:32:33.198 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:32:33.198 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:32:33.198 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:32:33.198 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:32:33.198 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:32:33.198 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:32:34.571 Cleaning 00:32:34.571 Removing: /var/run/dpdk/spdk0/config 00:32:34.571 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:32:34.571 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:32:34.571 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:32:34.571 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:32:34.571 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:32:34.571 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:32:34.571 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:32:34.571 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:32:34.571 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:32:34.571 Removing: /var/run/dpdk/spdk0/hugepage_info 00:32:34.571 Removing: /var/run/dpdk/spdk1/config 00:32:34.571 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:32:34.571 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:32:34.571 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:32:34.571 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:32:34.571 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:32:34.571 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:32:34.571 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:32:34.571 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:32:34.571 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:32:34.571 Removing: /var/run/dpdk/spdk1/hugepage_info 00:32:34.571 Removing: /var/run/dpdk/spdk1/mp_socket 00:32:34.571 Removing: /var/run/dpdk/spdk2/config 00:32:34.571 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:32:34.571 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:32:34.571 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:32:34.571 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:32:34.571 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:32:34.571 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:32:34.571 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:32:34.571 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:32:34.571 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:32:34.571 Removing: /var/run/dpdk/spdk2/hugepage_info 00:32:34.571 Removing: /var/run/dpdk/spdk3/config 00:32:34.571 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:32:34.571 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:32:34.571 Removing: 
/var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:32:34.571 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:32:34.571 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:32:34.571 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:32:34.571 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:32:34.571 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:32:34.571 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:32:34.571 Removing: /var/run/dpdk/spdk3/hugepage_info 00:32:34.571 Removing: /var/run/dpdk/spdk4/config 00:32:34.571 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:32:34.571 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:32:34.571 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:32:34.571 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:32:34.571 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:32:34.571 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:32:34.571 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:32:34.571 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:32:34.571 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:32:34.571 Removing: /var/run/dpdk/spdk4/hugepage_info 00:32:34.571 Removing: /dev/shm/bdev_svc_trace.1 00:32:34.571 Removing: /dev/shm/nvmf_trace.0 00:32:34.571 Removing: /dev/shm/spdk_tgt_trace.pid2257268 00:32:34.571 Removing: /var/run/dpdk/spdk0 00:32:34.572 Removing: /var/run/dpdk/spdk1 00:32:34.572 Removing: /var/run/dpdk/spdk2 00:32:34.572 Removing: /var/run/dpdk/spdk3 00:32:34.572 Removing: /var/run/dpdk/spdk4 00:32:34.572 Removing: /var/run/dpdk/spdk_pid2255571 00:32:34.572 Removing: /var/run/dpdk/spdk_pid2256319 00:32:34.572 Removing: /var/run/dpdk/spdk_pid2257268 00:32:34.572 Removing: /var/run/dpdk/spdk_pid2257749 00:32:34.572 Removing: /var/run/dpdk/spdk_pid2258973 00:32:34.572 Removing: /var/run/dpdk/spdk_pid2259916 00:32:34.572 Removing: /var/run/dpdk/spdk_pid2260200 00:32:34.572 Removing: /var/run/dpdk/spdk_pid2260423 00:32:34.572 Removing: /var/run/dpdk/spdk_pid2260751 00:32:34.572 Removing: /var/run/dpdk/spdk_pid2260952 00:32:34.572 Removing: /var/run/dpdk/spdk_pid2261111 00:32:34.572 Removing: /var/run/dpdk/spdk_pid2261272 00:32:34.572 Removing: /var/run/dpdk/spdk_pid2261451 00:32:34.572 Removing: /var/run/dpdk/spdk_pid2262046 00:32:34.572 Removing: /var/run/dpdk/spdk_pid2264579 00:32:34.572 Removing: /var/run/dpdk/spdk_pid2264769 00:32:34.829 Removing: /var/run/dpdk/spdk_pid2265069 00:32:34.829 Removing: /var/run/dpdk/spdk_pid2265118 00:32:34.829 Removing: /var/run/dpdk/spdk_pid2265522 00:32:34.829 Removing: /var/run/dpdk/spdk_pid2265656 00:32:34.829 Removing: /var/run/dpdk/spdk_pid2265970 00:32:34.829 Removing: /var/run/dpdk/spdk_pid2266112 00:32:34.829 Removing: /var/run/dpdk/spdk_pid2266411 00:32:34.829 Removing: /var/run/dpdk/spdk_pid2266551 00:32:34.829 Removing: /var/run/dpdk/spdk_pid2266718 00:32:34.829 Removing: /var/run/dpdk/spdk_pid2266855 00:32:34.829 Removing: /var/run/dpdk/spdk_pid2267232 00:32:34.829 Removing: /var/run/dpdk/spdk_pid2267466 00:32:34.829 Removing: /var/run/dpdk/spdk_pid2267694 00:32:34.829 Removing: /var/run/dpdk/spdk_pid2267900 00:32:34.829 Removing: /var/run/dpdk/spdk_pid2268015 00:32:34.829 Removing: /var/run/dpdk/spdk_pid2268081 00:32:34.829 Removing: /var/run/dpdk/spdk_pid2268314 00:32:34.829 Removing: /var/run/dpdk/spdk_pid2268746 00:32:34.829 Removing: /var/run/dpdk/spdk_pid2269147 00:32:34.829 Removing: /var/run/dpdk/spdk_pid2269308 00:32:34.829 Removing: /var/run/dpdk/spdk_pid2269500 00:32:34.829 
Removing: /var/run/dpdk/spdk_pid2269731 00:32:34.829 Removing: /var/run/dpdk/spdk_pid2269869 00:32:34.829 Removing: /var/run/dpdk/spdk_pid2270033 00:32:34.829 Removing: /var/run/dpdk/spdk_pid2270185 00:32:34.829 Removing: /var/run/dpdk/spdk_pid2270453 00:32:34.829 Removing: /var/run/dpdk/spdk_pid2270597 00:32:34.829 Removing: /var/run/dpdk/spdk_pid2270752 00:32:34.829 Removing: /var/run/dpdk/spdk_pid2270903 00:32:34.829 Removing: /var/run/dpdk/spdk_pid2271179 00:32:34.829 Removing: /var/run/dpdk/spdk_pid2271320 00:32:34.829 Removing: /var/run/dpdk/spdk_pid2271479 00:32:34.829 Removing: /var/run/dpdk/spdk_pid2271621 00:32:34.829 Removing: /var/run/dpdk/spdk_pid2271906 00:32:34.829 Removing: /var/run/dpdk/spdk_pid2272047 00:32:34.829 Removing: /var/run/dpdk/spdk_pid2272201 00:32:34.829 Removing: /var/run/dpdk/spdk_pid2272347 00:32:34.829 Removing: /var/run/dpdk/spdk_pid2272629 00:32:34.829 Removing: /var/run/dpdk/spdk_pid2272778 00:32:34.829 Removing: /var/run/dpdk/spdk_pid2272931 00:32:34.829 Removing: /var/run/dpdk/spdk_pid2273084 00:32:34.829 Removing: /var/run/dpdk/spdk_pid2273355 00:32:34.829 Removing: /var/run/dpdk/spdk_pid2273498 00:32:34.829 Removing: /var/run/dpdk/spdk_pid2273658 00:32:34.829 Removing: /var/run/dpdk/spdk_pid2273807 00:32:34.829 Removing: /var/run/dpdk/spdk_pid2274072 00:32:34.829 Removing: /var/run/dpdk/spdk_pid2274224 00:32:34.829 Removing: /var/run/dpdk/spdk_pid2274379 00:32:34.829 Removing: /var/run/dpdk/spdk_pid2274526 00:32:34.829 Removing: /var/run/dpdk/spdk_pid2274803 00:32:34.829 Removing: /var/run/dpdk/spdk_pid2274955 00:32:34.829 Removing: /var/run/dpdk/spdk_pid2275120 00:32:34.829 Removing: /var/run/dpdk/spdk_pid2275258 00:32:34.829 Removing: /var/run/dpdk/spdk_pid2275538 00:32:34.829 Removing: /var/run/dpdk/spdk_pid2275686 00:32:34.829 Removing: /var/run/dpdk/spdk_pid2275849 00:32:34.829 Removing: /var/run/dpdk/spdk_pid2276027 00:32:34.829 Removing: /var/run/dpdk/spdk_pid2276230 00:32:34.829 Removing: /var/run/dpdk/spdk_pid2278424 00:32:34.829 Removing: /var/run/dpdk/spdk_pid2333901 00:32:34.829 Removing: /var/run/dpdk/spdk_pid2336561 00:32:34.829 Removing: /var/run/dpdk/spdk_pid2343537 00:32:34.829 Removing: /var/run/dpdk/spdk_pid2346891 00:32:34.829 Removing: /var/run/dpdk/spdk_pid2349397 00:32:34.829 Removing: /var/run/dpdk/spdk_pid2349872 00:32:34.829 Removing: /var/run/dpdk/spdk_pid2353706 00:32:34.829 Removing: /var/run/dpdk/spdk_pid2353821 00:32:34.829 Removing: /var/run/dpdk/spdk_pid2354375 00:32:34.829 Removing: /var/run/dpdk/spdk_pid2355053 00:32:34.829 Removing: /var/run/dpdk/spdk_pid2355726 00:32:34.829 Removing: /var/run/dpdk/spdk_pid2356145 00:32:34.829 Removing: /var/run/dpdk/spdk_pid2356147 00:32:34.829 Removing: /var/run/dpdk/spdk_pid2356304 00:32:34.829 Removing: /var/run/dpdk/spdk_pid2356428 00:32:34.829 Removing: /var/run/dpdk/spdk_pid2356434 00:32:34.829 Removing: /var/run/dpdk/spdk_pid2357113 00:32:34.829 Removing: /var/run/dpdk/spdk_pid2357825 00:32:34.829 Removing: /var/run/dpdk/spdk_pid2358453 00:32:34.829 Removing: /var/run/dpdk/spdk_pid2359370 00:32:34.829 Removing: /var/run/dpdk/spdk_pid2359497 00:32:34.829 Removing: /var/run/dpdk/spdk_pid2359638 00:32:34.829 Removing: /var/run/dpdk/spdk_pid2360681 00:32:34.829 Removing: /var/run/dpdk/spdk_pid2361555 00:32:34.829 Removing: /var/run/dpdk/spdk_pid2367108 00:32:34.829 Removing: /var/run/dpdk/spdk_pid2367331 00:32:34.829 Removing: /var/run/dpdk/spdk_pid2370006 00:32:34.829 Removing: /var/run/dpdk/spdk_pid2373774 00:32:34.829 Removing: /var/run/dpdk/spdk_pid2376005 00:32:34.829 
Removing: /var/run/dpdk/spdk_pid2382506 00:32:34.829 Removing: /var/run/dpdk/spdk_pid2387904 00:32:34.829 Removing: /var/run/dpdk/spdk_pid2389258 00:32:34.829 Removing: /var/run/dpdk/spdk_pid2389940 00:32:34.829 Removing: /var/run/dpdk/spdk_pid2401046 00:32:34.829 Removing: /var/run/dpdk/spdk_pid2403290 00:32:34.829 Removing: /var/run/dpdk/spdk_pid2406112 00:32:34.829 Removing: /var/run/dpdk/spdk_pid2407335 00:32:34.829 Removing: /var/run/dpdk/spdk_pid2408702 00:32:34.829 Removing: /var/run/dpdk/spdk_pid2408972 00:32:34.829 Removing: /var/run/dpdk/spdk_pid2409127 00:32:34.829 Removing: /var/run/dpdk/spdk_pid2409274 00:32:34.829 Removing: /var/run/dpdk/spdk_pid2409872 00:32:34.829 Removing: /var/run/dpdk/spdk_pid2411354 00:32:34.829 Removing: /var/run/dpdk/spdk_pid2412249 00:32:34.829 Removing: /var/run/dpdk/spdk_pid2412701 00:32:34.829 Removing: /var/run/dpdk/spdk_pid2416320 00:32:34.829 Removing: /var/run/dpdk/spdk_pid2419766 00:32:34.829 Removing: /var/run/dpdk/spdk_pid2423439 00:32:34.829 Removing: /var/run/dpdk/spdk_pid2447401 00:32:34.829 Removing: /var/run/dpdk/spdk_pid2450168 00:32:34.829 Removing: /var/run/dpdk/spdk_pid2454192 00:32:34.829 Removing: /var/run/dpdk/spdk_pid2455673 00:32:34.829 Removing: /var/run/dpdk/spdk_pid2456925 00:32:34.829 Removing: /var/run/dpdk/spdk_pid2459503 00:32:34.829 Removing: /var/run/dpdk/spdk_pid2461963 00:32:34.829 Removing: /var/run/dpdk/spdk_pid2466279 00:32:34.829 Removing: /var/run/dpdk/spdk_pid2466286 00:32:34.829 Removing: /var/run/dpdk/spdk_pid2469217 00:32:34.829 Removing: /var/run/dpdk/spdk_pid2469360 00:32:34.829 Removing: /var/run/dpdk/spdk_pid2469503 00:32:34.829 Removing: /var/run/dpdk/spdk_pid2469824 00:32:34.829 Removing: /var/run/dpdk/spdk_pid2469902 00:32:34.829 Removing: /var/run/dpdk/spdk_pid2470884 00:32:34.829 Removing: /var/run/dpdk/spdk_pid2472224 00:32:34.829 Removing: /var/run/dpdk/spdk_pid2473456 00:32:34.829 Removing: /var/run/dpdk/spdk_pid2474667 00:32:34.829 Removing: /var/run/dpdk/spdk_pid2475884 00:32:34.829 Removing: /var/run/dpdk/spdk_pid2477098 00:32:34.829 Removing: /var/run/dpdk/spdk_pid2480984 00:32:34.829 Removing: /var/run/dpdk/spdk_pid2481324 00:32:34.829 Removing: /var/run/dpdk/spdk_pid2482757 00:32:34.829 Removing: /var/run/dpdk/spdk_pid2483474 00:32:34.829 Removing: /var/run/dpdk/spdk_pid2487898 00:32:35.087 Removing: /var/run/dpdk/spdk_pid2489965 00:32:35.087 Removing: /var/run/dpdk/spdk_pid2493498 00:32:35.087 Removing: /var/run/dpdk/spdk_pid2497199 00:32:35.087 Removing: /var/run/dpdk/spdk_pid2500744 00:32:35.087 Removing: /var/run/dpdk/spdk_pid2501167 00:32:35.087 Removing: /var/run/dpdk/spdk_pid2501704 00:32:35.087 Removing: /var/run/dpdk/spdk_pid2502126 00:32:35.087 Removing: /var/run/dpdk/spdk_pid2502663 00:32:35.087 Removing: /var/run/dpdk/spdk_pid2503147 00:32:35.087 Removing: /var/run/dpdk/spdk_pid2503698 00:32:35.087 Removing: /var/run/dpdk/spdk_pid2504253 00:32:35.087 Removing: /var/run/dpdk/spdk_pid2506910 00:32:35.087 Removing: /var/run/dpdk/spdk_pid2507060 00:32:35.087 Removing: /var/run/dpdk/spdk_pid2510920 00:32:35.087 Removing: /var/run/dpdk/spdk_pid2511100 00:32:35.087 Removing: /var/run/dpdk/spdk_pid2512745 00:32:35.087 Removing: /var/run/dpdk/spdk_pid2517996 00:32:35.087 Removing: /var/run/dpdk/spdk_pid2518001 00:32:35.087 Removing: /var/run/dpdk/spdk_pid2521449 00:32:35.087 Removing: /var/run/dpdk/spdk_pid2522889 00:32:35.087 Removing: /var/run/dpdk/spdk_pid2524341 00:32:35.087 Removing: /var/run/dpdk/spdk_pid2525229 00:32:35.087 Removing: /var/run/dpdk/spdk_pid2526672 00:32:35.087 
Removing: /var/run/dpdk/spdk_pid2527557 00:32:35.087 Removing: /var/run/dpdk/spdk_pid2532958 00:32:35.087 Removing: /var/run/dpdk/spdk_pid2533362 00:32:35.087 Removing: /var/run/dpdk/spdk_pid2533764 00:32:35.087 Removing: /var/run/dpdk/spdk_pid2535244 00:32:35.087 Removing: /var/run/dpdk/spdk_pid2535657 00:32:35.087 Removing: /var/run/dpdk/spdk_pid2536062 00:32:35.087 Clean 00:32:35.087 killing process with pid 2227888 00:32:43.200 killing process with pid 2227883 00:32:43.200 killing process with pid 2227886 00:32:43.200 killing process with pid 2227884 00:32:43.200 04:05:02 -- common/autotest_common.sh@1436 -- # return 0 00:32:43.200 04:05:02 -- spdk/autotest.sh@387 -- # timing_exit post_cleanup 00:32:43.200 04:05:02 -- common/autotest_common.sh@718 -- # xtrace_disable 00:32:43.200 04:05:02 -- common/autotest_common.sh@10 -- # set +x 00:32:43.200 04:05:02 -- spdk/autotest.sh@389 -- # timing_exit autotest 00:32:43.200 04:05:02 -- common/autotest_common.sh@718 -- # xtrace_disable 00:32:43.200 04:05:02 -- common/autotest_common.sh@10 -- # set +x 00:32:43.467 04:05:02 -- spdk/autotest.sh@390 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:32:43.467 04:05:02 -- spdk/autotest.sh@392 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:32:43.467 04:05:02 -- spdk/autotest.sh@392 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:32:43.467 04:05:02 -- spdk/autotest.sh@394 -- # hash lcov 00:32:43.467 04:05:02 -- spdk/autotest.sh@394 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:32:43.467 04:05:02 -- spdk/autotest.sh@396 -- # hostname 00:32:43.467 04:05:02 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-gp-11 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:32:43.467 geninfo: WARNING: invalid characters removed from testname! 
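The coverage steps around this point are mechanical: the capture just above records this run's counters, and the commands that follow merge them with the pre-test baseline and strip everything that is not SPDK source. Condensed, with $SPDK_DIR standing in for the workspace path and only the branch/function rc switches shown:

RC='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
lcov $RC --no-external -q -c -d "$SPDK_DIR" -t "$(hostname)" -o cov_test.info   # capture this run
lcov $RC -q -a cov_base.info -a cov_test.info -o cov_total.info                 # fold in the baseline
for pat in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
    lcov $RC -q -r cov_total.info "$pat" -o cov_total.info                      # drop out-of-tree code
done
rm -f cov_base.info cov_test.info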
00:33:10.066 04:05:27 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:33:13.351 04:05:31 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:33:15.881 04:05:34 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:33:18.415 04:05:37 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:33:21.706 04:05:39 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:33:24.292 04:05:42 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:33:26.862 04:05:45 -- spdk/autotest.sh@403 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:33:26.862 04:05:45 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:26.862 04:05:45 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:33:26.862 04:05:45 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:26.862 04:05:45 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:26.862 04:05:45 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:26.862 04:05:45 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:26.862 04:05:45 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:26.862 04:05:45 -- paths/export.sh@5 -- $ export PATH 00:33:26.862 04:05:45 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:26.862 04:05:45 -- common/autobuild_common.sh@434 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:33:26.862 04:05:45 -- common/autobuild_common.sh@435 -- $ date +%s 00:33:26.862 04:05:45 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1720922745.XXXXXX 00:33:26.862 04:05:45 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1720922745.JJvaJZ 00:33:26.862 04:05:45 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]] 00:33:26.862 04:05:45 -- common/autobuild_common.sh@441 -- $ '[' -n v23.11 ']' 00:33:26.862 04:05:45 -- common/autobuild_common.sh@442 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:33:26.862 04:05:45 -- common/autobuild_common.sh@442 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk' 00:33:26.862 04:05:45 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:33:26.862 04:05:45 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:33:26.862 04:05:45 -- common/autobuild_common.sh@451 -- $ get_config_params 00:33:26.862 04:05:45 -- common/autotest_common.sh@387 -- $ xtrace_disable 00:33:26.862 04:05:45 -- common/autotest_common.sh@10 -- $ set +x 00:33:26.862 04:05:45 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build' 00:33:26.862 04:05:45 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j48 00:33:26.862 04:05:45 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:33:26.862 04:05:45 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:33:26.862 04:05:45 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:33:26.862 04:05:45 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:33:26.862 04:05:45 -- 
spdk/autopackage.sh@19 -- $ timing_finish 00:33:26.862 04:05:45 -- common/autotest_common.sh@724 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:33:26.862 04:05:45 -- common/autotest_common.sh@725 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:33:26.862 04:05:45 -- common/autotest_common.sh@727 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:33:26.862 04:05:45 -- spdk/autopackage.sh@20 -- $ exit 0 00:33:26.862 + [[ -n 2172907 ]] 00:33:26.862 + sudo kill 2172907 00:33:26.872 [Pipeline] } 00:33:26.890 [Pipeline] // stage 00:33:26.895 [Pipeline] } 00:33:26.911 [Pipeline] // timeout 00:33:26.916 [Pipeline] } 00:33:26.932 [Pipeline] // catchError 00:33:26.937 [Pipeline] } 00:33:26.954 [Pipeline] // wrap 00:33:26.960 [Pipeline] } 00:33:26.975 [Pipeline] // catchError 00:33:26.983 [Pipeline] stage 00:33:26.985 [Pipeline] { (Epilogue) 00:33:26.999 [Pipeline] catchError 00:33:27.000 [Pipeline] { 00:33:27.014 [Pipeline] echo 00:33:27.016 Cleanup processes 00:33:27.021 [Pipeline] sh 00:33:27.305 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:33:27.305 2548198 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:33:27.319 [Pipeline] sh 00:33:27.601 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:33:27.602 ++ grep -v 'sudo pgrep' 00:33:27.602 ++ awk '{print $1}' 00:33:27.602 + sudo kill -9 00:33:27.602 + true 00:33:27.612 [Pipeline] sh 00:33:27.895 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:33:37.894 [Pipeline] sh 00:33:38.179 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:33:38.179 Artifacts sizes are good 00:33:38.195 [Pipeline] archiveArtifacts 00:33:38.203 Archiving artifacts 00:33:38.430 [Pipeline] sh 00:33:38.709 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:33:38.725 [Pipeline] cleanWs 00:33:38.736 [WS-CLEANUP] Deleting project workspace... 00:33:38.736 [WS-CLEANUP] Deferred wipeout is used... 00:33:38.743 [WS-CLEANUP] done 00:33:38.745 [Pipeline] } 00:33:38.761 [Pipeline] // catchError 00:33:38.774 [Pipeline] sh 00:33:39.055 + logger -p user.info -t JENKINS-CI 00:33:39.064 [Pipeline] } 00:33:39.083 [Pipeline] // stage 00:33:39.089 [Pipeline] } 00:33:39.109 [Pipeline] // node 00:33:39.117 [Pipeline] End of Pipeline 00:33:39.155 Finished: SUCCESS